<s>[INST] Minimally Supervised Learning of Affective Events Using Discourse Relations [/INST] Affective events BIBREF0 are events that typically affect people in positive or negative ways. For example, getting money and playing sports are usually positive to the experiencers; catching a cold and losing one's wallet are negative. Understanding affective events is important to various natural language processing (NLP) applications such as dialogue systems BIBREF1, question-answering systems BIBREF2, and humor recognition BIBREF3. In this paper, we work on recognizing the polarity of an affective event, which is represented by a score ranging from $-1$ (negative) to $1$ (positive). Learning affective events is challenging because, as the examples above suggest, the polarity of an event is not necessarily predictable from its constituent words. Combined with the unbounded combinatorial nature of language, the non-compositionality of affective polarity entails the need for large amounts of world knowledge, which can hardly be learned from small annotated data. In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., "to be glad" is positive). Suppose that events $x_1$ and $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. The fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession), although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event. We trained the models using a Japanese web corpus. Given the minimum amount of supervision, they performed well. In addition, the combination of annotated and unannotated data yielded a gain over a purely supervised baseline when labeled data were small. Learning affective events is closely related to sentiment analysis. Whereas sentiment analysis usually focuses on the polarity of what are described (e.g., movies), we work on how people are typically affected by events. In sentiment analysis, much attention has been paid to compositionality. Word-level polarity BIBREF5, BIBREF6, BIBREF7 and the roles of negation and intensification BIBREF8, BIBREF6, BIBREF9 are among the most important topics. In contrast, we are more interested in recognizing the sentiment polarity of an event that pertains to commonsense knowledge (e.g., getting money and catching a cold). Label propagation from seed instances is a common approach to inducing sentiment polarities. While BIBREF5 and BIBREF10 worked on word- and phrase-level polarities, BIBREF0 dealt with event-level polarities. BIBREF5 and BIBREF10 linked instances using co-occurrence information and/or phrase-level coordinations (e.g., "A and B" and "A but B"). We shift our scope to event pairs that are more complex than phrase pairs, and consequently exploit discourse connectives as event-level counterparts of phrase-level conjunctions. BIBREF0 constructed a network of
events using word embedding-derived similarities. Compared with this method, our discourse relation-based linking of events is much simpler and more intuitive. Some previous studies made use of document structure to understand sentiment. BIBREF11 proposed a sentiment-specific pre-training strategy using unlabeled dialog data (tweet-reply pairs). BIBREF12 proposed a method of building a polarity-tagged corpus (ACP Corpus). They automatically gathered sentences that had positive or negative opinions utilizing HTML layout structures in addition to linguistic patterns. Our method depends only on raw texts and thus has wider applicability. Our goal is to learn the polarity function $p(x)$, which predicts the sentiment polarity score of an event $x$. We approximate $p(x)$ by a neural network of the form $p(x) = {\rm tanh}({\rm Linear}({\rm Encoder}(x)))$: ${\rm Encoder}$ outputs a vector representation of the event $x$, ${\rm Linear}$ is a fully-connected layer that transforms the representation into a scalar, and ${\rm tanh}$ is the hyperbolic tangent, which transforms the scalar into a score ranging from $-1$ to $1$. In Section SECREF21, we consider two specific implementations of ${\rm Encoder}$. Our method requires a very small seed lexicon and a large raw corpus. We assume that we can automatically extract discourse-tagged event pairs $(x_{i1}, x_{i2})$ $(i = 1, \cdots)$ from the raw corpus. We refer to $x_{i1}$ and $x_{i2}$ as the former and latter events, respectively. As shown in Figure FIGREF1, we limit our scope to two discourse relations: Cause and Concession. The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event. We expect the model to automatically learn complex phenomena through label propagation. Based on the availability of scores and the types of discourse relations, we classify the extracted event pairs into the following three types. (AL) The seed lexicon matches (1) the latter event but (2) not the former event, and (3) their discourse relation type is Cause or Concession. If the discourse relation type is Cause, the former event is given the same score as the latter. Likewise, if the discourse relation type is Concession, the former event is given the opposite of the latter's score. These scores are used as reference scores during training. (CA) The seed lexicon matches neither the former nor the latter event, and their discourse relation type is Cause. We assume the two events have the same polarities. (CO) The seed lexicon matches neither the former nor the latter event, and their discourse relation type is Concession. We assume the two events have the reversed polarities. Using the AL, CA, and CO data, we optimize the parameters of the polarity function $p(x)$. We define a loss function for each of the three types of event pairs and sum up the multiple loss functions. We use mean squared error to construct the loss functions. For the AL data, the loss function is defined over the reference scores: $x_{i1}$ and $x_{i2}$ are the $i$-th pair of the AL data, $r_{i1}$ and $r_{i2}$ are the automatically assigned scores of $x_{i1}$ and $x_{i2}$, respectively, $N_{\rm AL}$ is the total number of AL pairs, and $\lambda_{\rm AL}$ is a hyperparameter. For the CA data, the loss function is defined over the pairs themselves: $y_{i1}$ and $y_{i2}$ are the $i$-th pair of the CA data, $N_{\rm CA}$ is the total number of CA pairs, and $\lambda_{\rm CA}$ and $\mu$ are hyperparameters. The first term makes the scores of the two events closer, while the second term prevents the scores from shrinking to zero. The loss function for the CO data is defined analogously; the difference is that the first term makes the scores of the two events distant from each other.
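The display equations for these objectives are not reproduced in this text, so the following PyTorch sketch should be read as one plausible realization rather than the authors' exact formulation: the placement of the weights $\lambda_{\rm AL}$, $\lambda_{\rm CA}$, $\lambda_{\rm CO}$, and $\mu$, and the concrete form of the "pull together", "push apart", and "keep away from zero" terms are assumptions based on the prose; all names are illustrative.

```python
# A minimal PyTorch sketch of the three pairwise objectives described above.
# The exact weighting is an assumption; `p1`/`p2` stand for the outputs of any
# Encoder + Linear + tanh polarity scorer applied to the former/latter events.
import torch


def al_loss(p1, p2, r1, r2, lambda_al=1.0):
    """AL pairs: regress both events toward their automatically assigned scores."""
    return (lambda_al * (p1 - r1) ** 2 + (p2 - r2) ** 2).mean()


def ca_loss(p1, p2, lambda_ca=1.0, mu=0.1):
    """CA pairs: pull the two scores together; keep them away from zero."""
    closeness = (p1 - p2) ** 2
    anti_shrink = -(p1 ** 2 + p2 ** 2)   # penalize scores collapsing to 0
    return (lambda_ca * closeness + mu * anti_shrink).mean()


def co_loss(p1, p2, lambda_co=1.0, mu=0.1):
    """CO pairs: push the two scores toward opposite signs; keep them away from zero."""
    distance = (p1 + p2) ** 2            # reversed polarities => p1 close to -p2
    anti_shrink = -(p1 ** 2 + p2 ** 2)
    return (lambda_co * distance + mu * anti_shrink).mean()


if __name__ == "__main__":
    # Toy check with random scores standing in for a real Encoder.
    p_former, p_latter = torch.tanh(torch.randn(8)), torch.tanh(torch.randn(8))
    refs = torch.sign(torch.randn(8))
    total = (al_loss(p_former, p_latter, refs, refs)
             + ca_loss(p_former, p_latter)
             + co_loss(p_former, p_latter))
    print(float(total))
```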
As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. To extract event pairs tagged with discourse relations, we used the Japanese dependency parser KNP and in-house postprocessing scripts BIBREF14. KNP used hand-written rules to segment each sentence into what we conventionally called clauses (mostly consecutive text chunks), each of which contained one main predicate. KNP also identified the discourse relations of event pairs if explicit discourse connectives BIBREF4 such as "because" and "in spite of" were present. We treated Cause/Reason and Condition in the original tagset BIBREF15 as Cause, and Concession as Concession, respectively. Here is an example of event pair extraction: "Because I made a serious mistake, I got fired." From this sentence, we extracted the event pair of "I make a serious mistake" and "I get fired" and tagged it with Cause. We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. From the corpus of about 100 million sentences, we obtained 14 million event pairs for AL, 41 million for CA, and 6 million for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that each was five times larger than AL. The results are shown in Table TABREF16. We used the latest version of the ACP Corpus BIBREF12 for evaluation; it was used for semi-supervised training as well. Extracted from Japanese websites using HTML layouts and linguistic patterns, the dataset covered various genres. For example, the following two sentences were labeled positive and negative, respectively: "The work is easy." "There is no parking lot." Although the ACP corpus was originally constructed in the context of sentiment analysis, we found that it could roughly be regarded as a collection of affective events. We parsed each sentence and extracted the last clause in it. The train/dev/test split of the data is shown in Table TABREF19. The objective function for supervised training is defined in the same manner, where $v_i$ is the $i$-th event, $R_i$ is the reference score of $v_i$, and $N_{\rm ACP}$ is the number of the events of the ACP Corpus. To optimize the hyperparameters, we used the dev set of the ACP Corpus. For the evaluation, we used the test set of the ACP Corpus. The model output was classified as positive if $p(x) > 0$ and negative if $p(x) \le 0$. As for ${\rm Encoder}$, we compared two types of neural networks: BiGRU and BERT. GRU BIBREF16 is a recurrent neural network sequence encoder. BiGRU reads an input sequence forward and backward, and the output is the concatenation of the final forward and backward hidden states. BERT BIBREF17 is a pre-trained multi-layer bidirectional Transformer BIBREF18 encoder; its output is the final hidden state corresponding to the special classification tag ([CLS]). For the details of ${\rm Encoder}$, see Sections SECREF30. We trained the model with the following four combinations of the datasets: AL, AL+CA+CO (two proposed models), ACP (supervised), and ACP+AL+CA+CO (semi-supervised). The corresponding objective functions were $\mathcal{L}_{\rm AL}$, $\mathcal{L}_{\rm AL} + \mathcal{L}_{\rm CA} + \mathcal{L}_{\rm CO}$, $\mathcal{L}_{\rm ACP}$, and $\mathcal{L}_{\rm ACP} + \mathcal{L}_{\rm AL} + \mathcal{L}_{\rm CA} + \mathcal{L}_{\rm CO}$. Table TABREF23 shows accuracy. As the Random baseline suggests, positive and negative labels were distributed evenly. The Random+Seed baseline made use of the seed lexicon and output the corresponding label (or the reverse of it for negation) if the event's predicate is in the seed lexicon.
We can see that the seed lexicon itself had practically no impact on prediction. The models in the top block performed considerably better than the random baselines. The performance gaps with their semi-supervised counterparts, shown in the middle block, were less than 7 points. This demonstrates the effectiveness of discourse relation-based label propagation. Comparing the model variants, we obtained the highest score with the BiGRU encoder trained with the AL+CA+CO dataset. BERT was competitive, but its performance went down if CA and CO were used in addition to AL. We conjecture that BERT was more sensitive to noises, which were found more frequently in CA and CO. Contrary to our expectations, supervised models (ACP) outperformed semi-supervised models (ACP+AL+CA+CO). This suggests that the training set of 0.6 million events is sufficiently large for training the models. For comparison, we trained the models with a subset (6,000 events) of the ACP dataset. As the results shown in Table TABREF24 demonstrate, our method is effective when labeled data are small. The result of hyperparameter optimization for the BiGRU encoder was as follows. As the CA and CO pairs were equal in size (Table TABREF16), $\lambda_{\rm CA}$ and $\lambda_{\rm CO}$ were comparable values; $\lambda_{\rm CA}$ was about one-third of $\lambda_{\rm CO}$, and this indicated that the CA pairs were noisier than the CO pairs. A major type of CA pairs that violates our assumption was in the form of "problem (negative) causes solution (positive)", e.g., "there is a bad point" causes "I try to improve it". The polarities of the two events were reversed in spite of the Cause relation, and this lowered the value of $\lambda_{\rm CA}$. Some examples of model outputs are shown in Table TABREF26. The first two examples suggest that our model successfully learned negation without explicit supervision. Similarly, the next two examples differ only in voice, but the model correctly recognized that they had opposite polarities. The last two examples share the predicate "drop", and only the objects are different; the second event (lit. "drop one's shoulders") is an idiom that expresses a disappointed feeling. The examples demonstrate that our model correctly learned non-compositional expressions. In this paper, we proposed to use discourse relations to effectively propagate polarities of affective events from seeds. Experiments show that, even with a minimal amount of supervision, the proposed method performed well. Although event pairs linked by discourse analysis are shown to be useful, they nevertheless contain noises. Adding linguistically motivated filtering rules would help improve the performance. We thank Nobuhiro Kaji for providing the ACP Corpus, and Hirokazu Kiyomaru and Yudai Kishimoto for their help in extracting event pairs. This work was partially supported by Yahoo Japan Corporation. The seed lexicon consists of the following positive predicates: rejoice, be glad, be pleasant, be happy, be impressed, be excited, feel nostalgic, like, respect, be relieved, admire, be calm, be satisfied, be healed, and be refreshed; and the following negative predicates: get angry, be sad, be lonely, be scared, feel anxious, be embarrassed, hate, feel down, be bored, feel hopeless, have a hard time, have trouble, be depressed, be worried, and be sorry. For the BiGRU encoder, the dimension of the embedding layer was 256. The embedding layer was initialized with the word embeddings pretrained using the Web corpus. The input sentences were segmented into words by the morphological analyzer Juman. The vocabulary size was 100,000. The number of hidden layers was 2. The dimension of hidden units was 256. The optimizer was Momentum SGD BIBREF21. The mini-batch size was 1,024. We ran 100 epochs and selected the snapshot that achieved the highest score for the dev set.
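As an illustration of the BiGRU configuration just described (embedding size 256, two hidden layers, hidden size 256, linear layer plus tanh output), here is a minimal PyTorch sketch; it is not the authors' code, and the Juman segmentation and vocabulary handling are omitted.

```python
# Minimal sketch of a BiGRU-based polarity scorer with the hyperparameters
# reported above. All names are illustrative; tokenization is assumed to be done
# elsewhere and to yield word indices.
import torch
import torch.nn as nn


class BiGRUPolarityScorer(nn.Module):
    def __init__(self, vocab_size=100_000, emb_dim=256, hidden_dim=256, num_layers=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.bigru = nn.GRU(emb_dim, hidden_dim, num_layers=num_layers,
                            batch_first=True, bidirectional=True)
        # concatenation of final forward and backward states -> scalar score
        self.linear = nn.Linear(2 * hidden_dim, 1)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) word indices of an event's clause
        emb = self.embedding(token_ids)
        _, h_n = self.bigru(emb)            # h_n: (num_layers * 2, batch, hidden)
        h_fwd, h_bwd = h_n[-2], h_n[-1]     # final states of the last layer
        rep = torch.cat([h_fwd, h_bwd], dim=-1)
        return torch.tanh(self.linear(rep)).squeeze(-1)  # scores in (-1, 1)


if __name__ == "__main__":
    model = BiGRUPolarityScorer()
    dummy = torch.randint(0, 100_000, (4, 12))  # 4 events, 12 tokens each
    print(model(dummy).shape)                    # torch.Size([4])
```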
For the BERT encoder, we used a Japanese BERT model pretrained with Japanese Wikipedia. The input sentences were segmented into words by Juman, and words were broken into subwords by applying BPE BIBREF20. The vocabulary size was 32,000. The maximum length of an input sequence was 128. The number of hidden layers was 12, the dimension of hidden units was 768, and the number of self-attention heads was 12. The optimizer was Adam BIBREF19. The mini-batch size was 32. We ran 1 epoch. </s>
<s>[INST] PO-EMO: Conceptualization, Annotation, and Modeling of Aesthetic Emotions in German and English Poetry [/INST] 11em 111em 1111em Thomas Haider13 Steffen Eger2 Evgeny Kim3 Roman Klinger3 Winfried Menninghaus1 1Department of Language and Literature Max Planck Institute for Empirical Aesthetics 2NLLG Department of Computer Science Technische Universitat Darmstadt 3Institut fr Maschinelle Sprachverarbeitung University of Stuttgart thomashaider wmaempgde egeraiphestudarmstadtde romanklinger evgenykimimsunistuttgartde Most approaches to emotion analysis regarding social media literature news and other domains focus exclusively on basic emotion categories as defined by Ekman or Plutchik However art such as literature enables engagement in a broader range of more complex and subtle emotions that have been shown to also include mixed emotional responses We consider emotions as they are elicited in the reader rather than what is expressed in the text or intended by the author Thus we conceptualize a set of aesthetic emotions that are predictive of aesthetic appreciation in the reader and allow the annotation of multiple labels per line to capture mixed emotions within context We evaluate this novel setting in an annotation experiment both with carefully trained experts and via crowdsourcing Our annotation with experts leads to an acceptable agreement of kappa 70 resulting in a consistent dataset for future large scale analysis Finally we conduct first emotion classification experiments based on BERT showing that identifying aesthetic emotions is challenging in our data with up to 52 F1micro on the German subset Data and resources are available at httpsgithubcomtnhaiderpoetryemotion Emotion Aesthetic Emotions Literature Poetry Annotation Corpora Emotion Recognition MultiLabel Emotions are central to human experience creativity and behavior Models of affect and emotion both in psychology and natural language processing commonly operate on predefined categories designated either by continuous scales of eg Valence Arousal and Dominance BIBREF0 or discrete emotion labels which can also vary in intensity Discrete sets of emotions often have been motivated by theories of basic emotions as proposed by Ekman1992Anger Fear Joy Disgust Surprise Sadnessand Plutchik1991 who added Trust and Anticipation These categories are likely to have evolved as they motivate behavior that is directly relevant for survival However art reception typically presupposes a situation of safety and therefore offers special opportunities to engage in a broader range of more complex and subtle emotions These differences between reallife and art contexts have not been considered in natural language processing work so far To emotionally move readers is considered a prime goal of literature since Latin antiquity BIBREF1 BIBREF2 BIBREF3 Deeply moved readers shed tears or get chills and goosebumps even in lab settings BIBREF4 In cases like these the emotional response actually implies an aesthetic evaluation narratives that have the capacity to move readers are evaluated as good and powerful texts for this very reason Similarly feelings of suspense experienced in narratives not only respond to the trajectory of the plots content but are also directly predictive of aesthetic liking or disliking Emotions that exhibit this dual capacity have been defined as aesthetic emotions BIBREF2 Contrary to the negativity bias of classical emotion catalogues emotion terms used for aesthetic evaluation purposes include far more positive than 
negative emotions At the same time many overall positive aesthetic emotions encompass negative or mixed emotional ingredients BIBREF2 eg feelings of suspense include both hopeful and fearful anticipations For these reasons we argue that the analysis of literature with a focus on poetry should rely on specifically selected emotion items rather than on the narrow range of basic emotions only Our selection is based on previous research on this issue in psychological studies on art reception and specifically on poetry For instance knoop2016mapping found that Beauty is a major factor in poetry reception We primarily adopt and adapt emotion terms that schindler2017measuring have identified as aesthetic emotions in their study on how to measure and categorize such particular affective states Further we consider the aspect that when selecting specific emotion labels the perspective of annotators plays a major role Whether emotions are elicited in the reader expressed in the text or intended by the author largely changes the permissible labels For example feelings of Disgust or Love might be intended or expressed in the text but the text might still fail to elicit corresponding feelings as these concepts presume a strong reaction in the reader Our focus here was on the actual emotional experience of the readers rather than on hypothetical intentions of authors We opted for this reader perspective based on previous research in NLP BIBREF5 BIBREF6 and work in empirical aesthetics BIBREF7 that specifically measured the reception of poetry Our final set of emotion labels consists of BeautyJoy Sadness Uneasiness Vitality Suspense AweSublime Humor Annoyance and Nostalgia In addition to selecting an adapted set of emotions the annotation of poetry brings further challenges one of which is the choice of the appropriate unit of annotation Previous work considers words BIBREF8 BIBREF9 sentences BIBREF10 BIBREF11 utterances BIBREF12 sentence triples BIBREF13 or paragraphs BIBREF14 as the units of annotation For poetry reasonable units follow the logical document structure of poems ie verse line stanza and owing to its relative shortness the complete text The more coarsegrained the unit the more difficult the annotation is likely to be but the more it may also enable the annotation of emotions in context We find that annotating finegrained units lines that are hierarchically ordered within a larger context stanza poem caters to the specific structure of poems where emotions are regularly mixed and are more interpretable within the whole poem Consequently we allow the mixing of emotions already at line level through multilabel annotation The remainder of this paper includes 1 a report of the annotation process that takes these challenges into consideration 2 a description of our annotated corpora and 3 an implementation of baseline models for the novel task of aesthetic emotion annotation in poetry In a first study the annotators work on the annotations in a closely supervised fashion carefully reading each verse stanza and poem In a second study the annotations are performed via crowdsourcing within relatively short time periods with annotators not seeing the entire poem while reading the stanza Using these two settings we aim at obtaining a better understanding of the advantages and disadvantages of an expert vs crowdsourcing setting in this novel annotation task Particularly we are interested in estimating the potential of a crowdsourcing environment for the task of selfperceived emotion annotation in poetry 
given time and cost overhead associated with inhouse annotation process that usually involve training and close supervision of the annotators We provide the final datasets of German and English language poems annotated with reader emotions on verse level at httpsgithubcomtnhaiderpoetryemotion Natural language understanding research on poetry has investigated stylistic variation BIBREF15 BIBREF16 BIBREF17 with a focus on broadly accepted formal features such as meter BIBREF18 BIBREF19 BIBREF20 and rhyme BIBREF21 BIBREF22 as well as enjambement BIBREF23 BIBREF24 and metaphor BIBREF25 BIBREF26 Recent work has also explored the relationship of poetry and prose mainly on a syntactic level BIBREF27 BIBREF28 Furthermore poetry also lends itself well to semantic change analysis BIBREF29 BIBREF30 as linguistic invention BIBREF31 BIBREF32 and succinctness BIBREF33 are at the core of poetic production Corpusbased analysis of emotions in poetry has been considered but there is no work on German and little on English kao2015computational analyze English poems with word associations from the Harvard Inquirer and LIWC within the categories positivenegative outlook positivenegative emotion and physpsych wellbeing houfrank2015analyzing examine the binary sentiment polarity of Chinese poems with a weighted personalized PageRank algorithm barros2013automatic followed a tagging approach with a thesaurus to annotate words that are similar to the words Joy Anger Fear and Sadness moreover translating these from English to Spanish With these word lists they distinguish the categories Love Songs to Lisi Satire and PhilosophicalMoralReligious in Quevedos poetry Similarly alsharif2013emotion classify unique Arabic emotional text forms based on word unigrams Mohanty2018 create a corpus of 788 poems in the Indian Odia language annotate it on text poem level with binary negative and positive sentiment and are able to distinguish these with moderate success Sreeja2019 construct a corpus of 736 Indian language poems and annotate the texts on Ekmans six categories Love Courage They achieve a Fleiss Kappa of 48 In contrast to our work these studies focus on basic emotions and binary sentiment polarity only rather than addressing aesthetic emotions Moreover they annotate on the level of complete poems instead of finegrained verse and stanzalevel Emotion corpora have been created for different tasks and with different annotation strategies with different units of analysis and different foci of emotion perspective reader writer text Examples include the ISEAR dataset BIBREF34 documentlevel emotion annotation in children stories BIBREF10 and news headlines BIBREF35 sentencelevel and finegrained emotion annotation in literature by Kim2018 phrase and wordlevel We refer the interested reader to an overview paper on existing corpora BIBREF36 We are only aware of a limited number of publications which look in more depth into the emotion perspective buechelhahn2017emobank report on an annotation study that focuses both on writers and readers emotions associated with English sentences The results show that the reader perspective yields better interannotator agreement Yang2009 also study the difference between writer and reader emotions but not with a modeling perspective The authors find that positive reader emotions tend to be linked to positive writer emotions in online blogs The task of emotion classification has been tackled before using rulebased and machine learning approaches Rulebased emotion classification typically relies 
on lexical resources of emotionally charged words BIBREF9 BIBREF37 BIBREF8 and offers a straightforward and transparent way to detect emotions in text In contrast to rulebased approaches current models for emotion classification are often based on neural networks and commonly use word embeddings as features Schuff2017 applied models from the classes of CNN BiLSTM and LSTM and compare them to linear classifiers SVM and MaxEnt where the BiLSTM shows best results with the most balanced precision and recall AbdulMageed2017 claim the highest F1 with gated recurrent unit networks BIBREF38 for Plutchiks emotion model More recently shared tasks on emotion analysis BIBREF39 BIBREF40 triggered a set of more advanced deep learning approaches including BERT BIBREF41 and other transfer learning methods BIBREF42 For our annotation and modeling studies we build on top of two poetry corpora in English and German which we refer to as POEMO This collection represents important contributions to the literary canon over the last 400 years We make this resource available in TEI P5 XML and an easytouse tab separated format Table TABREF9 shows a size overview of these data sets Figure FIGREF8 shows the distribution of our data over time via density plots Note that both corpora show a relative underrepresentation before the onset of the romantic period around 1750 The German corpus contains poems available from the website lyrikantikoerperchende ANTIK which provides a platform for students to upload essays about poems The data is available in the Hypertext Markup Language with clean line and stanza segmentation ANTIK also has extensive metadata including author names years of publication numbers of sentences poetic genres and literary periods that enable us to gauge the distribution of poems according to periods The 158 poems we consider 731 stanzas are dispersed over 51 authors and the New High German timeline 15751936 AD This data has been annotated besides emotions for meter rhythm and rhyme in other studies BIBREF22 BIBREF43 The English corpus contains 64 poems of popular English writers It was partly collected from Project Gutenberg with the GutenTag tool and in addition includes a number of hand selected poems from the modern period and represents a cross section of popular English poets We took care to include a number of female authors who would have been underrepresented in a uniform sample Time stamps in the corpus are organized by the birth year of the author as assigned in Project Gutenberg In the following we will explain how we compiled and annotated three data subsets namely 1 48 German poems with gold annotation These were originally annotated by three annotators The labels were then aggregated with majority voting and based on discussions among the annotators Finally they were curated to only include one gold annotation 2 The remaining 110 German poems that are used to compute the agreement in table TABREF20 and 3 64 English poems contain the raw annotation from two annotators We report the genesis of our annotation guidelines including the emotion classes With the intention to provide a language resource for the computational analysis of emotion in poetry we aimed at maximizing the consistency of our annotation while doing justice to the diversity of poetry We iteratively improved the guidelines and the annotation workflow by annotating in batches cleaning the class set and the compilation of a gold standard The final overall cost of producing this expert annotated dataset amounts to 
approximately 3500 The annotation process was initially conducted by three female university students majoring in linguistics andor literary studies which we refer to as our expert annotators We used the INCePTION platform for annotation BIBREF44 Starting with the German poems we annotated in batches of about 16 and later in some cases 32 poems After each batch we computed agreement statistics including heatmaps and provided this feedback to the annotators For the first three batches the three annotators produced a gold standard using a majority vote for each line Where this was inconclusive they developed an adjudicated annotation based on discussion Where necessary we encouraged the annotators to aim for more consistency as most of the frequent switching of emotions within a stanza could not be reconstructed or justified In poems emotions are regularly mixed already on line level and are more interpretable within the whole poem We therefore annotate lines hierarchically within the larger context of stanzas and the whole poem Hence we instruct the annotators to read a complete stanza or full poem and then annotate each line in the context of its stanza To reflect on the emotional complexity of poetry we allow a maximum of two labels per line while avoiding heavy label fluctuations by encouraging annotators to reflect on their feelings to avoid empty annotations Rather they were advised to use fewer labels and more consistent annotation This additional constraint is necessary to avoid wild nonreconstructable or nonjustified annotations All subsequent batches all except the first three were only annotated by two out of the three initial annotators coincidentally those two who had the lowest initial agreement with each other We asked these two experts to use the generated gold standard 48 poems majority votes of 3 annotators plus manual curation as a reference if in doubt annotate according to the gold standard This eliminated some systematic differences between them and markedly improved the agreement levels roughly from 0305 Cohens kappa in the first three batches to around 0608 kappa for all subsequent batches This annotation procedure relaxes the reader perspective as we encourage annotators if in doubt to annotate how they think the other annotators would annotate However we found that this formulation improves the usability of the data and leads to a more consistent annotation We opt for measuring the reader perspective rather than the text surface or authors intent To closer define and support conceptualizing our labels we use particular items as they are used in psychological selfevaluations These items consist of adjectives verbs or short phrases We build on top of schindler2017measuring who proposed 43 items that were then grouped by a factor analysis based on selfevaluations of participants The resulting factors are shown in Table TABREF17 We attempt to cover all identified factors and supplement with basic emotions BIBREF46 BIBREF47 where possible We started with a larger set of labels to then delete and substitute tone down labels during the initial annotation process to avoid infrequent classes and inconsistencies Further we conflate labels if they show considerable confusion with each other These iterative improvements particularly affected Confusion Boredom and Other that were very infrequently annotated and had little agreement among annotators kappa 2 For German we also removed Nostalgia kappa 218 after gold standard creation but after consideration added it back for 
English then achieving agreement Nostalgia is still available in the gold standard then with a second label BeautyJoy or Sadness to keep consistency However Confusion Boredom and Other are not available in any subcorpus Our final set consists of nine classes ie in order of frequency BeautyJoy Sadness Uneasiness Vitality Suspense AweSublime Humor Annoyance and Nostalgia In the following we describe the labels and give further details on the aggregation process Annoyance annoys meangers mefelt frustrated Annoyance implies feeling annoyed frustrated or even angry while reading the linestanza We include the class Anger here as this was found to be too strong in intensity AweSublime found it overwhelmingsense of greatness AweSublime implies being overwhelmed by the linestanza ie if one gets the impression of facing something sublime or if the linestanza inspires one with awe or that the expression itself is sublime Such emotions are often associated with subjects like god death life truth etc The term Sublime originated with kant2000critique as one of the first aesthetic emotion terms Awe is a more common English term BeautyJoy found it beautifulpleasingmakes me happyjoyful kant2000critique already spoke of a feeling of beauty and it should be noted that it is not a merely pleasing emotion Therefore in our pilot annotations Beauty and Joy were separate labels However schindler2017measuring found that items for Beauty and Joy load into the same factors Furthermore our pilot annotations revealed while Beauty is the more dominant and frequent feeling both labels regularly accompany each other and they often get confused across annotators Therefore we add Joy to form an inclusive label BeautyJoy that increases annotation consistency Humor found it funnyamusing Implies feeling amused by the linestanza or if it makes one laugh Nostalgia makes me nostalgic Nostalgia is defined as a sentimental longing for things persons or situations in the past It often carries both positive and negative feelings However since this label is quite infrequent and not available in all subsets of the data we annotated it with an additional BeautyJoy or Sadness label to ensure annotation consistency Sadness makes me sadtouches me If the linestanza makes one feel sad It also includes a more general being touched moved Suspense found it grippingsparked my interest Choose Suspense if the linestanza keeps one in suspense if the linestanza excites one or triggers ones curiosity We further removed Anticipation from SuspenseAnticipation as Anticipation appeared to us as being a more cognitive prediction whereas Suspense is a far more straightforward emotion item Uneasiness found it uglyunsettlingdisturbing frighteningdistasteful This label covers situations when one feels discomfort about the linestanza if the linestanza feels distastefulugly unsettlingdisturbing or frightens one The labels Ugliness and Disgust were conflated into Uneasiness as both are seldom felt in poetry being inadequatetoo stronghigh in arousal and typically lead to Uneasiness Vitality found it invigoratingspurs me oninspires me This label is meant for a linestanza that has an inciting encouraging effect if the linestanza conveys a feeling of movement energy and vitality which animates to action Similar terms are Activation and Stimulation Table TABREF20 shows the Cohens kappa agreement scores among our two expert annotators for each emotion category e as follows We assign each instance a line in a poem a binary label indicating whether or not the annotator 
has annotated the emotion category e in question From this we obtain vectors vie for annotators i01 where each entry of vie holds the binary value for the corresponding line We then apply the kappa statistics to the two binary vectors vie Additionally to averaged kappa we report microF1 values in Table TABREF21 between the multilabel annotations of both expert annotators as well as the microF1 score of a random baseline as well as of the majority emotion baseline which labels each line as BeautyJoy We find that Cohen kappa agreement ranges from 84 for Uneasiness in the English data 81 for Humor and Nostalgia down to German Suspense 65 AweSublime 61 and Vitality for both languages 50 English 63 German Both annotators have a similar emotion frequency profile where the ranking is almost identical especially for German However for English Annotator 2 annotates more Vitality than Uneasiness Figure FIGREF18 shows the confusion matrices of labels between annotators as heatmaps Notably BeautyJoy and Sadness are confused across annotators more often than other labels This is topical for poetry and therefore not surprising One might argue that the beauty of beings and situations is only beautiful because it is not enduring and therefore not to divorce from the sadness of the vanishing of beauty BIBREF48 We also find considerable confusion of Sadness with AweSublime and Vitality while the latter is also regularly confused with BeautyJoy Furthermore as shown in Figure FIGREF23 we find that no single poem aggregates to more than six emotion labels while no stanza aggregates to more than four emotion labels However most lines and stanzas prefer one or two labels German poems seem more emotionally diverse where more poems have three labels than two labels while the majority of English poems have only two labels This is however attributable to the generally shorter English texts After concluding the expert annotation we performed a focused crowdsourcing experiment based on the final label set and items as they are listed in Table TABREF27 and Section SECREF19 With this experiment we aim to understand whether it is possible to collect reliable judgements for aesthetic perception of poetry from a crowdsourcing platform A second goal is to see whether we can replicate the expensive expert annotations with less costly crowd annotations We opted for a maximally simple annotation environment where we asked participants to annotate English 4line stanzas with selfperceived reader emotions We choose English due to the higher availability of English language annotators on crowdsourcing platforms Each annotator rates each stanza independently of surrounding context For consistency and to simplify the task for the annotators we opt for a tradeoff between completeness and granularity of the annotation Specifically we subselect stanzas composed of four verses from the corpus of 64 hand selected English poems The resulting selection of 59 stanzas is uploaded to Figure Eight for annotation The annotators are asked to answer the following questions for each instance Question 1 singlechoice Read the following stanza and decide for yourself which emotions it evokes Question 2 multiplechoice Which additional emotions does the stanza evoke The answers to both questions correspond to the emotion labels we defined to use in our annotation as described in Section SECREF19 We add an additional answer choice None to Question 2 to allow annotators to say that a stanza does not evoke any additional emotions Each instance is 
annotated by ten people We restrict the task geographically to the United Kingdom and Ireland and set the internal parameters on Figure Eight to only include the highest quality annotators to join the task We pay 009 per instance The final cost of the crowdsourcing experiment is 74 In the following we determine the best aggregation strategy regarding the 10 annotators with bootstrap resampling For instance one could assign the label of a specific emotion to an instance if just one annotators picks it or one could assign the label only if all annotators agree on this emotion To evaluate this we repeatedly pick two sets of 5 annotators each out of the 10 annotators for each of the 59 stanzas 1000 times overall ie 1000times 59 times bootstrap resampling For each of these repetitions we compare the agreement of these two groups of 5 annotators Each group gets assigned with an adjudicated emotion which is accepted if at least one annotator picks it at least two annotators pick it etc up to all five pick it We show the results in Table TABREF27 The kappa scores show the average agreement between the two groups of five annotators when the adjudicated class is picked based on the particular threshold of annotators with the same label choice We see that some emotions tend to have higher agreement scores than others namely Annoyance 66 Sadness up to 52 and AweSublime BeautyJoy Humor all 46 The maximum agreement is reached mostly with a threshold of 2 4 times or 3 3 times We further show in the same table the average numbers of labels from each strategy Obviously a lower threshold leads to higher numbers corresponding to a disjunction of annotations for each emotion The drop in label counts is comparably drastic with on average 18 labels per class Overall the best average kappa agreement 32 is less than half of what we saw for the expert annotators roughly 70 Crowds especially disagree on many more intricate emotion labels Uneasiness Vitality Nostalgia Suspense We visualize how often two emotions are used to label an instance in a confusion table in Figure FIGREF18 Sadness is used most often to annotate a stanza and it is often confused with Suspense Uneasiness and Nostalgia Further BeautyJoy partially overlaps with AweSublime Nostalgia and Sadness On average each crowd annotator uses two emotion labels per stanza 56 of cases only in 36 of the cases the annotators use one label and in 6 and 1 of the cases three and four labels respectively This contrasts with the expert annotators who use one label in about 70 of the cases and two labels in 30 of the cases for the same 59 fourliners Concerning frequency distribution for emotion labels both experts and crowds name Sadness and BeautyJoy as the most frequent emotions for the best threshold of 3 and Nostalgia as one of the least frequent emotions The Spearman rank correlation between experts and crowds is about 055 with respect to the label frequency distribution indicating that crowds could replace experts to a moderate degree when it comes to extracting eg emotion distributions for an author or time period Now we further compare crowds and experts in terms of whether crowds could replicate expert annotations also on a finer stanza level rather than only on a distributional level To gauge the quality of the crowd annotations in comparison with our experts we calculate agreement on the emotions between experts and an increasing group size from the crowd For each stanza instance s we pick N crowd workers where Nin lbrace 46810rbrace then pick their 
majority emotion for $s$, and additionally pick their second-ranked majority emotion if at least $\frac{N}{2}-1$ workers have chosen it. For the experts, we aggregate their emotion labels on stanza level and then perform the same strategy for the selection of emotion labels. Thus, for $s$, both crowds and experts have 1 or 2 emotions. For each emotion, we then compute Cohen's $\kappa$ as before. Note that, compared to our previous experiments in Section SECREF26 (with a threshold), each stanza now receives an emotion annotation (exactly one or two emotion labels), both by the experts and the crowdworkers. In Figure FIGREF30, we plot agreement between experts and crowds on stanza level as we vary the number $N$ of crowd workers involved. On average, there is roughly a steady linear increase in agreement as $N$ grows, which may indicate that $N=20$ or $N=30$ would still lead to better agreement. Concerning individual emotions, Nostalgia is the emotion with the least agreement, as opposed to Sadness: in our sample of 59 four-liners, the agreement for this emotion grows from .47 $\kappa$ with $N=4$ to .65 $\kappa$ with $N=10$. Sadness is also the most frequent emotion, both according to experts and crowds. Other emotions for which a reasonable agreement is achieved are Annoyance, AweSublime, BeautyJoy, and Humor ($\kappa > .2$). Emotions with little agreement are Vitality, Uneasiness, Suspense, and Nostalgia ($\kappa < .2$). By and large, we note from Figure FIGREF18 that expert annotation is more restrictive, with experts agreeing more often on particular emotion labels (seen in the darker diagonal). The results of the crowdsourcing experiment, on the other hand, are a mixed bag, as evidenced by a much sparser distribution of emotion labels. However, we note that these differences can be caused by (1) the disparate training procedure for the experts and crowds, and (2) the lack of opportunities for close supervision and ongoing training of the crowds, as opposed to the in-house expert annotators. In general, however, we find that substituting experts with crowds is possible to a certain degree. Even though the crowds' labels look inconsistent at first view, there appears to be a good signal in their aggregated annotations, helping to approximate expert annotations to a certain degree. The average $\kappa$ agreement with the experts we get from $N=10$ crowd workers (.24) is still considerably below the agreement among the experts (.70). To estimate the difficulty of automatic classification of our data set, we perform multi-label document classification (of stanzas) with BERT BIBREF41. For this experiment, we aggregate all labels for a stanza and sort them by frequency, both for the gold standard and the raw expert annotations. As can be seen in Figure FIGREF23, a stanza bears a minimum of one and a maximum of four emotions. Unfortunately, the label Nostalgia is only available 16 times in the German data (the gold standard), as a second label, as discussed in Section SECREF19. None of our models was able to learn this label for German. Therefore, we omit it, leaving us with eight proper labels. We use the code and the pre-trained BERT models of FARM, provided by deepset.ai. We test the multilingual-uncased model (Multiling), the german-base-cased model (Base), the german-dbmdz-uncased model (Dbmdz), and we tune the Base model on 80k stanzas of the German Poetry Corpus DLK BIBREF30 for 2 epochs, both on token (masked words) and sequence (next line) prediction (Base-Tuned). We split the randomized German dataset so that each label is at least 10 times in the validation set (63 instances, 113 labels) and at least 10 times in the test set (56 instances, 108 labels), and leave the rest for training (617 instances, 946 labels).
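The multi-label setup described here can be reproduced with any BERT fine-tuning library; the authors used FARM, but as a hedged illustration the sketch below uses the Hugging Face transformers API instead. The model name, learning rate, and label order are assumptions and are not taken from the text above.

```python
# Minimal sketch of multi-label stanza classification with BERT (sigmoid + BCE),
# shown with Hugging Face `transformers` rather than FARM, which the authors used.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["BeautyJoy", "Sadness", "Uneasiness", "Vitality",
          "Suspense", "AweSublime", "Humor", "Annoyance"]  # Nostalgia omitted

tokenizer = AutoTokenizer.from_pretrained("bert-base-german-dbmdz-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-german-dbmdz-uncased",
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # BCEWithLogits loss internally
)

def encode(stanzas, label_sets):
    """Turn stanzas and their (multi-)label sets into model inputs."""
    enc = tokenizer(stanzas, truncation=True, padding=True, return_tensors="pt")
    targets = torch.zeros(len(stanzas), len(LABELS))
    for i, labels in enumerate(label_sets):
        for lab in labels:
            targets[i, LABELS.index(lab)] = 1.0
    enc["labels"] = targets
    return enc

# Toy usage: one training step on two stanzas with aggregated expert labels.
batch = encode(["Stanza one ...", "Stanza two ..."],
               [{"BeautyJoy"}, {"Sadness", "Uneasiness"}])
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**batch).loss
loss.backward()
optimizer.step()
```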
We train BERT for 10 epochs with a batch size of 8, optimize with entropy loss, and report F1-micro on the test set. See Table TABREF36 for the results. We find that the multilingual model cannot handle infrequent categories, i.e., AweSublime, Suspense, and Humor. However, increasing the dataset with English data improves the results, suggesting that the classification would largely benefit from more annotated data. The best model overall is DBMDZ (.520), showing a balanced response on both validation and test set. See Table TABREF37 for a breakdown of all emotions as predicted by this model. Precision is mostly higher than recall. The labels AweSublime, Suspense, and Humor are harder to predict than the other labels. The BASE and BASE-TUNED models perform slightly worse than DBMDZ. The effect of tuning the BASE model is questionable, probably because of the restricted vocabulary (30k). We found that tuning on poetry does not show obvious improvements. Lastly, we find that models that were trained on lines instead of stanzas do not achieve the same F1 (.42 for the German models). In this paper, we presented a dataset of German and English poetry annotated with reader response to reading poetry. We argued that basic emotions, as proposed by psychologists such as Ekman and Plutchik, that are often used in emotion analysis from text are of little use for the annotation of poetry reception. We instead conceptualized aesthetic emotion labels and showed that a closely supervised annotation task results in substantial agreement (in terms of kappa score) on the final dataset. The task of collecting reader-perceived emotion response to poetry in a crowdsourcing setting is not straightforward. In contrast to expert annotators, who were closely supervised and reflected upon the task, the annotators on crowdsourcing platforms are difficult to control and may lack the necessary background knowledge to perform the task at hand. However, using a larger number of crowd annotators may lead to finding an aggregation strategy with a better trade-off between quality and quantity of adjudicated labels. For future work, we thus propose to repeat the experiment with a larger number of crowdworkers and to develop an improved training strategy that would suit the crowdsourcing environment. The dataset presented in this paper can be of use for different application scenarios, including multi-label emotion classification, style-conditioned poetry generation, investigating the influence of rhythm/prosodic features on emotion, or analysis of authors, genres, and diachronic variation (e.g., how emotions are represented differently in certain periods). Further, though our modeling experiments are still rudimentary, we propose that this data set can be used to investigate intra-poem relations, either through multi-task learning BIBREF49 and/or with the help of hierarchical sequence classification approaches. A special thanks goes to Gesine Fuhrmann, who created the guidelines and tirelessly documented the annotation progress. Also thanks to Annika Palm and Debby Trzeciak, who annotated and gave lively feedback. For help with the conceptualization of labels, we thank Ines Schindler. This research has been partially conducted within the CRETA center (http://www.creta.uni-stuttgart.de), which is funded by the German Ministry for Education and Research (BMBF), and partially funded by the German Research Council (DFG), project SEAT (Structured Multi-Domain Emotion Analysis from Text, KL 2869/1-1). This work has also been supported by the German Research Foundation as part of the
Research Training Group Adaptive Preparation of Information from Heterogeneous Sources AIPHES at the Technische Universitt Darmstadt under grant No GRK 19941 We illustrate two examples of our German gold standard annotation a poem each by Friedrich Hlderlin and Georg Trakl and an English poem by Walt Whitman Hlderlins text stands out because the mood changes starkly from the first stanza to the second from BeautyJoy to Sadness Trakls text is a bit more complex with bits of Nostalgia and most importantly a mixture of Uneasiness with AweSublime Whitmans poem is an example of Vitality and its mixing with Sadness The English annotation was unified by us for space constraints For the full annotation please see httpsgithubcomtnhaiderpoetryemotion </s>
<s>[INST] Community Identity and User Engagement in a Multi-Community Landscape [/INST] If each city is like a game of chess the day when I have learned the rules I shall finally possess my empire even if I shall never succeed in knowing all the cities it contains Italo Calvino Invisible Cities A communitys identitydefined through the common interests and shared experiences of its usersshapes various facets of the social dynamics within it BIBREF0 BIBREF1 BIBREF2 Numerous instances of this interplay between a communitys identity and social dynamics have been extensively studied in the context of individual online communities BIBREF3 BIBREF4 BIBREF5 However the sheer variety of online platforms complicates the task of generalizing insights beyond these isolated singlecommunity glimpses A new way to reason about the variation across multiple communities is needed in order to systematically characterize the relationship between properties of a community and the dynamics taking place within One especially important component of community dynamics is user engagement We can aim to understand why users join certain communities BIBREF6 what factors influence user retention BIBREF7 and how users react to innovation BIBREF5 While striking patterns of user engagement have been uncovered in prior case studies of individual communities BIBREF8 BIBREF9 BIBREF10 BIBREF11 BIBREF12 we do not know whether these observations hold beyond these cases or when we can draw analogies between different communities Are there certain types of communities where we can expect similar or contrasting engagement patterns To address such questions quantitatively we need to provide structure to the diverse and complex space of online communities Organizing the multicommunity landscape would allow us to both characterize individual points within this space and reason about systematic variations in patterns of user engagement across the space Present work Structuring the multicommunity space In order to systematically understand the relationship between community identityand user engagement we introduce a quantitative typology of online communities Our typology is based on two key aspects of community identity how distinctiveor nichea communitys interests are relative to other communities and how dynamicor volatilethese interests are over time These axes aim to capture the salience of a communitys identity and dynamics of its temporal evolution Our main insight in implementing this typology automatically and at scale is that the language used within a community can simultaneously capture how distinctive and dynamic its interests are This languagebased approach draws on a wealth of literature characterizing linguistic variation in online communities and its relationship to community and user identity BIBREF16 BIBREF5 BIBREF17 BIBREF18 BIBREF19 Basing our typology on language is also convenient since it renders our framework immediately applicable to a wide variety of online communities where communication is primarily recorded in a textual format Using our framework we map almost 300 Reddit communities onto the landscape defined by the two axes of our typology Section SECREF2 We find that this mapping induces conceptually sound categorizations that effectively capture key aspects of communitylevel social dynamics In particular we quantitatively validate the effectiveness of our mapping by showing that our twodimensional typology encodes signals that are predictive of communitylevel rates of user retention complementing strong 
activitybased features Engagement and community identity We apply our framework to understand how two important aspects of user engagement in a communitythe communitys propensity to retain its users Section SECREF3 and its permeability to new members Section SECREF4 vary according to the type of collective identity it fosters We find that communities that are characterized by specialized constantlyupdating content have higher user retention rates but also exhibit larger linguistic gaps that separate newcomers from established members More closely examining factors that could contribute to this linguistic gap we find that especially within distinctive communities established users have an increased propensity to engage with the communitys specialized content compared to newcomers Section SECREF5 Interestingly while established members of distinctive communities more avidly respond to temporal updates than newcomers in more generic communities it is the outsiders who engage more with volatile content perhaps suggesting that such content may serve as an entrypoint to the community but not necessarily a reason to stay Such insights into the relation between collective identity and user engagement can be informative to community maintainers seeking to better understand growth patterns within their online communities More generally our methodology stands as an example of how sociological questions can be addressed in a multicommunity setting In performing our analyses across a rich variety of communities we reveal both the diversity of phenomena that can occur as well as the systematic nature of this diversity A communitys identity derives from its members common interests and shared experiences BIBREF15 BIBREF20 In this work we structure the multicommunity landscape along these two key dimensions of community identity how distinctive a communitys interests are and how dynamic the community is over time We now proceed to outline our quantitative typology which maps communities along these two dimensions We start by providing an intuition through inspecting a few example communities We then introduce a generalizable languagebased methodology and use it to map a large set of Reddit communities onto the landscape defined by our typology of community identity In order to illustrate the diversity within the multicommunity space and to provide an intuition for the underlying structure captured by the proposed typology we first examine a few example communities and draw attention to some key social dynamics that occur within them We consider four communities from Reddit in Seahawks fans of the Seahawks football team gather to discuss games and players in BabyBumps expecting mothers trade advice and updates on their pregnancy Cooking consists of recipe ideas and general discussion about cooking while in pics users share various images of random things like eels and hornets We note that these communities are topically contrasting and foster fairly disjoint user bases Additionally these communities exhibit varied patterns of user engagement While Seahawks maintains a devoted set of users from month to month pics is dominated by transient users who post a few times and then depart Discussions within these communities also span varied sets of interests Some of these interests are more specific to the community than others risotto for example is seldom a discussion point beyond Cooking Additionally some interests consistently recur while others are specific to a particular time kitchens are a consistent 
focus point for cooking but mint is only in season during spring Coupling specificity and consistency we find interests such as easter which isnt particularly specific to BabyBumps but gains prominence in that community around Easter see Figure FIGREF3 A for further examples These specific interests provide a window into the nature of the communities interests as a whole and by extension their community identities Overall discussions in Cooking focus on topics which are highly distinctive and consistently recur like risotto In contrast discussions in Seahawks are highly dynamic rapidly shifting over time as new games occur and players are traded in and out In the remainder of this section we formally introduce a methodology for mapping communities in this space defined by their distinctiveness and dynamicity examples in Figure FIGREF3 B Our approach follows the intuition that a distinctive community will use language that is particularly specific or unique to that community Similarly a dynamic community will use volatile language that rapidly changes across successive windows of time To capture this intuition automatically we start by defining wordlevel measures of specificity and volatility We then extend these wordlevel primitives to characterize entire comments and the community itself Our characterizations of words in a community are motivated by methodology from prior literature that compares the frequency of a word in a particular setting to its frequency in some background distribution in order to identify instances of linguistic variation BIBREF21 BIBREF19 Our particular framework makes this comparison by way of pointwise mutual information PMI In the following we use INLINEFORM0 to denote one community within a set INLINEFORM1 of communities and INLINEFORM2 to denote one time period within the entire history INLINEFORM3 of INLINEFORM4 We account for temporal as well as intercommunity variation by computing wordlevel measures for each time period of each communitys history INLINEFORM5 Given a word INLINEFORM6 used within a particular community INLINEFORM7 at time INLINEFORM8 we define two wordlevel measures Specificity We quantify the specificity INLINEFORM0 of INLINEFORM1 to INLINEFORM2 by calculating the PMI of INLINEFORM3 and INLINEFORM4 relative to INLINEFORM5 INLINEFORM6 where INLINEFORM0 is INLINEFORM1 s frequency in INLINEFORM2 INLINEFORM3 is specific to INLINEFORM4 if it occurs more frequently in INLINEFORM5 than in the entire set INLINEFORM6 hence distinguishing this community from the rest A word INLINEFORM7 whose occurrence is decoupled from INLINEFORM8 and thus has INLINEFORM9 close to 0 is said to be generic We compute values of INLINEFORM0 for each time period INLINEFORM1 in INLINEFORM2 in the above description we drop the timebased subscripts for clarity Volatility We quantify the volatility INLINEFORM0 of INLINEFORM1 to INLINEFORM2 as the PMI of INLINEFORM3 and INLINEFORM4 relative to INLINEFORM5 the entire history of INLINEFORM6 INLINEFORM7 A word INLINEFORM0 is volatile at time INLINEFORM1 in INLINEFORM2 if it occurs more frequently at INLINEFORM3 than in the entire history INLINEFORM4 behaving as a fad within a small window of time A word that occurs with similar frequency across time and hence has INLINEFORM5 close to 0 is said to be stable Extending to utterances Using our wordlevel primitives we define the specificity of an utterance INLINEFORM0 in INLINEFORM1 INLINEFORM2 as the average specificity of each word in the utterance The volatility of utterances is 
defined analogously Having described these wordlevel measures we now proceed to establish the primary axes of our typology Distinctiveness A community with a very distinctive identity will tend to have distinctive interests expressed through specialized language Formally we define the distinctiveness of a community INLINEFORM0 as the average specificity of all utterances in INLINEFORM1 We refer to a community with a less distinctive identity as being generic Dynamicity A highly dynamic community constantly shifts interests from one time window to another and these temporal variations are reflected in its use of volatile language Formally we define the dynamicity of a community INLINEFORM0 as the average volatility of all utterances in INLINEFORM1 We refer to a community whose language is relatively consistent throughout time as being stable In our subsequent analyses we focus mostly on examing the average distinctiveness and dynamicity of a community over time denoted INLINEFORM0 and INLINEFORM1 We now explain how our typology can be applied to the particular setting of Reddit and describe the overall behaviour of our linguistic axes in this context Dataset description Reddit is a popular website where users form and participate in discussionbased communities called subreddits Within these communities users post contentsuch as images URLs or questionswhich often spark vibrant lengthy discussions in threadbased comment sections The website contains many highly active subreddits with thousands of active subscribers These communities span an extremely rich variety of topical interests as represented by the examples described earlier They also vary along a rich multitude of structural dimensions such as the number of users the amount of conversation and social interaction and the social norms determining which types of content become popular The diversity and scope of Reddits multicommunity ecosystem make it an ideal landscape in which to closely examine the relation between varying community identities and social dynamics Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014 for which there are at least 500 words in the vocabulary used to estimate our measures in at least 4 months of the subreddits history We compute our measures over the comments written by users in a community in time windows of months for each sufficiently active month and manually remove communities where the bulk of the contributions are in a foreign language This results in 283 communities INLINEFORM0 for a total of 4872 communitymonths INLINEFORM1 Estimating linguistic measures We estimate word frequencies INLINEFORM0 and by extension each downstream measure in a carefully controlled manner in order to ensure we capture robust and meaningful linguistic behaviour First we only consider toplevel comments which are initial responses to a post as the content of lowerlevel responses might reflect conventions of dialogue more than a communitys highlevel interests Next in order to prevent a few highly active users from dominating our frequency estimates we count each unique word once per user ignoring successive uses of the same word by the same user This ensures that our wordlevel characterizations are not skewed by a small subset of highly active contributors In our subsequent analyses we will only look at these measures computed over the nouns used in comments In principle our framework can be applied to any choice of vocabulary However in the case of Reddit using nouns provides a 
convenient degree of interpretability We can easily understand the implication of a community preferentially mentioning a noun such as gamer or feminist but interpreting the overuse of verbs or function words such as take or of is less straightforward Additionally in focusing on nouns we adopt the view emphasized in modern third wave accounts of sociolinguistic variation that stylistic variation is inseparable from topical content BIBREF23 In the case of online communities the choice of what people choose to talk about serves as a primary signal of social identity That said a typology based on more purely stylistic differences is an interesting avenue for future work Accounting for rare words One complication when using measures such as PMI which are based off of ratios of frequencies is that estimates for very infrequent words could be overemphasized BIBREF24 Words that only appear a few times in a community tend to score at the extreme ends of our measures eg as highly specific or highly generic obfuscating the impact of more frequent words in the community To address this issue we discard the long tail of infrequent words in our analyses using only the top 5th percentile of words by frequency within each INLINEFORM0 to score comments and communities Typology output on Reddit The distribution of INLINEFORM0 and INLINEFORM1 across Reddit communities is shown in Figure FIGREF3 B along with examples of communities at the extremes of our typology We find that interpretable groupings of communities emerge at various points within our axes For instance highly distinctive and dynamic communities tend to focus on rapidlyupdating interests like sports teams and games while generic and consistent communities tend to be large linksharing hubs where users generally post content with no clear dominating themes More examples of communities at the extremes of our typology are shown in Table TABREF9 We note that these groupings capture abstract properties of a communitys content that go beyond its topic For instance our typology relates topically contrasting communities such as yugioh which is about a popular trading card game and Seahawks through the shared trait that their content is particularly distinctive Additionally the axes can clarify differences between topically similar communities while startrek and thewalkingdead both focus on TV shows startrek is less dynamic than the median community while thewalkingdead is among the most dynamic communities as the show was still airing during the years considered We have seen that our typology produces qualitatively satisfying groupings of communities according to the nature of their collective identity This section shows that there is an informative and highly predictive relationship between a communitys position in this typology and its user engagement patterns We find that communities with distinctive and dynamic identities have higher rates of user engagement and further show that a communitys position in our identitybased landscape holds important predictive information that is complementary to a strong activity baseline In particular user retention is one of the most crucial aspects of engagement and is critical to community maintenance BIBREF2 We quantify how successful communities are at retaining users in terms of both short and longterm commitment Our results indicate that rates of user retention vary drastically yet systematically according to how distinctive and dynamic a community is Figure FIGREF3 We find a strong explanatory relationship 
between the temporal consistency of a communitys identity and rates of user engagement dynamic communities that continually update and renew their discussion content tend to have far higher rates of user engagement The relationship between distinctiveness and engagement is less universal but still highly informative niche communities tend to engender strong focused interest from users at one particular point in time though this does not necessarily translate into longterm retention We find that dynamic communities such as Seahawks or starcraft have substantially higher rates of monthly user retention than more stable communities Spearmans INLINEFORM0 070 INLINEFORM1 0001 computed with community points averaged over months Figure FIGREF11 A left Similarly more distinctive communities like Cooking and Naruto exhibit moderately higher monthly retention rates than more generic communities Spearmans INLINEFORM2 033 INLINEFORM3 0001 Figure FIGREF11 A right Monthly retention is formally defined as the proportion of users who contribute in month INLINEFORM0 and then return to contribute again in month INLINEFORM1 Each monthly datapoint is treated as unique and the trends in Figure FIGREF11 show 95 bootstrapped confidence intervals clusterresampled at the level of subreddit BIBREF25 to account for differences in the number of months each subreddit contributes to the data Importantly we find that in the task of predicting communitylevel user retention our identitybased typology holds additional predictive value on top of strong baseline features based on communitysize contributing users and activity levels mean contributions per user which are commonly used for churn prediction BIBREF7 We compared outofsample predictive performance via leaveonecommunityout cross validation using random forest regressors with ensembles of size 100 and otherwise default hyperparameters BIBREF26 A model predicting average monthly retention based on a communitys average distinctiveness and dynamicity achieves an average mean squared error INLINEFORM0 of INLINEFORM1 and INLINEFORM2 while an analogous model predicting based on a communitys size and average activity level both logtransformed achieves INLINEFORM4 and INLINEFORM5 The difference between the two models is not statistically significant INLINEFORM6 Wilcoxon signedrank test However combining features from both models results in a large and statistically significant improvement over each independent model INLINEFORM7 INLINEFORM8 INLINEFORM9 Bonferronicorrected pairwise Wilcoxon tests These results indicate that our typology can explain variance in communitylevel retention rates and provides information beyond what is present in standard activitybased features As with monthly retention we find a strong positive relationship between a communitys dynamicity and the average number of months that a user will stay in that community Spearmans INLINEFORM0 041 INLINEFORM1 0001 computed over all community points Figure FIGREF11 B left This verifies that the shortterm trend observed for monthly retention translates into longerterm engagement and suggests that longterm user retention might be strongly driven by the extent to which a community continually provides novel content Interestingly there is no significant relationship between distinctiveness and longterm engagement Spearmans INLINEFORM2 003 INLINEFORM3 077 Figure FIGREF11 B right Thus while highly distinctive communities like RandomActsOfMakeup may generate focused commitment from users over a short period of time 
such communities are unlikely to retain longterm users unless they also have sufficiently dynamic content To measure user tenures we focused on one slice of data May 2013 and measured how many months a user spends in each community on averagethe average number of months between a users first and last comment in each community We have activity data up until May 2015 so the maximum tenure is 24 months in this setup which is exceptionally long relative to the average community member throughout our entire data less than INLINEFORM0 of users have tenures of more than 24 months in any community The previous section shows that there is a strong connection between the nature of a communitys identity and its basic user engagement patterns In this section we probe the relationship between a communitys identity and how permeable or accessible it is to outsiders We measure this phenomenon using what we call the acculturation gap which compares the extent to which engaged vs nonengaged users employ communityspecific language While previous work has found this gap to be large and predictive of future user engagement in two beerreview communities BIBREF5 we find that the size of the acculturation gap depends strongly on the nature of a communitys identity with the gap being most pronounced in stable highly distinctive communities Figure FIGREF13 This finding has important implications for our understanding of online communities Though many works have analyzed the dynamics of linguistic belonging in online communities BIBREF16 BIBREF28 BIBREF5 BIBREF17 our results suggest that the process of linguistically fitting in is highly contingent on the nature of a communitys identity At one extreme in generic communities like pics or worldnews there is no distinctive linguistic identity for users to adopt To measure the acculturation gap for a community we follow DanescuNiculescuMizil et al danescuniculescumizilno2013 and build snapshot language models SLMs for each community which capture the linguistic state of a community at one point of time Using these language models we can capture how linguistically close a particular utterance is to the community by measuring the crossentropy of this utterance relative to the SLM DISPLAYFORM0 where INLINEFORM0 is the probability assigned to bigram INLINEFORM1 from comment INLINEFORM2 in communitymonth INLINEFORM3 We build the SLMs by randomly sampling 200 active usersdefined as users with at least 5 comments in the respective community and month For each of these 200 active users we select 5 random 10word spans from 5 unique comments To ensure robustness and maximize data efficiency we construct 100 SLMs for each communitymonth pair that has enough data bootstrapresampling from the set of active users We compute a basic measure of the acculturation gap for a communitymonth INLINEFORM0 as the relative difference of the crossentropy of comments by users active in INLINEFORM1 with that of singleton comments by outsidersie users who only ever commented once in INLINEFORM2 but who are still active in Reddit in general DISPLAYFORM0 INLINEFORM0 denotes the distribution over singleton comments INLINEFORM1 denotes the distribution over comments from users active in INLINEFORM2 and INLINEFORM3 the expected values of the crossentropy over these respective distributions For each bootstrapsampled SLM we compute the crossentropy of 50 comments by active users 10 comments from 5 randomly sampled active users who were not used to construct the SLM and 50 comments from randomlysampled 
outsiders Figure FIGREF13 A shows that the acculturation gap varies substantially with how distinctive and dynamic a community is Highly distinctive communities have far higher acculturation gaps while dynamicity exhibits a nonlinear relationship relatively stable communities have a higher linguistic entry barrier as do very dynamic ones Thus in communities like IAmA a general QA forum that are very generic with content that is highly but not extremely dynamic outsiders are at no disadvantage in matching the communitys language In contrast the acculturation gap is large in stable distinctive communities like Cooking that have consistent communityspecific language The gap is also large in extremely dynamic communities like Seahawks which perhaps require more attention or interest on the part of active users to keep uptodate with recent trends in content These results show that phenomena like the acculturation gap which were previously observed in individual communities BIBREF28 BIBREF5 cannot be easily generalized to a larger heterogeneous set of communities At the same time we see that structuring the space of possible communities enables us to observe systematic patterns in how such phenomena vary Through the acculturation gap we have shown that communities exhibit large yet systematic variations in their permeability to outsiders We now turn to understanding the divide in commenting behaviour between outsiders and active community members at a finer granularity by focusing on two particular ways in which such gaps might manifest among users through different levels of engagement with specific content and with temporally volatile content Echoing previous results we find that community type mediates the extent and nature of the divide in content affinity While in distinctive communities active members have a higher affinity for both communityspecific content and for highly volatile content the opposite is true for generic communities where it is the outsiders who engage more with volatile content We quantify these divides in content affinity by measuring differences in the language of the comments written by active users and outsiders Concretely for each community INLINEFORM0 we define the specificity gap INLINEFORM1 as the relative difference between the average specificity of comments authored by active members and by outsiders where these measures are macroaveraged over users Large positive INLINEFORM2 then occur in communities where active users tend to engage with substantially more communityspecific content than outsiders We analogously define the volatility gap INLINEFORM0 as the relative difference in volatilities of active member and outsider comments Large positive values of INLINEFORM1 characterize communities where active users tend to have more volatile interests than outsiders while negative values indicate communities where active users tend to have more stable interests We find that in 94 of communities INLINEFORM0 indicating somewhat unsurprisingly that in almost all communities active users tend to engage with more communityspecific content than outsiders However the magnitude of this divide can vary greatly for instance in Homebrewing which is dedicated to brewing beer the divide is very pronounced INLINEFORM1 033 compared to funny a large hub where users share humorous content INLINEFORM2 0011 The nature of the volatility gap is comparatively more varied In Homebrewing INLINEFORM0 016 as in 68 of communities active users tend to write more volatile comments than 
outsiders INLINEFORM1 0 However communities like funny INLINEFORM2 016 where active users contribute relatively stable comments compared to outsiders INLINEFORM3 0 are also wellrepresented on Reddit To understand whether these variations manifest systematically across communities we examine the relationship between divides in content affinity and community type In particular following the intuition that active users have a relatively high affinity for a communitys niche we expect that the distinctiveness of a community will be a salient mediator of specificity and volatility gaps Indeed we find a strong correlation between a communitys distinctiveness and its specificity gap Spearmans INLINEFORM0 034 INLINEFORM1 0001 We also find a strong correlation between distinctiveness and community volatility gaps Spearmans INLINEFORM0 053 INLINEFORM1 0001 In particular we see that among the most distinctive communities ie the top third of communities by distinctiveness active users tend to write more volatile comments than outsiders mean INLINEFORM2 0098 while across the most generic communities ie the bottom third active users tend to write more stable comments mean INLINEFORM3 0047 MannWhitney U test INLINEFORM4 0001 The relative affinity of outsiders for volatile content in these communities indicates that temporally ephemeral content might serve as an entry point into such a community without necessarily engaging users in the long term Our languagebased typology and analysis of user engagement draws on and contributes to several distinct research threads in addition to the many foundational studies cited in the previous sections Multicommunity studies Our investigation of user engagement in multicommunity settings follows prior literature which has examined differences in user and community dynamics across various online groups such as email listservs Such studies have primarily related variations in user behaviour to structural features such as group size and volume of content BIBREF30 BIBREF31 BIBREF32 BIBREF33 In focusing on the linguistic content of communities we extend this research by providing a contentbased framework through which user engagement can be examined Reddit has been a particularly useful setting for studying multiple communities in prior work Such studies have mostly focused on characterizing how individual users engage across a multicommunity platform BIBREF34 BIBREF35 or on specific user engagement patterns such as loyalty to particular communities BIBREF22 We complement these studies by seeking to understand how features of communities can mediate a broad array of user engagement patterns within them Typologies of online communities Prior attempts to typologize online communities have primarily been qualitative and based on handdesigned categories making them difficult to apply at scale These typologies often hinge on having some welldefined function the community serves such as supporting a business or nonprofit cause BIBREF36 which can be difficult or impossible to identify in massive anonymous multicommunity settings Other typologies emphasize differences in communication platforms and other functional requirements BIBREF37 BIBREF38 which are important but preclude analyzing differences between communities within the same multicommunity platform Similarly previous computational methods of characterizing multiple communities have relied on the presence of markers such as affixes in community names BIBREF35 or platformspecific affordances such as evaluation mechanisms 
BIBREF39 Our typology is also distinguished from community detection techniques that rely on structural or functional categorizations BIBREF40 BIBREF41 While the focus of those studies is to identify and characterize subcommunities within a larger social network our typology provides a characterization of predefined communities based on the nature of their identity Broader work on collective identity Our focus on community identity dovetails with a long line of research on collective identity and user engagement in both online and offline communities BIBREF42 BIBREF1 BIBREF2 These studies focus on individuallevel psychological manifestations of collective or social identity and their relationship to user engagement BIBREF42 BIBREF43 BIBREF44 BIBREF0 In contrast we seek to characterize community identities at an aggregate level and in an interpretable manner with the goal of systematically organizing the diverse space of online communities Typologies of this kind are critical to these broader socialpsychological studies of collective identity they allow researchers to systematically analyze how the psychological manifestations and implications of collective identity vary across diverse sets of communities Our current understanding of engagement patterns in online communities is patched up from glimpses offered by several disparate studies focusing on a few individual communities This work calls into attention the need for a method to systematically reason about similarities and differences across communities By proposing a way to structure the multicommunity space we find not only that radically contrasting engagement patterns emerge in different parts of this space but also that this variation can be at least partly explained by the type of identity each community fosters Our choice in this work is to structure the multicommunity space according to a typology based on community identity as reflected in language use We show that this effectively explains crosscommunity variation of three different user engagement measuresretention acculturation and content affinityand complements measures based on activity and size with additional interpretable information For example we find that in niche communities established members are more likely to engage with volatile content than outsiders while the opposite is true in generic communities Such insights can be useful for community maintainers seeking to understand engagement patterns in their own communities One main area of future research is to examine the temporal dynamics in the multicommunity landscape By averaging our measures of distinctiveness and dynamicity across time our present study treated community identity as a static property However as communities experience internal changes and respond to external events we can expect the nature of their identity to shift as well For instance the relative consistency of harrypotter may be disrupted by the release of a new novel while Seahawks may foster different identities during and between football seasons Conversely a communitys type may also mediate the impact of new events Moving beyond a static view of community identity could enable us to better understand how temporal phenomena such as linguistic change manifest across different communities and also provide a more nuanced view of user engagementfor instance are communities more welcoming to newcomers at certain points in their lifecycle Another important avenue of future work is to explore other ways of mapping the landscape of online 
communities For example combining structural properties of communities BIBREF40 with topical information BIBREF35 and with our identitybased measures could further characterize and explain variations in user engagement patterns Furthermore extending the present analyses to even more diverse communities supported by different platforms eg GitHub StackExchange Wikipedia could enable the characterization of more complex behavioral patterns such as collaboration and altruism which become salient in different multicommunity landscapes The authors thank Liye Fu Jack Hessel David Jurgens and Lillian Lee for their helpful comments This research has been supported in part by a Discovery and Innovation Research Seed Award from the Office of the Vice Provost for Research at Cornell NSF CNS1010921 IIS1149837 IIS1514268 NIH BD2K ARO MURI DARPA XDATA DARPA SIMPLEX DARPA NGS2 Stanford Data Science Initiative SAP Stanford Graduate Fellowship NSERC PGSD Boeing Lightspeed and Volkswagen </s>
<s>[INST] Question Answering based Clinical Text Structuring Using Pre-trained Language Model [/INST] Clinical text structuring CTS is a critical task for fetching medical research data from electronic health records EHRs where structural patient medical data such as whether the patient has specific symptoms diseases or what the tumor size is how far from the tumor is cut at during the surgery or what the specific laboratory test result is are obtained It is important to extract structured data from clinical text because biomedical systems or biomedical researches greatly rely on structured data but they cannot obtain them directly In addition clinical text often contains abundant healthcare information CTS is able to provide largescale extracted structured data for enormous downstream clinical researches However endtoend CTS is a very challenging task Different CTS tasks often have nonuniform output formats such as specificclass classifications eg tumor stage strings in the original text eg result for a laboratory test and inferred values from part of the original text eg calculated tumor size Researchers have to construct different models for it which is already costly and hence it calls for a lot of labeled data for each model Moreover labeling necessary amount of data for training neural network requires expensive labor cost To handle it researchers turn to some rulebased structuring methods which often have lower labor cost Traditionally CTS tasks can be addressed by rule and dictionary based methods BIBREF0 BIBREF1 BIBREF2 taskspecific endtoend methods BIBREF3 BIBREF4 BIBREF5 BIBREF6 and pipeline methods BIBREF7 BIBREF8 BIBREF9 Rule and dictionary based methods suffer from costly humandesigned extraction rules while taskspecific endtoend methods have nonuniform output formats and require taskspecific training dataset Pipeline methods break down the entire process into several pieces which improves the performance and generality However when the pipeline depth grows error propagation will have a greater impact on the performance To reduce the pipeline depth and break the barrier of nonuniform output formats we present a question answering based clinical text structuring QACTS task see Fig FIGREF1 Unlike the traditional CTS task our QACTS task aims to discover the most related text from original paragraph text For some cases it is already the final answer in deed eg extracting substring While for other cases it needs several steps to obtain the final answer such as entity names conversion and negative words recognition Our presented QACTS task unifies the output format of the traditional CTS task and make the training data shareable thus enriching the training data The main contributions of this work can be summarized as follows We first present a question answering based clinical text structuring QACTS task which unifies different specific tasks and make dataset shareable We also propose an effective model to integrate clinical named entity information into pretrained language model Experimental results show that QACTS task leads to significant improvement due to shared dataset Our proposed model also achieves significantly better performance than the strong baseline methods In addition we also show that twostage training mechanism has a great improvement on QACTS task The rest of the paper is organized as follows We briefly review the related work on clinical text structuring in Section SECREF2 Then we present question answer based clinical text structuring task in Section SECREF3 In 
Section SECREF4 we present an effective model for this task Section SECREF5 is devoted to computational studies and several investigations on the key issues of our proposed model Finally conclusions are given in Section SECREF6 Clinical text structuring is a final problem which is highly related to practical applications Most of existing studies are casebycase Few of them are developed for the general purpose structuring task These studies can be roughly divided into three categories rule and dictionary based methods taskspecific endtoend methods and pipeline methods Rule and dictionary based methods BIBREF0 BIBREF1 BIBREF2 rely extremely on heuristics and handcrafted extraction rules which is more of an art than a science and incurring extensive trialanderror experiments Fukuda et al BIBREF0 identified protein names from biological papers by dictionaries and several features of protein names Wang et al BIBREF1 developed some linguistic rules ie normalisedexpanded term matching and substring term matching to map specific terminology to SNOMED CT Song et al BIBREF2 proposed a hybrid dictionarybased bioentity extraction technique and expands the bioentity dictionary by combining different data sources and improves the recall rate through the shortest path edit distance algorithm This kind of approach features its interpretability and easy modifiability However with the increase of the rule amount supplementing new rules to existing system will turn to be a rule disaster Taskspecific endtoend methods BIBREF3 BIBREF4 use large amount of data to automatically model the specific task Topaz et al BIBREF3 constructed an automated wound information identification model with five output Tan et al BIBREF4 identified patients undergoing radical cystectomy for bladder cancer Although they achieved good performance none of their models could be used to another task due to output format difference This makes building a new model for a new task a costly job Pipeline methods BIBREF7 BIBREF8 BIBREF9 break down the entire task into several basic natural language processing tasks Bill et al BIBREF7 focused on attributes extraction which mainly relied on dependency parsing and named entity recognition BIBREF10 BIBREF11 BIBREF12 Meanwhile Fonferko et al BIBREF9 used more components like noun phrase chunking BIBREF13 BIBREF14 BIBREF15 partofspeech tagging BIBREF16 BIBREF17 BIBREF18 sentence splitter named entity linking BIBREF19 BIBREF20 BIBREF21 relation extraction BIBREF22 BIBREF23 This kind of method focus on language itself so it can handle tasks more general However as the depth of pipeline grows it is obvious that error propagation will be more and more serious In contrary using less components to decrease the pipeline depth will lead to a poor performance So the upper limit of this method depends mainly on the worst component Recently some works focused on pretrained language representation models to capture language information from text and then utilizing the information to improve the performance of specific natural language processing tasks BIBREF24 BIBREF25 BIBREF26 BIBREF27 which makes language model a shared model to all natural language processing tasks Radford et al BIBREF24 proposed a framework for finetuning pretrained language model Peters et al BIBREF25 proposed ELMo which concatenates forward and backward language models in a shallow manner Devlin et al BIBREF26 used bidirectional Transformers to model deep interactions between the two directions Yang et al BIBREF27 replaced the fixed 
forward or backward factorization order with all possible permutations of the factorization order and avoided using the MASK tag which causes pretrainfinetune discrepancy that BERT is subject to The main motivation of introducing pretrained language model is to solve the shortage of labeled data and polysemy problem Although polysemy problem is not a common phenomenon in biomedical domain shortage of labeled data is always a nontrivial problem Lee et al BIBREF28 applied BERT on largescale biomedical unannotated data and achieved improvement on biomedical named entity recognition relation extraction and question answering Kim et al BIBREF29 adapted BioBERT into multitype named entity recognition and discovered new entities Both of them demonstrates the usefulness of introducing pretrained language model into biomedical domain Given a sequence of paragraph text Xx1 x2 xn clinical text structuring CTS can be regarded to extract or generate a keyvalue pair where key Q is typically a query term such as proximal resection margin and value V is a result of query term Q according to the paragraph text X Generally researchers solve CTS problem in two steps Firstly the answerrelated text is pick out And then several steps such as entity names conversion and negative words recognition are deployed to generate the final answer While final answer varies from task to task which truly causes nonuniform output formats finding the answerrelated text is a common action among all tasks Traditional methods regard both the steps as a whole In this paper we focus on finding the answerrelated substring Xs Xi Xi1 Xi2 Xj 1 i j n from paragraph text X For example given sentence UTF8gkai115cm170cm60cm80cm Distal gastrectomy specimen measuring 115cm in length along the lesser curvature 170cm in length along the greater curvature 60cm from the proximal resection margin and 80cm from the distal resection margin and query UTF8gkaiproximal resection margin the answer should be 60cm which is located in original text from index 32 to 37 With such definition it unifies the output format of CTS tasks and therefore make the training data shareable in order to reduce the training data quantity requirement Since BERT BIBREF26 has already demonstrated the usefulness of shared model we suppose extracting commonality of this problem and unifying the output format will make the model more powerful than dedicated model and meanwhile for a specific clinical task use the data for other tasks to supplement the training data In this section we present an effective model for the question answering based clinical text structuring QACTS As shown in Fig FIGREF8 paragraph text X is first passed to a clinical named entity recognition CNER model BIBREF12 to capture named entity information and obtain onehot CNER output tagging sequence for query text Inq and paragraph text Int with BIEOS Begin Inside End Outside Single tag scheme Inq and Int are then integrated together into In Meanwhile the paragraph text X and query text Q are organized and passed to contextualized representation model which is pretrained language model BERT BIBREF26 here to obtain the contextualized representation vector Vs of both text and query Afterwards Vs and In are integrated together and fed into a feed forward network to calculate the start and end index of answerrelated text Here we define this calculation problem as a classification for each word to be the start or end word For any clinical freetext paragraph X and query Q contextualized representation is to 
generate the encoded vector of both of them Here we use pretrained language model BERTbase BIBREF26 model to capture contextual information The text input is constructed as CLS Q SEP X SEP For Chinese sentence each word in this input will be mapped to a pretrained embedding ei To tell the model Q and X is two different sentence a sentence type input is generated which is a binary label sequence to denote what sentence each character in the input belongs to Positional encoding and mask matrix is also constructed automatically to bring in absolute position information and eliminate the impact of zero padding respectively Then a hidden vector Vs which contains both query and text information is generated through BERTbase model Since BERT is trained on general corpus its performance on biomedical domain can be improved by introducing biomedical domainspecific features In this paper we introduce clinical named entity information into the model The CNER task aims to identify and classify important clinical terms such as diseases symptoms treatments exams and body parts from Chinese EHRs It can be regarded as a sequence labeling task A CNER model typically outputs a sequence of tags Each character of the original sentence will be tagged a label following a tag scheme In this paper we recognize the entities by the model of our previous work BIBREF12 but trained on another corpus which has 44 entity types including operations numbers unit words examinations symptoms negative words etc An illustrative example of named entity information sequence is demonstrated in Table TABREF2 In Table TABREF2 UTF8gkai is tagged as an operation 115 is a number word and cm is an unit word The named entity tag sequence is organized in onehot type We denote the sequence for clinical sentence and query term as Int and Inq respectively There are two ways to integrate two named entity information vectors Int and Inq or hidden contextualized representation Vs and named entity information In where In Int Inq The first one is to concatenate them together because they have sequence output with a common dimension The second one is to transform them into a new hidden representation For the concatenation method the integrated representation is described as follows While for the transformation method we use multihead attention BIBREF30 to encode the two vectors It can be defined as follows where h is the number of heads and Wo is used to projects back the dimension of concatenated matrix Attention denotes the traditional attention and it can be defined as follows where dk is the length of hidden vector The final step is to use integrated representation Hi to predict the start and end index of answerrelated text Here we define this calculation problem as a classification for each word to be the start or end word We use a feed forward network FFN to compress and calculate the score of each word Hf which makes the dimension to leftlangle ls 2rightrangle where ls denotes the length of sequence Then we permute the two dimensions for softmax calculation The calculation process of loss function can be defined as followed where Os softmaxpermuteHf0 denotes the probability score of each word to be the start word and similarly Oe softmaxpermuteHf1 denotes the end ys and ye denotes the true answer of the output for start word and end word respectively Twostage training mechanism is previously applied on bilinear model in finegrained visual recognition BIBREF31 BIBREF32 BIBREF33 Two CNNs are deployed in the model One is trained at first for 
coarsegraind features while freezing the parameter of the other Then unfreeze the other one and train the entire model in a low learning rate for fetching finegrained features Inspired by this and due to the large amount of parameters in BERT model to speed up the training process we fine tune the BERT model with new prediction layer first to achieve a better contextualized representation performance Then we deploy the proposed model and load the fine tuned BERT weights attach named entity information layers and retrain the model In this section we devote to experimentally evaluating our proposed task and approach The best results in tables are in bold Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery Ruijin Hospital It contains 17833 sentences 826987 characters and 2714 questionanswer pairs All questionanswer pairs are annotated and reviewed by four clinicians with three types of questions namely tumor size proximal resection margin and distal resection margin These annotated instances have been partitioned into 1899 training instances 12412 sentences and 815 test instances 5421 sentences Each instance has one or several sentences Detailed statistics of different types of entities are listed in Table TABREF20 In the following experiments two widelyused performance measures ie EMscore BIBREF34 and macroaveraged F1score BIBREF35 are used to evaluate the methods The Exact Match EMscore metric measures the percentage of predictions that match any one of the ground truth answers exactly The F1score metric is a looser metric measures the average overlap between the prediction and ground truth answer To implement deep neural network models we utilize the Keras library BIBREF36 with TensorFlow BIBREF37 backend Each model is run on a single NVIDIA GeForce GTX 1080 Ti GPU The models are trained by Adam optimization algorithm BIBREF38 whose parameters are the same as the default settings except for learning rate set to 5times 105 Batch size is set to 3 or 4 due to the lack of graphical memory We select BERTbase as the pretrained language model in this paper Due to the high cost of pretraining BERT language model we directly adopt parameters pretrained by Google in Chinese general corpus The named entity recognition is applied on both pathology report texts and query texts Since BERT has already achieved the stateoftheart performance of questionanswering in this section we compare our proposed model with stateoftheart question answering models ie QANet BIBREF39 and BERTBase BIBREF26 As BERT has two versions BERTBase and BERTLarge due to the lack of computational resource we can only compare with BERTBase model instead of BERTLarge Prediction layer is attached at the end of the original BERTBase model and we fine tune it on our dataset In this section the named entity integration method is chosen to pure concatenation Concatenate the named entity information on pathology report text and query text first and then concatenate contextualized representation and concatenated named entity information Comparative results are summarized in Table TABREF23 Table TABREF23 indicates that our proposed model achieved the best performance both in EMscore and F1score with EMscore of 9184 and F1score of 9375 QANet outperformed BERTBase with 356 score in F1score but underperformed it with 075 score in EMscore Compared with BERTBase our model led to a 564 performance improvement in EMscore and 369 in F1score Although our model didnt outperform much with 
QANet in F1score only 013 our model significantly outperformed it with 639 score in EMscore To further investigate the effects of named entity information and twostage training mechanism for our model we apply ablation analysis to see the improvement brought by each of them where times refers to removing that part from our model As demonstrated in Table TABREF25 with named entity information enabled twostage training mechanism improved the result by 436 in EMscore and 38 in F1score Without twostage training mechanism named entity information led to an improvement by 128 in EMscore but it also led to a weak deterioration by 012 in F1score With both of them enabled our proposed model achieved a 564 score improvement in EMscore and a 369 score improvement in F1score The experimental results show that both named entity information and twostage training mechanism are helpful to our model There are two methods to integrate named entity information into existing model we experimentally compare these two integration methods As named entity recognition has been applied on both pathology report text and query text there will be two integration here One is for two named entity information and the other is for contextualized representation and integrated named entity information For multihead attention BIBREF30 we set heads number h 16 with 256dimension hidden vector size for each head From Table TABREF27 we can observe that applying concatenation on both periods achieved the best performance on both EMscore and F1score Unfortunately applying multihead attention on both period one and period two can not reach convergence in our experiments This probably because it makes the model too complex to train The difference on other two methods are the order of concatenation and multihead attention Applying multihead attention on two named entity information Int and Inq first achieved a better performance with 8987 in EMscore and 9288 in F1score Applying Concatenation first can only achieve 8074 in EMscore and 8442 in F1score This is probably due to the processing depth of hidden vectors and dataset size BERTs output has been modified after many layers but named entity information representation is very close to input With big amount of parameters in multihead attention it requires massive training to find out the optimal parameters However our dataset is significantly smaller than what pretrained BERT uses This probably can also explain why applying multihead attention method on both periods can not converge Although Table TABREF27 shows the best integration method is concatenation multihead attention still has great potential Due to the lack of computational resources our experiment fixed the head number and hidden vector size However tuning these hyper parameters may have impact on the result Tuning integration method and try to utilize larger datasets may give help to improving the performance To investigate how shared task and shared model can benefit we split our dataset by query types train our proposed model with different datasets and demonstrate their performance on different datasets Firstly we investigate the performance on model without twostage training and named entity information As indicated in Table TABREF30 The model trained by mixed data outperforms 2 of the 3 original tasks in EMscore with 8155 for proximal resection margin and 8685 for distal resection margin The performance on tumor size declined by 157 score in EMscore and 314 score in F1score but they were still above 90 069 and 037 
score improvement in EMscore was brought by shared model for proximal and distal resection margin prediction Meanwhile F1score for those two tasks declined 311 and 077 score Then we investigate the performance on model with twostage training and named entity information In this experiment pretraining process only use the specific dataset not the mixed data From Table TABREF31 we can observe that the performance on proximal and distal resection margin achieved the best performance on both EMscore and F1score Compared with Table TABREF30 the best performance on proximal resection margin improved by 69 in EMscore and 794 in F1score Meanwhile the best performance on distal resection margin improved by 556 in EMscore and 632 in F1score Other performances also usually improved a lot This proves the usefulness of twostage training and named entity information as well Lastly we fine tune the model for each task with a pretrained parameter Table TABREF32 summarizes the result Add some explanations for the Table TABREF32 Comparing Table TABREF32 with Table TABREF31 using mixeddata pretrained parameters can significantly improve the model performance than taskspecific data trained model Except tumor size the result was improved by 052 score in EMscore 139 score in F1score for proximal resection margin and 26 score in EMscore 296 score in F1score for distal resection margin This proves mixeddata pretrained parameters can lead to a great benefit for specific task Meanwhile the model performance on other tasks which are not trained in the final stage was also improved from around 0 to 60 or 70 percent This proves that there is commonality between different tasks and our proposed QACTS task make this learnable In conclusion to achieve the best performance for a specific dataset pretraining the model in multiple datasets and then fine tuning the model on the specific dataset is the best way In this paper we present a question answering based clinical text structuring QACTS task which unifies different clinical text structuring tasks and utilize different datasets A novel model is also proposed to integrate named entity information into a pretrained language model and adapt it to QACTS task Initially sequential results of named entity recognition on both paragraph and query texts are integrated together Contextualized representation on both paragraph and query texts are transformed by a pretrained language model Then the integrated named entity information and contextualized representation are integrated together and fed into a feed forward network for final prediction Experimental results on realworld dataset demonstrate that our proposed model competes favorably with strong baseline models in all three specific tasks The shared task and shared model introduced by QACTS task has also been proved to be useful for improving the performance on most of the taskspecific datasets In conclusion the best way to achieve the best performance for a specific dataset is to pretrain the model in multiple datasets and then fine tune it on the specific dataset We would like to thank Ting Li and Xizhou Hong Ruijin Hospital who have helped us very much in data fetching and data cleansing This work is supported by the National Key RD Program of China for Precision Medical Research No 2018YFC0910500 </s>
<s>[INST] Progress and Tradeoffs in Neural Language Models [/INST] Deep learning has unquestionably advanced the state of the art in many natural language processing tasks from syntactic dependency parsing BIBREF0 to namedentity recognition BIBREF1 to machine translation BIBREF2 The same certainly applies to language modeling where recent advances in neural language models NLMs have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 BIBREF4 Specifically focused on language modeling this paper examines an issue that to our knowledge has not been explored advances in neural language models have come at a significant cost in terms of increased computational complexity Computing the probability of a token sequence using nonneural techniques requires a number of phrase lookups and perhaps a few arithmetic operations whereas model inference with NLMs require large matrix multiplications consuming perhaps millions of floating point operations FLOPs These performance tradeoffs are worth discussing In truth language models exist in a qualityperformance tradeoff space As model quality increases eg lower perplexity performance as measured in terms of energy consumption query latency etc tends to decrease For applications primarily running in the cloudsay machine translationpractitioners often solely optimize for the lowest perplexity This is because such applications are embarrassingly parallel and hence trivial to scale in a data center environment There are however applications of NLMs that require less onesided optimizations On mobile devices such as smartphones and tablets for example NLMs may be integrated into software keyboards for nextword prediction allowing much faster text entry Popular Android apps that enthusiastically tout this technology include SwiftKey and Swype The greater computational costs of NLMs lead to higher energy usage in model inference translating into shorter battery life In this paper we examine the qualityperformance tradeoff in the shift from nonneural to neural language models In particular we compare KneserNey smoothing widely accepted as the state of the art prior to NLMs to the best NLMs today The decrease in perplexity on standard datasets has been well documented BIBREF3 but to our knowledge no one has examined the performances tradeoffs With deployment on a mobile device in mind we evaluate energy usage and inference latency on a Raspberry Pi which shares the same ARM architecture as nearly all smartphones today We find that a 25 times reduction in perplexity on PTB comes at a staggering cost in terms of performance inference with NLMs takes 49 times longer and requires 32 times more energy Furthermore we find that impressive reductions in perplexity translate into at best modest improvements in nextword prediction which is arguable a better metric for evaluating software keyboards on a smartphone The contribution of this paper is the first known elucidation of this qualityperformance tradeoff Note that we refrain from prescriptive recommendations whether or not a tradeoff is worthwhile depends on the application Nevertheless NLP engineers should arguably keep these tradeoffs in mind when selecting a particular operating point BIBREF3 evaluate recent neural language models however their focus is not on the computational footprint of each model but rather the perplexity To further reduce perplexity many neural language model extensions exist such as continuous cache pointer BIBREF5 and mixture of softmaxes BIBREF6 
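As a concrete aside on the cost contrast drawn in the introduction, scoring a sequence with a pretrained n-gram model amounts to a handful of table lookups and additions per token; the sketch below assumes the kenlm Python bindings and a hypothetical ARPA file (the experiments in this paper build their 5-gram with SRILM instead), so the path and names are illustrative only
```python
import kenlm

# hypothetical path to a Kneser-Ney 5-gram model in ARPA format
model = kenlm.Model("kn5.arpa")

sentence = "the cat sat on the mat"
# total log10 probability of the sentence, including begin/end-of-sentence markers
log10_prob = model.score(sentence, bos=True, eos=True)
n_tokens = len(sentence.split()) + 1          # + 1 for the scored </s> token
perplexity = 10.0 ** (-log10_prob / n_tokens)
print(perplexity)
```
A neural language model, by contrast, must run its full stack of matrix multiplications for every predicted token, which is the asymmetry the latency and energy measurements later in the paper quantify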
Since our focus is on comparing core neural and non-neural approaches, we disregard these extra optimization techniques in all of our models. Other work focuses on designing lightweight models for resource-efficient inference on mobile devices: BIBREF7 explore LSTMs BIBREF8 with binary weights for language modeling, and BIBREF9 examine shallow feedforward neural networks for natural language processing. AWD-LSTM. BIBREF4 show that a simple three-layer LSTM, with proper regularization and optimization techniques, can achieve state of the art on various language modeling datasets, surpassing more complex models. Specifically, BIBREF4 apply randomized backpropagation through time, variational dropout, activation regularization, embedding dropout, and temporal activation regularization. A novel scheduler for optimization, non-monotonically triggered ASGD (NT-ASGD), is also introduced. BIBREF4 name their three-layer LSTM model trained with such tricks AWD-LSTM. Quasi-Recurrent Neural Networks. Quasi-recurrent neural networks (QRNNs) BIBREF10 achieve the current state of the art in word-level language modeling BIBREF11. A quasi-recurrent layer comprises two separate parts: a convolution layer with three weights and a recurrent pooling layer. Given an input $\mathbf{X} \in \mathbb{R}^{k \times n}$, the convolution layer is
$$\mathbf{Z} = \tanh(\mathbf{W}_z \cdot \mathbf{X})$$
$$\mathbf{F} = \sigma(\mathbf{W}_f \cdot \mathbf{X})$$
$$\mathbf{O} = \sigma(\mathbf{W}_o \cdot \mathbf{X})$$
where $\sigma$ denotes the sigmoid function, $\cdot$ represents masked convolution across time, and $\mathbf{W}_{\{z,f,o\}} \in \mathbb{R}^{m \times k \times r}$ are convolution weights with $k$ input channels, $m$ output channels, and a window size of $r$. In the recurrent pooling layer, the convolution outputs are combined sequentially:
$$\mathbf{c}_t = \mathbf{f}_t \odot \mathbf{c}_{t-1} + (1 - \mathbf{f}_t) \odot \mathbf{z}_t$$
$$\mathbf{h}_t = \mathbf{o}_t \odot \mathbf{c}_t$$
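To make the layer above concrete, here is a minimal PyTorch sketch of a single quasi-recurrent layer with fo-pooling. It is an illustration only, not the implementation used in the experiments (those follow the official codebase from BIBREF11); the class name, tensor layout, and toy dimensions are our own assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class QRNNLayer(nn.Module):
    """Minimal quasi-recurrent layer: masked 1-D convolution followed by fo-pooling."""

    def __init__(self, input_size, hidden_size, window=2):
        super().__init__()
        self.window = window
        self.hidden_size = hidden_size
        # One convolution producing the Z, F, O candidate vectors jointly.
        self.conv = nn.Conv1d(input_size, 3 * hidden_size, kernel_size=window)

    def forward(self, x):
        # x: (batch, seq_len, input_size)
        x = x.transpose(1, 2)                      # (batch, input_size, seq_len)
        # Left-pad so the convolution is "masked": position t sees only t-window+1 .. t.
        x = F.pad(x, (self.window - 1, 0))
        z, f, o = self.conv(x).chunk(3, dim=1)     # each (batch, hidden, seq_len)
        z, f, o = torch.tanh(z), torch.sigmoid(f), torch.sigmoid(o)

        # fo-pooling: c_t = f_t * c_{t-1} + (1 - f_t) * z_t ;  h_t = o_t * c_t
        c = torch.zeros(x.size(0), self.hidden_size, device=x.device)
        outputs = []
        for t in range(z.size(2)):
            c = f[:, :, t] * c + (1 - f[:, :, t]) * z[:, :, t]
            outputs.append(o[:, :, t] * c)
        return torch.stack(outputs, dim=1)         # (batch, seq_len, hidden_size)


# Toy usage: a batch of 4 sequences, 10 steps, 128-dim inputs, 256 hidden units.
layer = QRNNLayer(input_size=128, hidden_size=256, window=2)
h = layer(torch.randn(4, 10, 128))
print(h.shape)  # torch.Size([4, 10, 256])
```

Only the element-wise pooling loop is sequential; the three convolutions are computed for all time steps at once, which is the main reason QRNNs run faster than LSTMs of comparable size.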
Multiple QRNN layers can be stacked for deeper hierarchical representation, with the output $\mathbf{h}^1_t$ being fed as the input into the subsequent layer. In language modeling, a four-layer QRNN is a standard architecture BIBREF11. Perplexity-Recall Scale. Word-level perplexity does not have a strictly monotonic relationship with recall-at-$k$, the fraction of top-$k$ predictions that contain the correct word. A given R@$k$ imposes only a weak minimum perplexity constraint: there are many free parameters that allow for large variability in the perplexity given a certain R@$k$. Consider the corpus "choo choo train" with an associated unigram model $P(\text{choo}) = 0.1$, $P(\text{train}) = 0.9$, resulting in an R@1 of 1/3 and perplexity of 4.8. Clearly, R@1 $= 1/3$ for all $P(\text{choo}) \le 0.5$; thus perplexity can drop as low as 2 without affecting recall. We conducted our experiments on Penn Treebank (PTB) BIBREF12 and WikiText-103 (WT103) BIBREF13. Preprocessed by BIBREF14, PTB contains 887K tokens for training, 70K for validation, and 78K for test, with a vocabulary size of 10,000. On the other hand, WT103 comprises 103 million tokens for training, 217K for validation, and 245K for test, spanning a vocabulary of 267K unique tokens. For the neural language model, we used a four-layer QRNN BIBREF10, which achieves state-of-the-art results on a variety of datasets such as WT103 BIBREF11 and PTB. To compare against more common LSTM architectures, we also evaluated AWD-LSTM BIBREF4 on PTB. For the non-neural approach, we used a standard five-gram model with modified Kneser-Ney smoothing BIBREF15, as explored in BIBREF16 on PTB. We denote the QRNN models for PTB and WT103 as ptb-qrnn and wt103-qrnn, respectively. For each model, we examined word-level perplexity, R@3 in next-word prediction, latency (ms/q), and energy usage (mJ/q). To explore the perplexity-recall relationship, we collected individual perplexity and recall statistics for each sentence in the test set. The QRNN models followed the exact training procedure and architecture delineated in the official codebase from BIBREF11. For ptb-qrnn, we trained the model for 550 epochs using NT-ASGD BIBREF4, then finetuned for 300 epochs using ASGD BIBREF17, all with a learning rate of 30 throughout. For wt103-qrnn, we followed BIBREF11 and trained the QRNN for 14 epochs using the Adam optimizer with a learning rate of $10^{-3}$. We also applied regularization techniques from BIBREF4; all the specific hyperparameters are the same as those in the repository. Our model architecture consists of 400-dimensional tied embedding weights BIBREF18 and four QRNN layers, with 1550 hidden units per layer on PTB and 2500 per layer on WT103. Both QRNN models have window sizes of $r = 2$ for the first layer and $r = 1$ for the rest. For the KN5 model, we trained an off-the-shelf five-gram model using the popular SRILM toolkit BIBREF19. We did not specify any special hyperparameters. We trained the QRNNs with PyTorch 0.4.0 (commit 1807bac) on a Titan V GPU. To evaluate the models under a resource-constrained environment, we deployed them on a Raspberry Pi 3 Model B running Raspbian Stretch (kernel 4.9.41-v7). The Raspberry Pi (RPi) is not only a standard platform but also a close surrogate to mobile phones, since the same Cortex-A7 is used in many phones. We then transferred the trained models to the RPi, using the same frameworks for evaluation. We plugged the RPi into a Watts Up Pro meter, a power meter that can be read programmatically over USB at a frequency of 1 Hz. For the QRNNs, we used the first 350 words of the test set and averaged the ms/query and mJ/query. For KN5, we used the entire test set for evaluation, since the latency was much lower. To
adjust for the base power load, we subtracted the idle power draw from the energy usage. For a different perspective, we further evaluated all the models under a desktop environment, using an i7-4790k CPU and a Titan V GPU. Because the base power load for powering a desktop is much higher than that of running neural language models, we collected only latency statistics. We used the entire test set, since the QRNN runs quickly. In addition to energy and latency, another consideration for the NLP developer selecting an operating point is the cost of the underlying hardware. For our setup, the RPi costs 35 USD, the CPU costs 350 USD, and the GPU costs 3000 USD. To demonstrate the effectiveness of the QRNN models, we present the results of past and current state-of-the-art neural language models in Table 1; we report the Skip- and AWD-LSTM results as seen in the original papers, while the QRNN results are our own. Skip LSTM denotes the four-layer Skip LSTM in BIBREF3. BIBREF20 focus on Hebbian softmax, a model extension technique; Rae-LSTM refers to their base LSTM model without any extensions. In our results, KN5 refers to the traditional five-gram model with modified Kneser-Ney smoothing, and AWD is shorthand for AWD-LSTM. Perplexity-recall scale. In Figure 1, using KN5 as the model, we plot the log perplexity (cross entropy) and R@3 error ($1 - \text{R@3}$) for every sentence in PTB and WT103. The horizontal clusters arise from multiple perplexity points representing the same R@3 value, as explained in Section "Infrastructure". We also observe that the perplexity-recall scale is non-linear; instead, log perplexity appears to have a moderate linear relationship with R@3 error on PTB ($r = 0.85$) and an even stronger relationship on WT103 ($r = 0.94$). This is partially explained by WT103 having much longer sentences and thus less noisy statistics. From Figure 1 we find that the QRNN models yield strongly linear log perplexity-recall plots as well, where $r = 0.88$ and $r = 0.93$ for PTB and WT103, respectively. Note that, due to the improved model quality over KN5, the point clouds are shifted downward compared to Figure 1. We conclude that log perplexity, or cross entropy, provides a more human-understandable indicator of R@3 than perplexity does. Overall, these findings agree with those from BIBREF21, which explores the log perplexity-word error rate scale in language modeling for speech recognition. Quality-performance tradeoff. In Table 2, from left to right, we report perplexity results on the validation and test sets, R@3 on test, and finally per-query latency and energy usage. On the RPi, KN5 is both fast and power-efficient to run, using only about 7 ms/query and 6 mJ/query for PTB (Table 2, row 1), and 264 ms/q and 229 mJ/q on WT103 (row 5). Taking 220 ms/query and consuming 300 mJ/query, AWD-LSTM and ptb-qrnn are still viable for mobile phones: the modern smartphone holds upwards of 10,000 joules BIBREF22, and the latency is within usability standards BIBREF23. Nevertheless, the models are still 49 times slower and 32 times more power-hungry than KN5. The wt103-qrnn model is completely unusable on phones, taking over 12 seconds per next-word prediction. Neural models achieve perplexity drops of 60-80% and R@3 increases of 22-34%, but these improvements come at a much higher cost in latency and energy usage. In Table 2 (last two columns), the desktop yields very different results: the neural models on PTB (rows 2-3) are 9 times slower than KN5, but the absolute latency is only 8 ms/q, which is still much faster than what humans perceive as instantaneous BIBREF23. If a high-end commodity GPU is available, then the models are only twice as slow as KN5 is. From row 5, even better
results are noted with wt103-qrnn: on the CPU, the QRNN is only 60% slower than KN5 is, while the model is faster by 11 times on a GPU. These results suggest that, if only latency is considered under a commodity desktop environment, the QRNN model is humanly indistinguishable from the KN5 model, even without using GPU acceleration. In the present work, we describe and examine the tradeoff space between quality and performance for the task of language modeling. Specifically, we explore the quality-performance tradeoffs between KN5, a non-neural approach, and AWD-LSTM and QRNN, two neural language models. We find that with decreased perplexity comes vastly increased computational requirements: in one of the NLMs, a perplexity reduction by 2.5 times results in a 49 times rise in latency and a 32 times increase in energy usage, when compared to KN5. </s>
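As a complementary note on the perplexity-recall analysis above, the per-sentence statistics can be computed directly from a model's per-token distributions. The sketch below is illustrative rather than the actual evaluation code; the array-based interface and the toy check are our own assumptions.

```python
import numpy as np


def sentence_stats(token_ids, probs, k=3):
    """Per-sentence log perplexity (cross entropy) and recall-at-k.

    token_ids: list of the correct next-token ids, length T
    probs: array of shape (T, V); probs[t] is the model's distribution
           over the vocabulary before observing token t
    """
    probs = np.asarray(probs)
    T = len(token_ids)
    # Cross entropy / log perplexity: mean negative log-probability of the correct tokens.
    log_ppl = -np.mean([np.log(probs[t, token_ids[t]]) for t in range(T)])
    # R@k: fraction of positions where the correct token is among the top-k predictions.
    topk = np.argsort(-probs, axis=1)[:, :k]
    recall_at_k = np.mean([token_ids[t] in topk[t] for t in range(T)])
    return log_ppl, recall_at_k


# Toy check with the unigram example from the text: vocabulary {choo: 0, train: 1},
# P(choo) = 0.1, P(train) = 0.9, corpus "choo choo train".
dist = np.array([[0.1, 0.9]] * 3)
log_ppl, r1 = sentence_stats([0, 0, 1], dist, k=1)
print(np.exp(log_ppl), r1)  # ~4.8 and 0.333...
```

Running the toy check reproduces the unigram example from the text: a perplexity of about 4.8 with an R@1 of 1/3.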
<s>[INST] Stay On-Topic: Generating Context-specific Fake Restaurant Reviews [/INST] Automatically generated fake reviews have only recently become natural enough to fool human readers Yao et al BIBREF0 use a deep neural network a socalled 2layer LSTM BIBREF1 to generate fake reviews and concluded that these fake reviews look sufficiently genuine to fool native English speakers They train their model using real restaurant reviews from yelpcom BIBREF2 Once trained the model is used to generate reviews characterbycharacter Due to the generation methodology it cannot be easily targeted for a specific context meaningful side information Consequently the review generation process may stray offtopic For instance when generating a review for a Japanese restaurant in Las Vegas the review generation process may include references to an Italian restaurant in Baltimore The authors of BIBREF0 apply a postprocessing step customization which replaces foodrelated words with more suitable ones sampled from the targeted restaurant The word replacement strategy has drawbacks it can miss certain words and replace others independent of their surrounding words which may alert savvy readers As an example when we applied the customization technique described in BIBREF0 to a review for a Japanese restaurant it changed the snippet garlic knots for breakfast with garlic knots for sushi We propose a methodology based on neural machine translation NMT that improves the generation process by defining a context for the each generated fake review Our context is a cleartext sequence of the review rating restaurant name city state and food tags eg Japanese Italian We show that our technique generates review that stay on topic We can instantiate our basic technique into several variants We vet them on Amazon Mechanical Turk and find that native English speakers are very poor at recognizing our fake generated reviews For one variant the participants performance is close to random the classaveraged Fscore of detection is INLINEFORM0 whereas random would be INLINEFORM1 given the 16 imbalance in the test Via a user study with experienced highly educated participants we compare this variant which we will henceforth refer to as NMTFake reviews with fake reviews generated using the charLSTMbased technique from BIBREF0 We demonstrate that NMTFake reviews constitute a new category of fake reviews that cannot be detected by classifiers trained only using previously known categories of fake reviews BIBREF0 BIBREF3 BIBREF4 Therefore NMTFake reviews may go undetected in existing online review sites To meet this challenge we develop an effective classifier that detects NMTFake reviews effectively 97 Fscore Our main contributions are Fake reviews Usergenerated content BIBREF5 is an integral part of the contemporary user experience on the web Sites like tripadvisorcom yelpcom and Google Play use userwritten reviews to provide rich information that helps other users choose where to spend money and time User reviews are used for rating services or products and for providing qualitative opinions User reviews and ratings may be used to rank services in recommendations Ratings have an affect on the outwards appearance Already 8 years ago researchers estimated that a onestar rating increase affects the business revenue by 5 9 on yelpcom BIBREF6 Due to monetary impact of usergenerated content some businesses have relied on socalled crowdturfing agents BIBREF7 that promise to deliver positive ratings written by workers to a customer in exchange 
for a monetary compensation Crowdturfing ethics are complicated For example Amazon community guidelines prohibit buying content relating to promotions but the act of writing fabricated content is not considered illegal nor is matching workers to customers BIBREF8 Year 2015 approximately 20 of online reviews on yelpcom were suspected of being fake BIBREF9 Nowadays usergenerated review sites like yelpcom use filters and fraudulent review detection techniques These factors have resulted in an increase in the requirements of crowdturfed reviews provided to review sites which in turn has led to an increase in the cost of highquality review Due to the cost increase researchers hypothesize the existence of neural networkgenerated fake reviews These neuralnetworkbased fake reviews are statistically different from humanwritten fake reviews and are not caught by classifiers trained on these BIBREF0 Detecting fake reviews can either be done on an individual level or as a systemwide detection tool ie regulation Detecting fake online content on a personal level requires knowledge and skills in critical reading In 2017 the National Literacy Trust assessed that young people in the UK do not have the skillset to differentiate fake news from real news BIBREF10 For example 20 of children that use online news sites in age group 1215 believe that all information on news sites are true Neural Networks Neural networks are function compositions that map input data through INLINEFORM0 subsequent layers DISPLAYFORM0 where the functions INLINEFORM0 are typically nonlinear and chosen by experts partly for known good performance on datasets and partly for simplicity of computational evaluation Language models LMs BIBREF11 are generative probability distributions that assign probabilities to sequences of tokens INLINEFORM1 DISPLAYFORM0 such that the language model can be used to predict how likely a specific token at time step INLINEFORM0 is based on the INLINEFORM1 previous tokens Tokens are typically either words or characters For decades deep neural networks were thought to be computationally too difficult to train However advances in optimization hardware and the availability of frameworks have shown otherwise BIBREF1 BIBREF12 Neural language models NLMs have been one of the promising application areas NLMs are typically various forms of recurrent neural networks RNNs which pass through the data sequentially and maintain a memory representation of the past tokens with a hidden context vector There are many RNN architectures that focus on different ways of updating and maintaining context vectors Long ShortTerm Memory units LSTM and Gated Recurrent Units GRUs are perhaps most popular Neural LMs have been used for freeform text generation In certain application areas the quality has been high enough to sometimes fool human readers BIBREF0 Encoderdecoder seq2seq models BIBREF13 are architectures of stacked RNNs which have the ability to generate output sequences based on input sequences The encoder network reads in a sequence of tokens and passes it to a decoder network a LM In contrast to simpler NLMs encoderdecoder networks have the ability to use additional context for generating text which enables more accurate generation of text Encoderdecoder models are integral in Neural Machine Translation NMT BIBREF14 where the task is to translate a source text from one language to another language NMT models additionally use beam search strategies to heuristically search the set of possible translations Training datasets 
are parallel corpora large sets of paired sentences in the source and target languages The application of NMT techniques for online machine translation has significantly improved the quality of translations bringing it closer to human performance BIBREF15 Neural machine translation models are efficient at mapping one expression to another onetoone mapping Researchers have evaluated these models for conversation generation BIBREF16 with mixed results Some researchers attribute poor performance to the use of the negative log likelihood cost function during training which emphasizes generation of highconfidence phrases rather than diverse phrases BIBREF17 The results are often generic text which lacks variation Li et al have suggested various augmentations to this among others suppressing typical responses in the decoder language model to promote response diversity BIBREF17 We discuss the attack model our generative machine learning method and controlling the generative process in this section Wang et al BIBREF7 described a model of crowdturfing attacks consisting of three entities customers who desire to have fake reviews for a particular target eg their restaurant on a particular platform eg Yelp agents who offer fake review services to customers and workers who are orchestrated by the agent to compose and post fake reviews Automated crowdturfing attacks ACA replace workers by a generative model This has several benefits including better economy and scalability human workers are more expensive and slower and reduced detectability agent can better control the rate at which fake reviews are generated and posted We assume that the agent has access to public reviews on the review platform by which it can train its generative model We also assume that it is easy for the agent to create a large number of accounts on the review platform so that accountbased detection or ratelimiting techniques are ineffective against fake reviews The quality of the generative model plays a crucial role in the attack Yao et al BIBREF0 propose the use of a characterbased LSTM as base for generative model LSTMs are not conditioned to generate reviews for a specific target BIBREF1 and may mixup concepts from different contexts during freeform generation Mixing contextually separate words is one of the key criteria that humans use to identify fake reviews These may result in violations of known indicators for fake content BIBREF18 For example the review content may not match prior expectations nor the information need that the reader has We improve the attack model by considering a more capable generative model that produces more appropriate reviews a neural machine translation NMT model We propose the use of NMT models for fake review generation The method has several benefits 1 the ability to learn how to associate context keywords to reviews 2 fast training time and 3 a highdegree of customization during production time eg introduction of specific waiter or food items names into reviews NMT models are constructions of stacked recurrent neural networks RNNs They include an encoder network and a decoder network which are jointly optimized to produce a translation of one sequence to another The encoder rolls over the input data in sequence and produces one INLINEFORM0 dimensional context vector representation for the sentence The decoder then generates output sequences based on the embedding vector and an attention module which is taught to associate output words with certain input words The generation typically 
continues until a specific EOS end of sentence token is encountered The review length can be controlled in many ways eg by setting the probability of generating the EOS token to zero until the required length is reached NMT models often also include a beam search BIBREF14 which generates several hypotheses and chooses the best ones amongst them In our work we use the greedy beam search technique We forgo the use of additional beam searches as we found that the quality of the output was already adequate and the translation phase time consumption increases linearly for each beam used We use the Yelp Challenge dataset BIBREF2 for our fake review generation The dataset Aug 2017 contains 29 million 1 5 star restaurant reviews We treat all reviews as genuine humanwritten reviews for the purpose of this work since widescale deployment of machinegenerated review attacks are not yet reported Sep 2017 BIBREF19 As preprocessing we remove nonprintable nonASCII characters and excessive whitespace We separate punctuation from words We reserve 15000 reviews for validation and 3000 for testing and the rest we use for training NMT models require a parallel corpus of source and target sentences ie a large set of source targetpairs We set up a parallel corpus by constructing context reviewpairs from the dataset Next we describe how we created our input context The Yelp Challenge dataset includes metadata about restaurants including their names food tags cities and states these restaurants are located in For each restaurant review we fetch this metadata and use it as our input context in the NMT model The corresponding restaurant review is similarly set as the target sentence This method produced 29 million pairs of sentences in our parallel corpus We show one example of the parallel training corpus in Example 1 below 5 Public House Las Vegas NV Gastropubs Restaurants Excellent food and service Pricey but well worth it I would recommend the bone marrow and sampler platter for appetizers endverbatim noindent The order textbfrating name city state tags is kept constant Training the model conditions it to associate certain sequences of words in the input sentence with others in the output subsubsectionTraining Settings We train our NMT model on a commodity PC with a i74790k CPU 400GHz with 32GB RAM and one NVidia GeForce GTX 980 GPU Our system can process approximately 1300 textendash 1500 source tokenss and approximately 5730 textendash 5830 output tokenss Training one epoch takes in average 72 minutes The model is trained for 8 epochs ie over night We call fake review generated by this model emphNMTFake reviews We only need to train one model to produce reviews of different ratings We use the training settings adam optimizer citekingma2014adam with the suggested learning rate 0001 citeklein2017opennmt For most parts parameters are at their default values Notably the maximum sentence length of input and output is 50 tokens by default We leverage the framework openNMTpy citeklein2017opennmt to teach the our NMT model We list used openNMTpy commands in Appendix TablereftableopenNMTpycommands beginfiguret begincenter begintabular l hline Example 2 Greedy NMT Great food underlinegreat service underlinegreat textittextitbeer selection I had the textitGastropubs burger and it was delicious The underlinetextitbeer selection was also underlinegreat Example 3 NMTFake I love this restaurant Great food great service Its textita little pricy but worth it for the textitquality of the textitbeer and atmosphere you can see in 
textitVegas hline endtabular labeltableoutputcomparison endcenter captionNaive text generation with NMT vs generation using our NTM model Repetitive patterns are underlineunderlined Contextual words are emphitalicized Both examples here are generated based on the context given in Example1 labelfigcomparison endfigure subsectionControlling generation of fake reviews labelsecgenerating Greedy NMT beam searches are practical in many NMT cases However the results are simply repetitive when naively applied to fake review generation See Example2 in Figurereffigcomparison The NMT model produces many emphhighconfidence word predictions which are repetitive and obviously fake We calculated that in fact 43 of the generated sentences started with the phrase Great food The lack of diversity in greedy use of NMTs for text generation is clear beginalgorithmb KwDataDesired review context Cmathrminput given as cleartext NMT model KwResultGenerated review out for input context Cmathrminput set b03 lambda5 alphafrac23 pmathrmtypo pmathrmspell log p leftarrow textNMTdecodeNMTencodeCmathrminputtext out leftarrow i leftarrow 0 log p leftarrow textAugmentlog p b lambda 1 0 random penalty Whilei0 or oi not EOS log Tildep leftarrow textAugmentlog p b lambda alpha oi i start memory penalty oi leftarrow textNMTbeamlog Tildep out outappendoi i leftarrow i1 textreturntextObfuscateoutpmathrmtypopmathrmspell captionGeneration of NMTFake reviews labelalgbase endalgorithm In this work we describe how we succeeded in creating more diverse and less repetitive generated reviews such as Example 3 in Figurereffigcomparison We outline pseudocode for our methodology of generating fake reviews in Algorithmrefalgbase There are several parameters in our algorithm The details of the algorithm will be shown later We modify the openNMTpy translation phase by changing logprobabilities before passing them to the beam search We notice that reviews generated with openNMTpy contain almost no language errors As an optional postprocessing step we obfuscate reviews by introducing natural typosmisspellings randomly In the next sections we describe how we succeeded in generating more natural sentences from our NMT model ie generating reviews like Example3 instead of reviews like Example2 subsubsectionVariation in word content Example 2 in Figurereffigcomparison repeats commonly occurring words given for a specific context eg textitgreat food service beer selection burger for Example1 Generic review generation can be avoided by decreasing probabilities loglikelihoods citemurphy2012machine of the generators LM the decoder We constrain the generation of sentences by randomly emphimposing penalties to words We tried several forms of added randomness and found that adding constant penalties to a emphrandom subset of the target words resulted in the most natural sentence flow We call these penalties emphBernoulli penalties since the random variables are chosen as either 1 or 0 on or off paragraphBernoulli penalties to language model To avoid generic sentences components we augment the default language model pcdot of the decoder by beginequation log Tildeptk log ptk ti dots t1 lambda q endequation where q in RV is a vector of Bernoullidistributed random values that obtain values 1 with probability b and value 0 with probability 1bi and lambda 0 Parameter b controls how much of the vocabulary is forgotten and lambda is a soft penalty of including forgotten words in a review lambda qk emphasizes sentence forming with nonpenalized words The randomness 
is reset at the start of generating a new review Using Bernoulli penalties in the language model we can forget a certain proportion of words and essentially force the creation of less typical sentences We will test the effect of these two parameters the Bernoulli probability b and loglikelihood penalty of including forgotten words lambda with a user study in Sectionrefsecvarying paragraphStart penalty We introduce start penalties to avoid generic sentence starts eg Great food great service Inspired by citeli2016diversity we add a random start penalty lambda smathrmi to our language model which decreases monotonically for each generated token We set alpha leftarrow 066 as its effect decreases by 90 every 5 words generated paragraphPenalty for reusing words Bernoulli penalties do not prevent excessive use of certain words in a sentence such as textitgreat in Example2 To avoid excessive reuse of words we included a memory penalty for previously used words in each translation Concretely we add the penalty lambda to each word that has been generated by the greedy search subsubsectionImproving sentence coherence labelsecgrammar We visually analyzed reviews after applying these penalties to our NMT model While the models were clearly diverse they were emphincoherent the introduction of random penalties had degraded the grammaticality of the sentences Amongst others the use of punctuation was erratic and pronouns were used semantically wrongly eg emphhe emphshe might be replaced as could andbut To improve the authenticity of our reviews we added several emphgrammarbased rules English language has several classes of words which are important for the natural flow of sentences We built a list of common pronouns eg I them our conjunctions eg and thus if punctuation eg and apply only half memory penalties for these words We found that this change made the reviews more coherent The pseudocode for this and the previous step is shown in Algorithmrefalgaug The combined effect of grammarbased rules and LM augmentation is visible in Example3 Figurereffigcomparison beginalgorithmt KwDataInitial log LM log p Bernoulli probability b softpenalty lambda monotonic factor alpha last generated token oi grammar rules set G KwResultAugmented log LM log Tildep beginalgorithmic1 Procedure Augmentlog p b lambda alpha oi i generate Pmathrm1N leftarrow BernoullibtextOne value in 01textper token I leftarrow P0 Select positive indices log Tildep leftarrow textDiscountlog p I lambda cdot alphaiG start penalty log Tildep leftarrow textDiscountlog Tildep oi lambdaG memory penalty textbfreturnlog Tildep EndProcedure Procedure Discountlog p I lambda G StateFori in I eIfoi in G log pi leftarrow log pi lambda2 log pi leftarrow log pi lambda textbfreturnlog p EndProcedure endalgorithmic captionPseudocode for augmenting language model labelalgaug endalgorithm subsubsectionHumanlike errors labelsecobfuscation We notice that our NMT model produces reviews without grammar mistakes This is unlike real human writers whose sentences contain two types of language mistakes 1 emphtypos that are caused by mistakes in the human motoric input and 2 emphcommon spelling mistakes We scraped a list of common English language spelling mistakes from Oxford dictionaryfootnoteurlhttpsenoxforddictionariescomspellingcommonmisspellings and created 80 rules for randomly emphreintroducing spelling mistakes Similarly typos are randomly reintroduced based on the weighted edit distancefootnoteurlhttpspypipythonorgpypiweightedlevenshtein01 such that typos 
resulting in real English words with small perturbations are emphasized We use autocorrection toolsfootnoteurlhttpspypipythonorgpypiautocorrect010 for finding these words We call these augmentations emphobfuscations since they aim to confound the reader to think a human has written them We omit the pseudocode description for brevity subsectionExperiment Varying generation parameters in our NMT model labelsecvarying Parameters b and lambda control different aspects in fake reviews We show six different examples of generated fake reviews in Tablereftablecategories Here the largest differences occur with increasing values of b visibly the restaurant reviews become more extreme This occurs because a large portion of vocabulary is forgotten Reviews with b geq 07 contain more rare word combinations eg as punctuation and they occasionally break grammaticality experience was awesome Reviews with lower b are more generic they contain safe word combinations like Great place good service that occur in many reviews Parameter lambdas is more subtle it affects how random review starts are and to a degree the discontinuation between statements within the review We conducted an Amazon Mechanical Turk MTurk survey in order to determine what kind of NMTFake reviews are convincing to native English speakers We describe the survey and results in the next section begintableb captionSix different parametrizations of our NMT reviews and one example for each The context is 5 PFChang s Scottsdale AZ in all examples begincenter begintabular l l hline b lambda Example review for context hline hline 03 3 I love this location Great service great food and the best drinks in Scottsdale The staff is very friendly and always remembers u when we come inhline 03 5 Love love the food here I always go for lunch They have a great menu and they make it fresh to order Great place good service and nice staffhline 05 4 I love their chicken lettuce wraps and fried rice The service is good they are always so polite They have great happy hour specials and they have a lot of optionshline 07 3 Great place to go with friends They always make sure your dining experience was awesome hline 07 5 Still havent ordered an entree before but today we tried them once both of us love this restauranthline 09 4 AMAZING Food was awesome with excellent service Loved the lettuce wraps Great drinks and wine Cant wait to go back so soon hline endtabular labeltablecategories endcenter endtable subsubsectionMTurk study labelsecamt We created 20 jobs each with 100 questions and requested master workers in MTurk to complete the jobs We randomly generated each survey for the participants Each review had a 50 chance to be real or fake The fake ones further were chosen among six 6 categories of fake reviews Tablereftablecategories The restaurant and the city was given as contextual information to the participants Our aim was to use this survey to understand how well Englishspeakers react to different parametrizations of NMTFake reviews Tablereftableamtpop in Appendix summarizes the statistics for respondents in the survey All participants were native English speakers from America The base rate 50 was revealed to the participants prior to the study We first investigated overall detection of any NMTFake reviews 1006 fake reviews and 994 real reviews We found that the participants had big difficulties in detecting our fake reviews In average the reviews were detected with classaveraged emphFscore of only 56 with 53 Fscore for fake review detection and 59 Fscore 
for real review detection The results are very close to emphrandom detection where precision recall and Fscore would each be 50 Results are recorded in TablereftableMTurksuper Overall the fake review generation is very successful since human detection rate across categories is close to random begintablet captionEffectiveness of Mechanical Turkers in distinguishing humanwritten reviews from fake reviews generated by our NMT model all variants begincenter begintabular c c c c c hline multicolumn5cClassification report hline Review Type Precision Recall Fscore Support hline hline Human 55 63 59 994 NMTFake 57 50 53 1006 hline endtabular labeltableMTurksuper endcenter endtable We noticed some variation in the detection of different fake review categories The respondents in our MTurk survey had most difficulties recognizing reviews of category b03 lambda5 where true positive rate was 404 while the true negative rate of the real class was 627 The precision were 16 and 86 respectively The classaveraged Fscore is 476 which is close to random Detailed classification reports are shown in TablereftableMTurksub in Appendix Our MTurkstudy shows that emphour NMTFake reviews pose a significant threat to review systems since emphordinary native Englishspeakers have very big difficulties in separating real reviews from fake reviews We use the review category b03 lambda5 for future user tests in this paper since MTurk participants had most difficulties detecting these reviews We refer to this category as NMTFake in this paper sectionEvaluation graphicspath figures We evaluate our fake reviews by first comparing them statistically to previously proposed types of fake reviews and proceed with a user study with experienced participants We demonstrate the statistical difference to existing fake review types citeyao2017automatedmukherjee2013yelprayana2015collective by training classifiers to detect previous types and investigate classification performance subsectionReplication of stateoftheart model LSTM labelsecrepl Yao et al citeyao2017automated presented the current stateoftheart generative model for fake reviews The model is trained over the Yelp Challenge dataset using a twolayer characterbased LSTM model We requested the authors of citeyao2017automated for access to their LSTM model or a fake review dataset generated by their model Unfortunately they were not able to share either of these with us We therefore replicated their model as closely as we could based on their paper and email correspondencefootnoteWe are committed to sharing our code with bonafide researchers for the sake of reproducibility We used the same graphics card GeForce GTX and trained using the same framework torchRNN in lua We downloaded the reviews from Yelp Challenge and preprocessed the data to only contain printable ASCII characters and filtered out nonrestaurant reviews We trained the model for approximately 72 hours We postprocessed the reviews using the customization methodology described in citeyao2017automated and email correspondence We call fake reviews generated by this model LSTMFake reviews subsectionSimilarity to existing fake reviews labelsecautomated We now want to understand how NMTFake reviews compare to a LSTM fake reviews and b humangenerated fake reviews We do this by comparing the statistical similarity between these classes For a Figurereffiglstm we use the Yelp Challenge dataset We trained a classifier using 5000 random reviews from the Yelp Challenge dataset human and 5000 fake reviews generated by LSTMFake Yao 
et al citeyao2017automated found that character features are essential in identifying LSTMFake reviews Consequently we use character features ngrams up to 3 For b Figurereffigshillwe the Yelp Shills dataset combination of YelpZip citemukherjee2013yelp YelpNYC citemukherjee2013yelp YelpChi citerayana2015collective This dataset labels entries that are identified as fraudulent by Yelps filtering mechanism shill reviewsfootnoteNote that shill reviews are probably generated by human shills citezhao2017news The rest are treated as genuine reviews from human users genuine We use 100000 reviews from each category to train a classifier We use features from the commercial psychometric tool LIWC2015 citepennebaker2015development to generated features In both cases we use AdaBoost with 200 shallow decision trees for training For testing each classifier we use a held out test set of 1000 reviews from both classes in each case In addition we test 1000 NMTFake reviews Figuresreffiglstm andreffigshill show the results The classification threshold of 50 is marked with a dashed line beginfigure beginsubfigureb05columnwidth includegraphicswidthcolumnwidthfigureslstmpng captionHumanLSTM reviews labelfiglstm endsubfigure beginsubfigureb05columnwidth includegraphicswidthcolumnwidthfiguresdistributionshillpng captionGenuineShill reviews labelfigshill endsubfigure caption Histogram comparison of NMTFake reviews with LSTMFake reviews and humangenerated emphgenuine and emphshill reviews Figurereffiglstm shows that a classifier trained to distinguish human vs LSTMFake cannot distinguish human vs NMTFake reviews Figurereffigshill shows NMTFake reviews are more similar to emphgenuine reviews than emphshill reviews labelfigstatisticalsimilarity endfigure We can see that our new generated reviews do not share strong attributes with previous known categories of fake reviews If anything our fake reviews are more similar to genuine reviews than previous fake reviews We thus conjecture that our NMTFake fake reviews present a category of fake reviews that may go undetected on online review sites subsectionComparative user study labelseccomparison We wanted to evaluate the effectiveness of fake reviews againsttechsavvy users who understand and know to expect machinegenerated fake reviews We conducted a user study with 20 participants all with computer science education and at least one university degree Participant demographics are shown in Tablereftableamtpop in the Appendix Each participant first attended a training session where they were asked to label reviews fake and genuine and could later compare them to the correct answers we call these participants emphexperienced participants No personal data was collected during the user study Each person was given two randomly selected sets of 30 of reviews a total of 60 reviews per person with reviews containing 10 textendash 50 words each Each set contained 26 87 real reviews from Yelp and 4 13 machinegenerated reviews numbers chosen based on suspicious review prevalence on Yelpcitemukherjee2013yelprayana2015collective One set contained machinegenerated reviews from one of the two models NMT b03 lambda5 or LSTM and the other set reviews from the other in randomized order The number of fake reviews was revealed to each participant in the study description Each participant was requested to mark four 4 reviews as fake Each review targeted a real restaurant A screenshot of that restaurants Yelp page was shown to each participant prior to the study Each participant evaluated reviews 
for one specific randomly selected restaurant An example of the first page of the user study is shown in Figurereffigscreenshot in Appendix beginfigureht centering includegraphicswidth7columnwidthdetection2png captionViolin plots of detection rate in comparative study Mean and standard deviations for number of detected fakes are 08pm07 for NMTFake and 25pm10 for LSTMFake n20 A sample of random detection is shown as comparison labelfigaalto endfigure Figurereffigaalto shows the distribution of detected reviews of both types A hypothetical random detector is shown for comparison NMTFake reviews are significantly more difficult to detect for our experienced participants In average detection rate recall is 20 for NMTFake reviews compared to 61 for LSTMbased reviews The precision and Fscore is the same as the recall in our study since participants labeled 4 fakes in each set of 30 reviews citemurphy2012machine The distribution of the detection across participants is shown in Figurereffigaalto emphThe difference is statistically significant with confidence level 99 Welchs ttest We compared the detection rate of NMTFake reviews to a random detector and find that emphour participants detection rate of NMTFake reviews is not statistically different from random predictions with 95 confidence level Welchs ttest sectionDefenses labelsecdetection We developed an AdaBoostbased classifier to detect our new fake reviews consisting of 200 shallow decision trees depth 2 The features we used are recorded in Tablereftablefeaturesadaboost Appendix We used wordlevel features based on spaCytokenization citehonnibaljohnson2015EMNLP and constructed ngram representation of POStags and dependency tree tags We added readability features from NLTKcitebird2004nltk beginfigureht centering includegraphicswidth7columnwidthobfscorefair2png caption Adaboostbased classification of NMTFake and humanwritten reviews Effect of varying b and lambda in fake review generation The variant native speakers had most difficulties detecting is well detectable by AdaBoost 97 labelfigadaboostmatrixblambda endfigure Figurereffigadaboostmatrixblambda shows our AdaBoost classifiers classaveraged Fscore at detecting different kind of fake reviews The classifier is very effective in detecting reviews that humans have difficulties detecting For example the fake reviews MTurk users had most difficulty detecting b03 lambda5 are detected with an excellent 97 Fscore The most important features for the classification were counts for frequently occurring words in fake reviews such as punctuation pronouns articles as well as the readability feature Automated Readability Index We thus conclude that while NMTFake reviews are difficult to detect for humans they can be well detected with the right tools sectionRelated Work Kumar and Shahcitekumar2018false survey and categorize false information research Automatically generated fake reviews are a form of emphopinionbased false information where the creator of the review may influence readers opinions or decisions Yao et al citeyao2017automated presented their study on machinegenerated fake reviews Contrary to us they investigated characterlevel language models without specifying a specific context before generation We leverage existing NMT tools to encode a specific context to the restaurant before generating reviews Supporting our study Everett et alciteEverett2016Automated found that security researchers were less likely to be fooled by Markov chaingenerated Reddit comments compared to ordinary Internet 
users Diversification of NMT model outputs has been studied in citeli2016diversity The authors proposed the use of a penalty to commonly occurring sentences emphngrams in order to emphasize maximum mutual informationbased generation The authors investigated the use of NMT models in chatbot systems We found that unigram penalties to random tokens Algorithmrefalgaug was easy to implement and produced sufficiently diverse responses section Discussion and Future Work paragraphWhat makes NMTFake reviews difficult to detect First NMT models allow the encoding of a relevant context for each review which narrows down the possible choices of words that the model has to choose from Our NMT model had a perplexity of approximately 25 while the model of citeyao2017automated had a perplexity of approximately 90 footnotePersonal communication with the authors Second the beam search in NMT models narrows down choices to naturallooking sentences Third we observed that the NMT model produced emphbetter structure in the generated sentences ie a more coherent story paragraphCost of generating reviews With our setup generating one review took less than one second The cost of generation stems mainly from the overnight training Assuming an electricity cost of 16 cents kWh California and 8 hours of training training the NMT model requires approximately 130 USD This is a 90 reduction in time compared to the stateoftheart citeyao2017automated Furthermore it is possible to generate both positive and negative reviews with the same model paragraphEase of customization We experimented with inserting specific words into the text by increasing their log likelihoods in the beam search We noticed that the success depended on the prevalence of the word in the training set For example adding a 5 to emphMike in the loglikelihood resulted in approximately 10 prevalence of this word in the reviews An attacker can therefore easily insert specific keywords to reviews which can increase evasion probability paragraphEase of testing Our diversification scheme is applicable during emphgeneration phase and does not affect the training setup of the network in any way Once the NMT model is obtained it is easy to obtain several different variants of NMTFake reviews by varying parameters b and lambda paragraphLanguages The generation methodology is not perse languagedependent The requirement for successful generation is that sufficiently much data exists in the targeted language However our language model modifications require some knowledge of that target languages grammar to produce highquality reviews paragraphGeneralizability of detection techniques Currently fake reviews are not universally detectable Our results highlight that it is difficult to claim detection performance on unseen types of fake reviews Sectionrefsecautomated We see this an open problem that deserves more attention in fake reviews research paragraphGeneralizability to other types of datasets Our technique can be applied to any dataset as long as there is sufficient training data for the NMT model We used approximately 29 million reviews for this work sectionConclusion In this paper we showed that neural machine translation models can be used to generate fake reviews that are very effective in deceiving even experienced techsavvy users This supports anecdotal evidence citenational2017commission Our technique is more effective than stateoftheart citeyao2017automated We conclude that machineaided fake review detection is necessary since human users are ineffective in 
identifying fake reviews We also showed that detectors trained using one type of fake reviews are not effective in identifying other types of fake reviews Robust detection of fake reviews is thus still an open problem sectionAcknowledgments We thank Tommi Grondahl for assistance in planning user studies and the participants of the user study for their time and feedback We also thank Luiza Sayfullina for comments that improved the manuscript We thank the authors of citeyao2017automated for answering questions about their work bibliographystylesplncs beginthebibliography10 bibitemyao2017automated Yao Y Viswanath B Cryan J Zheng H Zhao BY newblock Automated crowdturfing attacks and defenses in online review systems newblock In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security ACM 2017 bibitemmurphy2012machine Murphy K newblock Machine learning a probabilistic approach newblock Massachusetts Institute of Technology 2012 bibitemchallenge2013yelp Yelp newblock Yelp Challenge Dataset 2013 bibitemmukherjee2013yelp Mukherjee A Venkataraman V Liu B Glance N newblock What yelp fake review filter might be doing newblock In Seventh International AAAI Conference on Weblogs and Social Media ICWSM 2013 bibitemrayana2015collective Rayana S Akoglu L newblock Collective opinion spam detection Bridging review networks and metadata newblock In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining bibitemo2008user OConnor P newblock Usergenerated content and travel A case study on Tripadvisorcom newblock Information and communication technologies in tourism 2008 2008 bibitemluca2010reviews Luca M newblock Reviews Reputation and Revenue The Case of Yelp com newblock Harvard Business School 2010 bibitemwang2012serf Wang G Wilson C Zhao X Zhu Y Mohanlal M Zheng H Zhao BY newblock Serf and turf crowdturfing for fun and profit newblock In Proceedings of the 21st international conference on World Wide Web WWW ACM 2012 bibitemrinta2017understanding RintaKahila T Soliman W newblock Understanding crowdturfing The different ethical logics behind the clandestine industry of deception newblock In ECIS 2017 Proceedings of the 25th European Conference on Information Systems 2017 bibitemluca2016fake Luca M Zervas G newblock Fake it till you make it Reputation competition and yelp review fraud newblock Management Science 2016 bibitemnational2017commission National Literacy Trust newblock Commission on fake news and the teaching of critical literacy skills in schools URL urlhttpsliteracytrustorgukpolicyandcampaignsallpartyparliamentarygroupliteracyfakenews bibitemjurafsky2014speech Jurafsky D Martin JH newblock Speech and language processing Volume3 newblock Pearson London 2014 bibitemkingma2014adam Kingma DP Ba J newblock Adam A method for stochastic optimization newblock arXiv preprint arXiv14126980 2014 bibitemcho2014learning Cho K van Merrienboer B Gulcehre C Bahdanau D Bougares F Schwenk H Bengio Y newblock Learning phrase representations using rnn encoderdecoder for statistical machine translation newblock In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing EMNLP 2014 bibitemklein2017opennmt Klein G Kim Y Deng Y Senellart J Rush A newblock Opennmt Opensource toolkit for neural machine translation newblock Proceedings of ACL System Demonstrations 2017 bibitemwu2016google Wu Y Schuster M Chen Z Le QV Norouzi M Macherey W Krikun M Cao Y Gao Q Macherey K etal newblock Googles neural machine translation system 
Bridging the gap between human and machine translation newblock arXiv preprint arXiv160908144 2016 bibitemmei2017coherent Mei H Bansal M Walter MR newblock Coherent dialogue with attentionbased language models newblock In AAAI 2017 32523258 bibitemli2016diversity Li J Galley M Brockett C Gao J Dolan B newblock A diversitypromoting objective function for neural conversation models newblock In Proceedings of NAACLHLT 2016 bibitemrubin2006assessing Rubin VL Liddy ED newblock Assessing credibility of weblogs newblock In AAAI Spring Symposium Computational Approaches to Analyzing Weblogs 2006 bibitemzhao2017news newscomau newblock The potential of AI generated crowdturfing could undermine online reviews and dramatically erode public trust URL urlhttpwwwnewscomautechnologyonlinesecuritythepotentialofaigeneratedcrowdturfingcouldundermineonlinereviewsanddramaticallyerodepublictrustnewsstorye1c84ad909b586f8a08238d5f80b6982 bibitempennebaker2015development Pennebaker JW Boyd RL Jordan K Blackburn K newblock The development and psychometric properties of LIWC2015 newblock Technical report 2015 bibitemhonnibaljohnson2015EMNLP Honnibal M Johnson M newblock An improved nonmonotonic transition system for dependency parsing newblock In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing EMNLP ACM 2015 bibitembird2004nltk Bird S Loper E newblock NLTK the natural language toolkit newblock In Proceedings of the ACL 2004 on Interactive poster and demonstration sessions Association for Computational Linguistics 2004 bibitemkumar2018false Kumar S Shah N newblock False information on web and social media A survey newblock arXiv preprint arXiv180408559 2018 bibitemEverett2016Automated Everett RM Nurse JRC Erola A newblock The anatomy of online deception What makes automated text convincing newblock In Proceedings of the 31st Annual ACM Symposium on Applied Computing SAC 16 ACM 2016 endthebibliography sectionAppendix We present basic demographics of our MTurk study and the comparative study with experienced users in Tablereftableamtpop begintable captionUser study statistics begincenter begintabular l c c hline Quality Mechanical Turk users Experienced users hline Native English Speaker Yes 20 Yes 1 No 19 Fluent in English Yes 20 Yes 20 Age 2140 17 4160 3 2125 8 2630 7 3135 4 4145 1 Gender Male 14 Female 6 Male 17 Female 3 Highest Education High School 10 Bachelor 10 Bachelor 9 Master 6 PhD 5 hline endtabular labeltableamtpop endcenter endtable TablereftableopenNMTpycommands shows a listing of the openNMTpy commands we used to create our NMT model and to generate fake reviews begintablet captionListing of used openNMTpy commands begincenter begintabular l l hline Phase Bash command hline Preprocessing beginlstlistinglanguagebash python preprocesspy trainsrc contexttraintxt traintgt reviewstraintxt validsrc contextvaltxt validtgt reviewsvaltxt savedata model lower tgtwordsminfrequency 10 endlstlisting Training beginlstlistinglanguagebash python trainpy data model savemodel model epochs 8 gpuid 0 learningratedecay 05 optim adam learningrate 0001 startdecayat 3endlstlisting Generation beginlstlistinglanguagebash python translatepy model modelacc3554ppl2568e8pt src contexttsttxt output prede8txt replaceunk verbose maxlength 50 gpu 0 endlstlisting hline endtabular labeltableopenNMTpycommands endcenter endtable TablereftableMTurksub shows the classification performance of Amazon Mechanical Turkers separated across different categories of NMTFake reviews The category with best 
performance b03 lambda5 is denoted as NMTFake begintableb captionMTurk study subclass classification reports Classes are imbalanced in ratio 16 Random predictions are pmathrmhuman 86 and pmathrmmachine 14 with rmathrmhuman rmathrmmachine 50 Classaveraged Fscores for random predictions are 42 begincenter begintabular c c c c c hline b03 lambda 3 Precision Recall Fscore Support hline Human 89 63 73 994 NMTFake 15 45 22 146 hline hline b03 lambda 5 Precision Recall Fscore Support hline Human 86 63 73 994 NMTFake 16 40 23 171 hline hline b05 lambda 4 Precision Recall Fscore Support hline Human 88 63 73 994 NMTFake 21 55 30 181 hline hline b07 lambda 3 Precision Recall Fscore Support hline Human 88 63 73 994 NMTFake 19 50 27 170 hline hline b07 lambda 5 Precision Recall Fscore Support hline Human 89 63 74 994 NMTFake 21 57 31 174 hline hline b09 lambda 4 Precision Recall Fscore Support hline Human 88 63 73 994 NMTFake 18 50 27 164 hline endtabular labeltableMTurksub endcenter endtable Figurereffigscreenshot shows screenshots of the first two pages of our user study with experienced participants beginfigureht centering includegraphicswidth1columnwidthfiguresscreenshot73png caption Screenshots of the first two pages in the user study Example 1 is a NMTFake review the rest are humanwritten labelfigscreenshot endfigure Tablereftablefeaturesadaboost shows the features used to detect NMTFake reviews using the AdaBoost classifier begintable captionFeatures used in NMTFake review detector begincenter begintabular l c hline Feature type Number of features hline hline Readability features 13 hline Unique POS tags 20 hline Word unigrams 22831 hline 1234grams of simple partofspeech tags 54240 hline 123grams of detailed partofspeech tags 112944 hline 123grams of syntactic dependency tags 93195 hline endtabular labeltablefeaturesadaboost endcenter endtable enddocument </s>
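To give a concrete picture of the decoding-time augmentation described in the generation section above (Bernoulli word forgetting, a decaying start penalty, and a memory penalty for reused words, with half penalties for grammar words), the following Python sketch approximates the idea. The class, the default constants, and the exact way the penalties are combined are our assumptions; this is not the openNMT-py integration used to produce the actual NMT-Fake reviews.

```python
import numpy as np


class ReviewPenalties:
    """Decode-time penalties in the spirit of the scheme described above: a per-review
    Bernoulli 'forgetting' mask, a start penalty that decays as generation proceeds,
    and a memory penalty for words already used. Grammar words (pronouns, conjunctions,
    punctuation) receive half penalties to preserve sentence coherence."""

    def __init__(self, vocab_size, grammar_ids, b=0.3, lam=5.0, alpha=0.66, seed=None):
        rng = np.random.default_rng(seed)
        self.forget_mask = rng.random(vocab_size) < b   # fixed for one whole review
        self.grammar = np.zeros(vocab_size, dtype=bool)
        self.grammar[list(grammar_ids)] = True
        self.lam, self.alpha = lam, alpha
        self.used = np.zeros(vocab_size, dtype=bool)

    def __call__(self, log_p, step):
        scale = np.where(self.grammar, 0.5, 1.0)        # half penalty for grammar words
        out = np.asarray(log_p, dtype=float).copy()
        # Bernoulli penalty plus a start penalty that decays geometrically with the step
        # (an approximation of the paper's combination of these two terms).
        out -= self.forget_mask * self.lam * (1.0 + self.alpha ** step) * scale
        # Memory penalty for tokens already generated in this review.
        out -= self.used * self.lam * scale
        return out

    def register(self, token_id):
        self.used[token_id] = True


# Toy usage with a 10-token vocabulary where ids 0 and 1 stand in for grammar words.
pen = ReviewPenalties(vocab_size=10, grammar_ids={0, 1}, seed=0)
adjusted = pen(np.log(np.full(10, 0.1)), step=0)
pen.register(int(np.argmax(adjusted)))
```

In use, the decoder's log-probability vector would be passed through such an object at every step of the beam search, and each emitted token registered so that later steps discount it.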
<s>[INST] Saliency Maps Generation for Automatic Text Summarization [/INST] Ever since the LIME algorithm BIBREF0 explanation techniques focusing on finding the importance of input features in regard of a specific prediction have soared and we now have many ways of finding saliency maps also called heatmaps because of the way we like to visualize them We are interested in this paper by the use of such a technique in an extreme task that highlights questions about the validity and evaluation of the approach We would like to first set the vocabulary we will use We agree that saliency maps are not explanations in themselves and that they are more similar to attribution which is only one part of the human explanation process BIBREF1 We will prefer to call this importance mapping of the input an attribution rather than an explanation We will talk about the importance of the input relevance score in regard to the models computation and not make allusion to any human understanding of the model as a result There exist multiple ways to generate saliency maps over the input for nonlinear classifiers BIBREF2 BIBREF3 BIBREF4 We refer the reader to BIBREF5 for a survey of explainable AI in general We use in this paper LayerWise Relevance Propagation LRP BIBREF2 which aims at redistributing the value of the classifying function on the input to obtain the importance attribution It was first created to explain the classification of neural networks on image recognition tasks It was later successfully applied to text using convolutional neural networks CNN BIBREF6 and then LongShort Term Memory LSTM networks for sentiment analysis BIBREF7 Our goal in this paper is to test the limits of the use of such a technique for more complex tasks where the notion of input importance might not be as simple as in topic classification or sentiment analysis We changed from a classification task to a generative task and chose a more complex one than text translation in which we can easily find a word to word correspondenceimportance between input and output We chose text summarization We consider abstractive and informative text summarization meaning that we write a summary in our own words and retain the important information of the original text We refer the reader to BIBREF8 for more details on the task and the different variants that exist Since the success of deep sequencetosequence models for text translation BIBREF9 the same approaches have been applied to text summarization tasks BIBREF10 BIBREF11 BIBREF12 which use architectures on which we can apply LRP We obtain one saliency map for each word in the generated summaries supposed to represent the use of the input features for each element of the output sequence We observe that all the saliency maps for a text are nearly identical and decorrelated with the attention distribution We propose a way to check their validity by creating what could be seen as a counterfactual experiment from a synthesis of the saliency maps using the same technique as in Arras et al Arras2017 We show that in some but not all cases they help identify the important input features and that we need to rigorously check importance attributions before trusting them regardless of whether or not the mapping makes sense to us We finally argue that in the process of identifying the important input features verifying the saliency maps is as important as the generation step if not more We present in this section the baseline model from See et al See2017 trained on the CNNDaily Mail dataset We 
reproduce the results from See et al See2017 to then apply LRP on it The CNNDaily mail dataset BIBREF12 is a text summarization dataset adapted from the Deepmind questionanswering dataset BIBREF13 It contains around three hundred thousand news articles coupled with summaries of about three sentences These summaries are in fact highlights of the articles provided by the media themselves Articles have an average length of 780 words and the summaries of 50 words We had 287 000 training pairs and 11 500 test pairs Similarly to See et al See2017 we limit during training and prediction the input text to 400 words and generate summaries of 200 words We pad the shorter texts using an UNKNOWN token and truncate the longer texts We embed the texts and summaries using a vocabulary of size 50 000 thus recreating the same parameters as See et al See2017 The baseline model is a deep sequencetosequence encoderdecoder model with attention The encoder is a bidirectional LongShort Term MemoryLSTM cell BIBREF14 and the decoder a single LSTM cell with attention mechanism The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding We train endtoend including the words embeddings The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254 We train the 21 350 992 parameters of the network for about 60 epochs until we achieve results that are qualitatively equivalent to the results of See et al See2017 We obtain summaries that are broadly relevant to the text but do not match the target summaries very well We observe the same problems such as wrong reproduction of factual details replacing rare words with more common alternatives or repeating nonsense after the third sentence We can see in Figure 1 an example of summary obtained compared to the target one The summaries we generate are far from being valid summaries of the information in the texts but are sufficient to look at the attribution that LRP will give us They pick up the general subject of the original text We present in this section the LayerWise Relevance Propagation LRP BIBREF2 technique that we used to attribute importance to the input features together with how we adapted it to our model and how we generated the saliency maps LRP redistributes the output of the model from the output layer to the input by transmitting information backwards through the layers We call this propagated backwards importance the relevance LRP has the particularity to attribute negative and positive relevance a positive relevance is supposed to represent evidence that led to the classifiers result while negative relevance represents evidence that participated negatively in the prediction We initialize the relevance of the output layer to the value of the predicted class before softmax and we then describe locally the propagation backwards of the relevance from layer to layer For normal neural network layers we use the form of LRP with epsilon stabilizer BIBREF2 We write down Rileftarrow jl l1 the relevance received by the neuron i of layer l from the neuron j of layer l1 beginsplit
R_{i\leftarrow j}^{(l,\,l+1)} = \dfrac{w_{i\rightarrow j}^{(l,l+1)}\,\mathbf{z}^{l}_{i} + \dfrac{\epsilon\,\mathrm{sign}\big(\mathbf{z}^{l+1}_{j}\big) + \mathbf{b}^{l+1}_{j}}{D_{l}}}{\mathbf{z}^{l+1}_{j} + \epsilon\,\mathrm{sign}\big(\mathbf{z}^{l+1}_{j}\big)}\; R_{j}^{(l+1)}
endsplit Eq 7 where wirightarrow jll1 is the networks weight parameter set during training textbf bl1j is the bias for neuron j of layer l1 textbf zli is the activation of neuron i on layer l epsilon is the stabilizing term set to 000001 and Dl is the dimension of the l th layer The relevance of a neuron is then computed as the sum of the relevance he received from the above layers For LSTM cells we use the method from Arras et alArras2017 to solve the problem posed by the elementwise multiplications of vectors Arras et al noted that when such computation happened inside an LSTM cell it always involved a gate vector and another vector containing information The gate vector containing only value between 0 and 1 is essentially filtering the second vector to allow the passing of relevant information Considering this when we propagate relevance through an elementwise multiplication operation we give all the upperlayers relevance to the information vector and none to the gate vector We use the same method to transmit relevance through the attention mechanism back to the encoder because Bahdanaus attention BIBREF9 uses elementwise multiplications as well We depict in Figure 2 the transmission endtoend from the output layer to the input through the decoder attention mechanism and then the bidirectional encoder We then sum up the relevance on the word embedding to get the tokens relevance as Arras et al Arras2017 The way we generate saliency maps differs a bit from the usual context in which LRP is used as we essentially dont have one classification but 200 one for each word in the summary We generate a relevance attribution for the 50 first words of the generated summary as after this point they often repeat themselves This means that for each text we obtain 50 different saliency maps each one supposed to represent the relevance of the input for a specific generated word in the summary In this section we present our results from extracting attributions from the sequencetosequence model trained for abstractive text summarization We first have to discuss the difference between the 50 different saliency maps we obtain and then we propose a protocol to validate the mappings The first observation that is made is that for one text the 50 saliency maps are almost identical Indeed each mapping highlights mainly the same input words with only slight variations of importance We can see in Figure 3 an example of two nearly identical attributions for two distant and unrelated words of the summary The saliency map generated using LRP is also uncorrelated with the attention distribution that participated in the generation of the output word The attention distribution changes drastically between the words in the generated summary while not impacting significantly the attribution over the input text We deleted in an experiment the relevance propagated through the attention mechanism to the encoder and didnt observe much changes in the saliency map It can be seen as evidence that using the attention distribution as an explanation of the prediction can be misleading It is not the only information received by the decoder and the importance it allocates to this attention state might be very low What seems to happen in this application is that most of the information used is transmitted from the encoder to the decoder and the attention mechanism at each decoding step just changes marginally how it is used Quantifying the difference between attention distribution and saliency map across multiple tasks is a possible 
future work The second observation we can make is that the saliency map doesnt seem to highlight the right things in the input for the summary it generates The saliency maps on Figure 3 correspond to the summary from Figure 1 and we dont see the word video highlighted in the input text which seems to be important for the output This allows us to question how good the saliency maps are in the sense that we question how well they actually represent the networks use of the input features We will call that truthfulness of the attribution in regard to the computation meaning that an attribution is truthful in regard to the computation if it actually highlights the important input features that the network attended to during prediction We proceed to measure the truthfulness of the attributions by validating them quantitatively We propose to validate the saliency maps in a similar way as Arras et al Arras2017 by incrementally deleting important words from the input text and observe the change in the resulting generated summaries We first define what important and unimportant input words mean across the 50 saliency maps per texts Relevance transmitted by LRP being positive or negative we average the absolute value of the relevance across the saliency maps to obtain one ranking of the most relevant words The idea is that input words with negative relevance have an impact on the resulting generated word even if it is not participating positively while a word with a relevance close to zero should not be important at all We did however also try with different methods like averaging the raw relevance or averaging a scaled absolute value where negative relevance is scaled down by a constant factor The absolute value average seemed to deliver the best results We delete incrementally the important words words with the highest average in the input and compared it to the control experiment that consists of deleting the least important word and compare the degradation of the resulting summaries We obtain mitigated results for some texts we observe a quick degradation when deleting important words which are not observed when deleting unimportant words see Figure 4 but for other test examples we dont observe a significant difference between the two settings see Figure 5 One might argue that the second summary in Figure 5 is better than the first one as it makes better sentences but as the model generates inaccurate summaries we do not wish to make such a statement This however allows us to say that the attribution generated for the text at the origin of the summaries in Figure 4 are truthful in regard to the networks computation and we may use it for further studies of the example whereas for the text at the origin of Figure 5 we shouldnt draw any further conclusions from the attribution generated One interesting point is that one saliency map didnt look better than the other meaning that there is no apparent way of determining their truthfulness in regard of the computation without doing a quantitative validation This brings us to believe that even in simpler tasks the saliency maps might make sense to us for example highlighting the animal in an image classification task without actually representing what the network really attended too or in what way We defined without saying it the counterfactual case in our experiment Would the important words in the input be deleted we would have a different summary Such counterfactuals are however more difficult to define for image classification for example where it 
could be applying a mask over an image or just filtering a colour or a pattern We believe that defining a counterfactual and testing it allows us to measure and evaluate the truthfulness of the attributions and thus weight how much we can trust them In this work we have implemented and applied LRP to a sequencetosequence model trained on a more complex task than usual text summarization We used previous work to solve the difficulties posed by LRP in LSTM cells and adapted the same technique for Bahdanau et al Bahdanau2014 attention mechanism We observed a peculiar behaviour of the saliency maps for the words in the output summary they are almost all identical and seem uncorrelated with the attention distribution We then proceeded to validate our attributions by averaging the absolute value of the relevance across the saliency maps We obtain a ranking of the word from the most important to the least important and proceeded to delete one or another We showed that in some cases the saliency maps are truthful to the networks computation meaning that they do highlight the input features that the network focused on But we also showed that in some cases the saliency maps seem to not capture the important input features This brought us to discuss the fact that these attributions are not sufficient by themselves and that we need to define the counterfactual case and test it to measure how truthful the saliency maps are Future work would look into the saliency maps generated by applying LRP to pointergenerator networks and compare to our current results as well as mathematically justifying the average that we did when validating our saliency maps Some additional work is also needed on the validation of the saliency maps with counterfactual tests The exploitation and evaluation of saliency map are a very important step and should not be overlooked </s>
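To make the propagation rules described above concrete, here is a minimal numpy sketch, not taken from the paper's code, of the epsilon rule of Eq. (7) for a dense layer and of the Arras et al. treatment of element-wise products inside an LSTM cell, where the gate vector receives zero relevance; array shapes and variable names are illustrative assumptions.

import numpy as np

EPS = 0.00001  # stabilizer value used in the paper

def lrp_dense(z_l, W, b, R_upper):
    """Epsilon rule of Eq. (7): redistribute R_upper from layer l+1 onto layer l.

    z_l: (d_l,) activations of layer l
    W:   (d_l, d_upper) weights and b: (d_upper,) biases feeding layer l+1
    R_upper: (d_upper,) relevance already assigned to layer l+1
    """
    z_upper = z_l @ W + b                        # activations of layer l+1
    stab = EPS * np.sign(z_upper)
    # relevance messages R_{i<-j}; the epsilon/bias term is spread over the D_l neurons
    messages = (W * z_l[:, None] + (stab + b) / z_l.shape[0]) / (z_upper + stab)
    return (messages * R_upper).sum(axis=1)      # relevance of each neuron i in layer l

def lrp_gated_product(R_out):
    """Element-wise product gate * information inside an LSTM cell:
    the information vector receives all the relevance, the gate none."""
    return R_out, np.zeros_like(R_out)           # (R_information, R_gate)

# toy check: relevance flows from a 3-unit layer back onto a 4-unit layer
z = np.random.randn(4)
W = np.random.randn(4, 3)
b = np.random.randn(3)
R_l = lrp_dense(z, W, b, R_upper=np.array([1.0, 0.0, 0.5]))
print(R_l)

In the summarization model these two rules would be applied backwards through the decoder, the attention mechanism and the bidirectional encoder, and the relevance reaching each word embedding summed to give the token-level saliency map, as described in the text above.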
<s>[INST] Probabilistic Bias Mitigation in Word Embeddings [/INST] Word embeddings or vector representations of words are an important component of Natural Language Processing NLP models and necessary for many downstream tasks However word embeddings including embeddings commonly deployed for public use have been shown to exhibit unwanted societal stereotypes and biases raising concerns about disparate impact on axes of gender race ethnicity and religion BIBREF0 BIBREF1 The impact of this bias has manifested in a range of downstream tasks ranging from autocomplete suggestions BIBREF2 to advertisement delivery BIBREF3 increasing the likelihood of amplifying harmful biases through the use of these models The most wellestablished method thus far for mitigating bias relies on projecting target words onto a bias subspace such as a gender subspace and subtracting out the difference between the resulting distances BIBREF0 On the other hand the most popular metric for measuring bias is the WEAT statistic BIBREF1 which compares the cosine similarities between groups of words However WEAT has been recently shown to overestimate bias as a result of implicitly relying on similar frequencies for the target words BIBREF4 and BIBREF5 demonstrated that evidence of bias can still be recovered after geometric bias mitigation by examining the neighborhood of a target word among sociallybiased words In response to this we propose an alternative framework for bias mitigation in word embeddings that approaches this problem from a probabilistic perspective The motivation for this approach is twofold First most popular word embedding algorithms are probabilistic at their core ie they are trained explicitly or implicitly BIBREF6 to minimize some form of word cooccurrence probabilities Thus we argue that a framework for measuring and treating bias in these embeddings should take into account in addition to their geometric aspect their probabilistic nature too On the other hand the issue of bias has also been approached albeit in different contexts in the fairness literature where various intuitive notions of equity such as equalized odds have been formalized through probabilistic criteria By considering analogous criteria for the word embedding setting we seek to draw connections between these two bodies of work We present experiments on various bias mitigation benchmarks and show that our framework is comparable to stateoftheart alternatives according to measures of geometric bias mitigation and that it performs far better according to measures of neighborhood bias For fair comparison we focus on mitigating a binary gender bias in pretrained word embeddings using SGNS skipgram with negativesampling though we note that this framework and methods could be extended to other types of bias and word embedding algorithms Geometric bias mitigation uses the cosine distances between words to both measure and remove gender bias BIBREF0 This method implicitly defines bias as a geometric asymmetry between words when projected onto a subspace such as the gender subspace constructed from a set of gender pairs such as mathcal P lbrace heshemanwomankingqueenrbrace The projection of a vector v onto B the subspace is defined by vB sum j1k v cdot bj bj where a subspace B is defined by k orthogonal unit vectors B b1bk The WEAT statistic BIBREF1 demonstrates the presence of biases in word embeddings with an effect size defined as the mean test statistic across the two word sets Where s the test statistic is defined as swAB meana in A 
coswa meanb in B coswa and XYA and B are groups of words for which the association is measured Possible values range from 2 to 2 depending on the association of the words groups and a value of zero indicates X and Y are equally associated with A and B See BIBREF4 for further details on WEAT The RIPA relational inner product association metric was developed as an alternative to WEAT with the critique that WEAT is likely to overestimate the bias of a target attribute BIBREF4 The RIPA metric formalizes the measure of bias used in geometric bias mitigation as the inner product association of a word vector v with respect to a relation vector b The relation vector is constructed from the first principal component of the differences between gender word pairs We report the absolute value of the RIPA metric as the value can be positive or negative according to the direction of the bias A value of zero indicates a lack of bias and the value is bound by ww The neighborhood bias metric proposed by BIBREF5 quantifies bias as the proportion of male sociallybiased words among the k nearest sociallybiased male and female neighboring words whereby biased words are obtained by projecting neutral words onto a gender relation vector As we only examine the target word among the 1000 most sociallybiased words in the vocabulary 500 male and 500 female a words bias is measured as the ratio of its neighborhood of sociallybiased male and sociallybiased female words so that a value of 05 in this metric would indicate a perfectly unbiased word and values closer to 0 and 1 indicate stronger bias Our objective here is to extend and complement the geometric notions of word embedding bias described in the previous section with an alternative probabilistic approach Intuitively we seek a notion of equality akin to that of demographic parity in the fairness literature which requires that a decision or outcome be independent of a protected attribute such as gender BIBREF7 Similarly when considering a probabilistic definition of unbiased in word embeddings we can consider the conditional probabilities of word pairs ensuring for example that pdoctorman approx pdoctorwoman and can extend this probabilistic framework to include the neighborhood of a target word addressing the potential pitfalls of geometric bias mitigation Conveniently most word embedding frameworks allow for immediate computation of the conditional probabilities Pwc Here we focus our attention on the SkipGram method with Negative Sampling SGNS of BIBREF8 although our framework can be equivalently instantiated for most other popular embedding methods owing to their core similarities BIBREF6 BIBREF9 Leveraging this probabilistic nature we construct a bias mitigation method in two steps and examine each step as an independent method as well as the resulting composite method This component of our bias mitigation framework seeks to enforce that the probability of prediction or outcome cannot depend on a protected class such as gender We can formalize this intuitive goal through a loss function that penalizes the discrepancy between the conditional probabilities of a target word ie one that should not be affected by the protected attribute conditioned on two words describing the protected attribute eg man and woman in the case of gender That is for every target word we seek to minimize where mathcal P lbrace heshemanwomankingqueen dots rbrace is a set of word pairs characterizing the protected attribute akin to that used in previous work BIBREF0 At this point the 
specific form of the objective will depend on the type of word embeddings used For our expample of SGNS recall that this algorithm models the conditional probability of a target word given a context word as a function of the inner product of their representations Though an exact method for calculating the conditional probability includes summing over conditional probability of all the words in the vocabulary we can use the estimation of log conditional probability proposed by BIBREF8 ie log pwOwI approx log sigma vprime woT vwI sum i1k log sigma vprime wiT vwI Based on observations by BIBREF5 we extend our method to consider the composition of the neighborhood of sociallygendered words of a target word We note that bias in a word embedding depends not only on the relationship between a target word and explicitly gendered words like man and woman but also between a target word and sociallybiased male or female words Bolukbasi et al BIBREF0 proposed a method for eliminating this kind of indirect bias through geometric bias mitigation but it is shown to be ineffective by the neighborhood metric BIBREF5 Instead we extend our method of bias mitigation to account for this neighborhood effect Specifically we examine the conditional probabilities of a target word given the k2 nearest neighbors from the male sociallybiased words as well as given the k2 female sociallybiased words in sorted order from smallest to largest The groups of sociallybiased words are constructed as described in the neighborhood metric If the word is unbiased according to the neighborhood metric these probabilities should be comparable We then use the following as our loss function where m and f represent the male and female neighbors sorted by distance to the target word t we use L1 distance We evaluate our framework on fastText embeddings trained on Wikipedia 2017 UMBC webbase corpus and statmtorg news dataset 16B tokens BIBREF11 For simplicity only the first 22000 words are used in all embeddings though preliminary results indicate the findings extend to the full corpus For our novel methods of mitigating bias a shallow neural network is used to adjust the embedding The single layer of the model is an embedding layer with weights initialized to those of the original embedding For the composite method these weights are initialized to those of the embedding after probabilistic bias mitigation A batch of word indices is fed into the model which are then embedded and for which a loss value is calculated allowing backpropagation to adjust the embeddings For each of the models a fixed number of iterations is used to prevent overfitting which can eventually hurt performance on the embedding benchmarks See Figure FIGREF12 We evaluated the embedding after 1000 iterations and stopped training if performance on a benchmark decreased significantly We construct a list of candidate words to debias taken from the words used in the WEAT gender bias statistics Words in this list should be gender neutral and are related to the topics of career arts science math family and professions see appendix We note that this list can easily be expanded to include a greater proportion of words in the corpus For example BIBREF4 suggested a method for identifying inappropriately gendered words using unsupervised learning We compare this method of bias mitigation with the no bias mitigation Orig geometric bias mitigation Geo the two pieces of our method alone Prob and KNN and the composite method KNNProb We note that the composite method performs 
reasonably well according the the RIPA metric and much better than traditional geometric bias mitigation according to the neighborhood metric without significant performance loss according to the accepted benchmarks To our knowledge this is the first bias mitigation method to perform reasonably both on both metrics We proposed a simple method of bias mitigation based on this probabilistic notions of fairness and showed that it leads to promising results in various benchmark bias mitigation tasks Future work should include considering a more rigorous definition and nonbinary of bias and experimenting with various embedding algorithms and network architectures The authors would like to thank Tommi Jaakkola for stimulating discussions during the initial stages of this work For Equation 4 as described in the original work in regards to the k sample words wi is drawn from the corpus using the Unigram distribution raised to the 34 power For reference the most male sociallybiased words include words such asjohn jr mlb dick nfl cfl sgt abbot halfback jock mike josephwhile the most female sociallybiased words include words such asfeminine marital tatiana pregnancy eva pageant distress cristina ida beauty sexualityfertility accountant acquaintance actor actress administrator adventurer advocate aide alderman ambassador analyst anthropologist archaeologist archbishop architect artist assassin astronaut astronomer athlete attorney author baker banker barber baron barrister bartender biologist bishop bodyguard boss boxer broadcaster broker businessman butcher butler captain caretaker carpenter cartoonist cellist chancellor chaplain character chef chemist choreographer cinematographer citizen cleric clerk coach collector colonel columnist comedian comic commander commentator commissioner composer conductor confesses congressman constable consultant cop correspondent counselor critic crusader curator dad dancer dean dentist deputy detective diplomat director doctor drummer economist editor educator employee entertainer entrepreneur envoy evangelist farmer filmmaker financier fisherman footballer foreman gangster gardener geologist goalkeeper guitarist headmaster historian hooker illustrator industrialist inspector instructor inventor investigator journalist judge jurist landlord lawyer lecturer legislator librarian lieutenant lyricist maestro magician magistrate maid manager marshal mathematician mechanic midfielder minister missionary monk musician nanny narrator naturalist novelist nun nurse observer officer organist painter pastor performer philanthropist philosopher photographer physician physicist pianist planner playwright poet policeman politician preacher president priest principal prisoner professor programmer promoter proprietor prosecutor protagonist provost psychiatrist psychologist rabbi ranger researcher sailor saint salesman saxophonist scholar scientist screenwriter sculptor secretary senator sergeant servant singer skipper sociologist soldier solicitor soloist sportsman statesman steward student substitute superintendent surgeon surveyor swimmer teacher technician teenager therapist trader treasurer trooper trumpeter tutor tycoon violinist vocalist waiter waitress warden warrior worker wrestler writer Words used for WEAT statistic consisting of baseline bias tests and gender bias tests in the format X vs Y A vs B Flowers vs Insects Pleasant vs Unpleasant X aster clover hyacinth marigold poppy azalea crocus iris orchid rose bluebell daffodil lilac pansy tulip buttercup daisy lily peony 
violet carnation gladiola magnolia petunia zinnia Y ant caterpillar flea locust spider bedbug centipede fly maggot tarantula bee cockroach gnat mosquito termite beetle cricket hornet moth wasp blackfly dragonfly horsefly roach weevil A caress freedom health love peace cheer friend heaven loyal pleasure diamond gentle honest lucky rainbow diploma gift honor miracle sunrise family happy laughter paradise vacation B abuse crash filth murder sickness accident death grief poison stink assault disaster hatred pollute tragedy divorce jail poverty ugly cancer kill rotten vomit agony prison Instruments vs Weapons Pleasant vs Unpleasant X bagpipe cello guitar lute trombone banjo clarinet harmonica mandolin trumpet bassoon drum harp oboe tuba bell fiddle harpsichord piano viola bongo flute horn saxophone violin Y arrow club gun missile spear ax dagger harpoon pistol sword blade dynamite hatchet rifle tank bomb firearm knife shotgun teargas cannon grenade mace slingshot whip A caress freedom health love peace cheer friend heaven loyal pleasure diamond gentle honest lucky rainbow diploma gift honor miracle sunrise family happy laughter paradise vacation B abuse crash filth murder sickness accident death grief poison stink assault disaster hatred pollute tragedy divorce jail poverty ugly cancer kill rotten vomit agony prison Male vs Female Career vs Family X brother father uncle grandfather son he his him man himself men husband boy uncle nephew boyfriend king actor Y sister mother aunt grandmother daughter she hers her woman herself women wife aunt niece girlfriend queen actress A executive management professional corporation salary office business career industry company promotion profession CEO manager coworker entrepreneur B home parents children family cousins marriage wedding relatives grandparents grandchildren nurture child toddler infant teenager Math vs Art Male vs Female X math algebra geometry calculus equations computation numbers addition trigonometry arithmetic logic proofs multiplication mathematics Y poetry art Shakespeare dance literature novel symphony drama orchestra music ballet arts creative sculpture A brother father uncle grandfather son he his him man himself men husband boy uncle nephew boyfriend king actor B sister mother aunt grandmother daughter she hers her woman herself women wife aunt niece girlfriend queen actress Science vs Art Male8 vs Female8 Xscience technology physics chemistry Einstein NASA experiment astronomy biology aeronautics mechanics thermodynamics Y poetry art Shakespeare dance literature novel symphony drama orchestra music ballet arts creative sculpture A brother father uncle grandfather son he his him man himself men husband boy uncle nephew boyfriend B sister mother aunt grandmother daughter she hers her woman herself women wife aunt niece girlfriend </s>
<s>[INST] Massive vs. Curated Word Embeddings for Low-Resourced Languages. The Case of Yor\`ub\'a and Twi [/INST] In recent years word embeddings BIBREF0 BIBREF1 BIBREF2 have been proven to be very useful for training downstream natural language processing NLP tasks Moreover contextualized embeddings BIBREF3 BIBREF4 have been shown to further improve the performance of NLP tasks such as named entity recognition question answering or text classification when used as word features because they are able to resolve ambiguities of word representations when they appear in different contexts Different deep learning architectures such as multilingual BERT BIBREF4 LASER BIBREF5 and XLM BIBREF6 have proved successful in the multilingual setting All these architectures learn the semantic representations from unannotated text making them cheap given the availability of texts in online multilingual resources such as Wikipedia However the evaluation of such resources is usually done for the highresourced languages where one has a smorgasbord of tasks and test sets to evaluate on This is the bestcase scenario languages with tones of data for training that generate highquality models For lowresourced languages the evaluation is more difficult and therefore normally ignored simply because of the lack of resources In these cases training data is scarce and the assumption that the capability of deep learning architectures to learn multilingual representations in the highresourced setting holds in the lowresourced one does not need to be true In this work we focus on two African languages Yorb and Twi and carry out several experiments to verify this claim Just by a simple inspection of the word embeddings trained on Wikipedia by fastText we see a high number of nonYorb or nonTwi words in the vocabularies For Twi the vocabulary has only 935 words and for Yorb we estimate that 135 k out of the 150 k words belong to other languages such as English French and Arabic In order to improve the semantic representations for these languages we collect online texts and study the influence of the quality and quantity of the data in the final models We also examine the most appropriate architecture depending on the characteristics of each language Finally we translate test sets and annotate corpora to evaluate the performance of both our models together with fastText and BERT pretrained embeddings which could not be evaluated otherwise for Yorb and Twi The evaluation is carried out in a word similarity and relatedness task using the wordsim353 test set and in a named entity recognition NER task where embeddings play a crucial role Of course the evaluation of the models in only two tasks is not exhaustive but it is an indication of the quality we can obtain for these two lowresourced languages as compared to others such as English where these evaluations are already available The rest of the paper is organized as follows Related works are reviewed in Section SECREF2 The two languages under study are described in Section SECREF3 We introduce the corpora and test sets in Section SECREF4 The fifth section explores the different training architectures we consider and the experiments that are carried out Finally discussion and concluding remarks are given in Section SECREF6 The large amount of freely available text in the internet for multiple languages is facilitating the massive and automatic creation of multilingual resources The resource par excellence is Wikipedia an online encyclopedia currently available in 307 languages 
Other initiatives such as Common Crawl or the Jehovahs Witnesses site are also repositories for multilingual data usually assumed to be noisier than Wikipedia Word and contextual embeddings have been pretrained on these data so that the resources are nowadays at hand for more than 100 languages Some examples include fastText word embeddings BIBREF2 BIBREF7 MUSE embeddings BIBREF8 BERT multilingual embeddings BIBREF4 and LASER sentence embeddings BIBREF5 In all cases embeddings are trained either simultaneously for multiple languages joining high and lowresource data or following the same methodology On the other hand different approaches try to specifically design architectures to learn embeddings in a lowresourced setting ChaudharyEtAl2018 follow a transfer learning approach that uses phonemes lemmas and morphological tags to transfer the knowledge from related highresource language into the lowresource one jiangEtal2018 apply PositiveUnlabeled Learning for word embedding calculations assuming that unobserved pairs of words in a corpus also convey information and this is specially important for small corpora In order to assess the quality of word embeddings word similarity and relatedness tasks are usually used wordsim353 BIBREF9 is a collection of 353 pairs annotated with semantic similarity scores in a scale from 0 to 10 Even the problems detected in this dataset BIBREF10 it is widely used by the community The test set was originally created for English but the need for comparison with other languages has motivated several translationsadaptations In hassanMihalcea2009 the test was translated manually into Spanish Romanian and Arabic and the scores were adapted to reflect similarities in the new language The reported correlation between the English scores and the Spanish ones is 086 Later JoubarneInkpen2011 show indications that the measures of similarity highly correlate across languages leviantReichart2015 translated also wordsim353 into German Italian and Russian and used crowdsourcing to score the pairs Finally jiangEtal2018 translated with Google Cloud the test set from English into Czech Danish and Dutch In our work native speakers translate wordsim353 into Yorb and Twi and similarity scores are kept unless the discrepancy with English is big see Section SECREF11 for details A similar approach to our work is done for Gujarati in JoshiEtAl2019 is a language in the West Africa with over 50 million speakers It is spoken among other languages in Nigeria republic of Togo Benin Republic Ghana and Sierra Leon It is also a language of rs in Cuba Brazil and some Caribbean countries It is one of the three major languages in Nigeria and it is regarded as the third most spoken native African language There are different dialects of Yorb in Nigeria BIBREF11 BIBREF12 BIBREF13 However in this paper our focus is the standard Yorb based upon a report from the 1974 Joint Consultative Committee on Education BIBREF14 Standard Yorb has 25 letters without the Latin characters c q v x and z There are 18 consonants b d f g gb jdz k l m n pkp r s t w yj 7 oral vowels a e i o u five nasal vowels an underaccentdoten in underaccentdoton un and syllabic nasals m Yorb is a tone language which makes heavy use of lexical tones which are indicated by the use of diacritics There are three tones in Yorb namely low mid and high which are represented as grave setminus macron and acute symbols respectively These tones are applied on vowels and syllabic nasals Mid tone is usually left unmarked on vowels and every 
initial or first vowel in a word cannot have a high tone It is important to note that tone information is needed for correct pronunciation and to have the meaning of a word BIBREF15 BIBREF12 BIBREF14 For example ow money w broom w business w honour w hand and w group are different words with different dots and diacritic combinations According to Asahiah2014 Standard Yorb uses 4 diacritics 3 are for marking tones while the fourth which is the dot below is used to indicate the open phonetic variants of letter e and o and the long variant of s Also there are 19 single diacritic letters 3 are marked with dots below while the rest are either having the grave or acute accent The four double diacritics are divided between the grave and the acute accent as well As noted in Asahiah2014 most of the Yorb texts found in websites or public domain repositories i either use the correct Yorb orthography or ii replace diacritized characters with undiacritized ones This happens as a result of many factors but most especially to the unavailability of appropriate input devices for the accurate application of the diacritical marks BIBREF11 This has led to research on restoration models for diacritics BIBREF16 but the problem is not well solved and we find that most Yorb text in the public domain today is not well diacritized Wikipedia is not an exception is an Akan language of the Central Tano Branch of the Niger Congo family of languages It is the most widely spoken of the about 80 indigenous languages in Ghana BIBREF17 It has about 9 million native speakers and about a total of 1718 million Ghanaians have it as either first or second language There are two mutually intelligible dialects Asante and Akuapem and subdialectical variants which are mostly unknown to and unnoticed by nonnative speakers It is also mutually intelligible with Fante and to a large extent Bono another of the Akan languages It is one of if not the easiest to learn to speak of the indigenous Ghanaian languages The same is however not true when it comes to reading and especially writing This is due to a number of easily overlooked complexities in the structure of the language First of all similarly to Yorb Twi is a tonal language but written without diacritics or accents As a result words which are pronounced differently and unambiguous in speech tend to be ambiguous in writing Besides most of such words fit interchangeably in the same context and some of them can have more than two meanings A simple example is Me papa aba nti na me ne wo redi no yie no S wo ara wo nim s me papa ba a me suban fofor adi This sentence could be translated as i Im only treating you nicely because Im in a good mood You already know Im a completely different person when Im in a good mood ii Im only treating you nicely because my dad is around You already know Im a completely different person when my dad comes around Another characteristic of Twi is the fact that a good number of stop words have the same written form as content words For instance na or na could be the words and then the phrase and then or the word mother This kind of ambiguity has consequences in several natural language applications where stop words are removed from text Finally we want to point out that words can also be written with or without prefixes An example is this same na and na which happen to be the same word with an omissible prefix across its multiple senses For some words the prefix characters are mostly used when the word begins a sentence and omitted in the middle This however 
depends on the authorspeaker For the word embeddings calculation this implies that one would have different embeddings for the same word found in different contexts We collect clean and noisy corpora for Yorb and Twi in order to quantify the effect of noise on the quality of the embeddings where noisy has a different meaning depending on the language as it will be explained in the next subsections For Yorb we use several corpora collected by the NigerVolta Language Technologies Institute with texts from different sources including the LagosNWU conversational speech corpus fullydiacritized Yorb language websites and an online Bible The largest source with clean data is the JW300 corpus We also created our own smallsized corpus by webcrawling three Yorb language websites Alkw r Yorb and d Yorb Rw in Table TABREF7 some Yoruba Tweets with full diacritics and also news corpora BBC Yorb and VON Yorb with poor diacritics which we use to introduce noise By noisy corpus we refer to texts with incorrect diacritics eg in BBC Yorb removal of tonal symbols eg in VON Yorb and removal of all diacriticsunderdots eg some articles in Yorb Wikipedia Furthermore we got two manually typed fullydiacritized Yorb literature rnkrind nn igb elgbje and Igb Oldmar both written by Daniel Orowole Olorunfemi Fagunwa a popular Yorb author The number of tokens available from each source the link to the original source and the quality of the data is summarised in Table TABREF7 The gathering of clean data in Twi is more difficult We use as the base text as it has been shown that the Bible is the most available resource for low and endangered languages BIBREF18 This is the cleanest of all the text we could obtain In addition we use the available and small Wikipedia dumps which are quite noisy ie Wikipedia contains a good number of English words spelling errors and Twi sentences formulated in a nonnatural way formulated as L2 speakers would speak Twi as compared to native speakers Lastly we added text crawled from jw and the JW300 Twi corpus Notice that the Bible text is mainly written in the Asante dialect whilst the last Jehovahs Witnesses was written mainly in the Akuapem dialect The Wikipedia text is a mixture of the two dialects This introduces a lot of noise into the embeddings as the spelling of most words differs especially at the end of the words due to the mixture of dialects The JW300 Twi corpus also contains mixed dialects but is mainly Akuampem In this case the noise comes also from spelling errors and the uncommon addition of diacritics which are not standardised on certain vowels Figures for Twi corpora are summarised in the bottom block of Table TABREF7 One of the contribution of this work is the introduction of the wordsim353 word pairs dataset for Yorb All the 353 word pairs were translated from English to Yorb by 3 native speakers The set is composed of 446 unique English words 348 of which can be expressed as oneword translation in Yorb eg book translates to w In 61 cases most countries and locations but also other content words translations are transliterations eg Doctor is dkt and cucumber kkmb 98 words were translated by short phrases instead of single words This mostly affects words from science and technology eg keyboard translates to ptk tw literally meaning typing board laboratory translates to yr wd research room and ecology translates to m npa yk while psychology translates to m npa d Finally 6 terms have the same form in English and Yorb therefore they are retained like that in the dataset eg 
Jazz Rock and acronyms such as FBI or OPEC We also annotate the Global Voices Yorb corpus to test the performance of our trained Yorb BERT embeddings on the named entity recognition task The corpus consists of 25 k tokens which we annotate with four named entity types DATE location LOC organization ORG and personal names PER Any other token that does not belong to the four named entities is tagged with O The dataset is further split into training 70 development 10 and test 20 partitions Table TABREF12 shows the number of named entities per type and partition Just like Yorb the wordsim353 word pairs dataset was translated for Twi Out of the 353 word pairs 274 were used in this case The remaining 79 pairs contain words that translate into longer phrases The number of words that can be translated by a single token is higher than for Yorb Within the 274 pairs there are 351 unique English words which translated to 310 unique Twi words 298 of the 310 Twi words are single word translations 4 transliterations and 16 are used as is Even if JoubarneInkpen2011 showed indications that semantic similarity has a high correlation across languages different nuances between words are captured differently by languages For instance both money and currency in English translate into sika in Twi and other 32 English words which translate to 14 Twi words belong to this category and drink in English is translated as Nsa or nom depending on the part of speech noun for the former verb for the latter 17 English words fall into this category In translating these we picked the translation that best suits the context other word in the pair In two cases the correlation is not fulfilled at all soapopera and starmovies are not related in the Twi language and the score has been modified accordingly In this section we describe the architectures used for learning word embeddings for the Twi and Yorb languages Also we discuss the quality of the embeddings as measured by the correlation with human judgements on the translated wordSim353 test sets and by the F1 score in a NER task Modeling subword units has recently become a popular way to address outofvocabulary word problem in NLP especially in word representation learning BIBREF19 BIBREF2 BIBREF4 A subword unit can be a character character ngrams or heuristically learned Byte Pair Encodings BPE which work very well in practice especially for morphologically rich languages Here we consider two word embedding models that make use of characterlevel information together with word information Character Word Embedding CWE BIBREF20 and fastText BIBREF2 Both of them are extensions of the Word2Vec architectures BIBREF0 that model subword units character embeddings in the case of CWE and character ngrams for fastText CWE was introduced in 2015 to model the embeddings of characters jointly with words in order to address the issues of character ambiguities and noncompositional words especially in the Chinese language A word or character embedding is learned in CWE using either CBOW or skipgram architectures and then the final word embedding is computed by adding the character embeddings to the word itself where wj is the word embedding of xj Nj is the number of characters in xj and ck is the embedding of the kth character ck in xj Similarly in 2017 fastText was introduced as an extension to skipgram in order to take into account morphology and improve the representation of rare words In this case the embedding of a word also includes the embeddings of its character ngrams where wj is 
the word embedding of xj Gj is the number of character ngrams in xj and gk is the embedding of the kth ngram cwe also proposed three alternatives to learn multiple embeddings per character and resolve ambiguities i positionbased character embeddings where each character has different embeddings depending on the position it appears in a word ie beginning middle or end ii clusterbased character embeddings where a character can have K different cluster embeddings and iii positionbased cluster embeddings CWELP where for each position K different embeddings are learned We use the latter in our experiments with CWE but no positional embeddings are used with fastText Finally we consider a contextualized embedding architecture BERT BIBREF4 BERT is a masked language model based on the highly efficient and parallelizable Transformer architecture BIBREF21 known to produce very rich contextualized representations for downstream NLP tasks The architecture is trained by jointly conditioning on both left and right contexts in all the transformer layers using two unsupervised objectives Masked LM and Nextsentence prediction The representation of a word is therefore learned according to the context it is found in Training contextual embeddings needs of huge amounts of corpora which are not available for lowresourced languages such as Yorb and Twi However Google provided pretrained multilingual embeddings for 102 languages including Yorb but not Twi As a first experiment we compare the quality of fastText embeddings trained on highquality curated data and lowquality massively extracted data for Twi and Yorb languages Facebook released pretrained word embeddings using fastText for 294 languages trained on Wikipedia BIBREF2 F1 in tables and for 157 languages trained on Wikipedia and Common Crawl BIBREF7 F2 For Yorb both versions are available but only embeddings trained on Wikipedia are available for Twi We consider these embeddings the result of training on what we call massivelyextracted corpora Notice that training settings for both embeddings are not exactly the same and differences in performance might come both from corpus sizequality but also from the background model The 294languages version is trained using skipgram in dimension 300 with character ngrams of length 5 a window of size 5 and 5 negatives The 157languages version is trained using CBOW with positionweights in dimension 300 with character ngrams of length 5 a window of size 5 and 10 negatives We want to compare the performance of these embeddings with the equivalent models that can be obtained by training on the different sources verified by native speakers of Twi and Yorb what we call curated corpora and has been described in Section SECREF4 For the comparison we define 3 datasets according to the quality and quantity of textual data used for training i Curated Small Dataset clean C1 about 16 million tokens for Yorb and over 735 k tokens for Twi The clean text for Twi is the Bible and for Yoruba all texts marked under the C1 column in Table TABREF7 ii In Curated Small Dataset clean noisy C2 we add noise to the clean corpus Wikipedia articles for Twi and BBC Yorb news articles for Yorb This increases the number of training tokens for Twi to 742 k tokens and Yorb to about 2 million tokens iii Curated Large Dataset C3 consists of all available texts we are able to crawl and source out for either clean or noisy The addition of JW300 BIBREF22 texts increases the vocabulary to more than 10 k tokens in both languages We train our fastText systems 
using a skipgram model with an embedding size of 300 dimensions context window size of 5 10 negatives and ngrams ranging from 3 to 6 characters similarly to the pretrained models for both languages Best results are obtained with minimum word count of 3 Table TABREF15 shows the Spearman correlation between human judgements and cosine similarity scores on the wordSim353 test set Notice that pretrained embeddings on Wikipedia show a very low correlation with humans on the similarity task for both languages rho 014 and their performance is even lower when Common Crawl is also considered rho 007 for Yorb An important reason for the low performance is the limited vocabulary The pretrained Twi model has only 935 tokens For Yorb things are apparently better with more than 150 k tokens when both Wikipedia and Common Crawl are used but correlation is even lower An inspection of the pretrained embeddings indicates that over 135 k words belong to other languages mostly English French and Arabic If we focus only on Wikipedia we see that many texts are without diacritics in Yorb and often make use of mixed dialects and English sentences in Twi The Spearman rho correlation for fastText models on the curated small dataset clean C1 improves the baselines by a large margin rho 0354 for Twi and 0322 for Yorb even with a small dataset The improvement could be justified just by the larger vocabulary in Twi but in the case of Yorb the enhancement is there with almost half of the vocabulary size We found out that adding some noisy texts C2 dataset slightly improves the correlation for Twi language but not for the Yorb language The Twi language benefits from Wikipedia articles because its inclusion doubles the vocabulary and reduces the bias of the model towards religious texts However for Yorb noisy texts often ignore diacritics or tonal marks which increases the vocabulary size at the cost of an increment in the ambiguity too As a result the correlation is slightly hurt One would expect that training with more data would improve the quality of the embeddings but we found out with the results obtained with the C3 dataset that only highquality data helps The addition of JW300 boosts the vocabulary in both cases but whereas for Twi the corpus mixes dialects and is noisy for Yorb it is very clean and with full diacritics Consequently the best embeddings for Yorb are obtained when training with the C3 dataset whereas for Twi C2 is the best option In both cases the curated embeddings improve the correlation with human judgements on the similarity task a Delta rho 025 or equivalently by an increment on rho of 170 Twi and 180 Yorb The huge ambiguity in the written Twi language motivates the exploration of different approaches to word embedding estimations In this work we compare the standard fastText methodology to include subword information with the characterenhanced approach with positionbased clustered embeddings CWELP as introduced in Section SECREF17 With the latter we expect to specifically address the ambiguity present in a language that does not translate the different oral tones on vowels into the written language The characterenhanced word embeddings are trained using a skipgram architecture with clusterbased embeddings and an embedding size of 300 dimensions context windowsize of 5 and 5 negative samples In this case the best performance is obtained with a minimum word count of 1 and that increases the effective vocabulary that is used for training the embeddings with respect to the fastText experiments 
reported in Table TABREF15 We repeat the same experiments as with fastText and summarise them in Table TABREF16 If we compare the relative numbers for the three datasets C1 C2 and C3 we observe the same trends as before the performance of the embeddings in the similarity task improves with the vocabulary size when the training data can be considered clean but the performance diminishes when the data is noisy According to the results CWE is specially beneficial for Twi but not always for Yorb Clean Yorb text does not have the ambiguity issues at characterlevel therefore the ngram approximation works better when enough clean data is used rho C3CWE0354 vs rho C3fastText0391 but it does not when too much noisy data no diacritics therefore characterlevel information would be needed is used rho C2CWE0345 vs rho C2fastText0302 For Twi the characterlevel information reinforces the benefits of clean data and the best correlation with human judgements is reached with CWE embeddings rho C2CWE0437 vs rho C2fastText0388 In order to go beyond the similarity task using static word vectors we also investigate the quality of the multilingual BERT embeddings by finetuning a named entity recognition task on the Yorb Global Voices corpus One of the major advantages of pretrained BERT embeddings is that finetuning of the model on downstream NLP tasks is typically computationally inexpensive often with few number of epochs However the data the embeddings are trained on has the same limitations as that used in massive word embeddings Finetuning involves replacing the last layer of BERT used optimizing the masked LM with a taskdependent linear classifier or any other deep learning architecture and training all the model parameters endtoend For the NER task we obtain the tokenlevel representation from BERT and train a linear classifier for sequence tagging Similar to our observations with noncontextualized embeddings we find out that finetuning the pretrained multilingualuncased BERT for 4 epochs on the NER task gives an F1 score of 0 If we do the same experiment in English F1 is 581 after 4 epochs That shows how pretrained embeddings by themselves do not perform well in downstream tasks on lowresource languages To address this problem for Yorb we finetune BERT representations on the Yorb corpus in two ways i using the multilingual vocabulary and ii using only Yorb vocabulary In both cases diacritics are ignored to be consistent with the base model training As expected the finetuning of the pretrained BERT on the Yorb corpus in the two configurations generates better representations than the base model These models are able to achieve a better performance on the NER task with an average F1 score of over 47 see Table TABREF26 for the comparative The finetuned BERT model with only Yorb vocabulary further increases by more than 4 in F1 score obtained with the tuning that uses the multilingual vocabulary Although we do not have enough data to train BERT from scratch we observe that finetuning BERT on a limited amount of monolingual data of a lowresource language helps to improve the quality of the embeddings The same observation holds true for highresource languages like German and French BIBREF23 In this paper we present curated word and contextual embeddings for Yorb and Twi For this purpose we gather and select corpora and study the most appropriate techniques for the languages We also create test sets for the evaluation of the word embeddings within a word similarity task wordsim353 and the contextual embeddings 
within a NER task Corpora embeddings and test sets are available in github In our analysis we show how massively generated embeddings perform poorly for lowresourced languages as compared to the performance for highresourced ones This is due both to the quantity but also the quality of the data used While the Pearson rho correlation for English obtained with fastText embeddings trained on Wikipedia WP and Common Crawl CC are rho WP067 and rho WPCC078 the equivalent ones for Yorb are rho WP014 and rho WPCC007 For Twi only embeddings with Wikipedia are available rho WP014 By carefully gathering highquality data and optimising the models to the characteristics of each language we deliver embeddings with correlations of rho 039 Yorb and rho 044 Twi on the same test set still far from the highresourced models but representing an improvement over 170 on the task In a lowresourced setting the data quality processing and model selection is more critical than in a highresourced scenario We show how the characteristics of a language such as diacritization in our case should be taken into account in order to choose the relevant data and model to use As an example Twi word embeddings are significantly better when training on 742 k selected tokens than on 16 million noisy tokens and when using a model that takes into account single character information CWELP instead of ngram information fastText Finally we want to note that even within a corpus the quality of the data might depend on the language Wikipedia is usually used as a highquality freely available multilingual corpus as compared to noisier data such as Common Crawl However for the two languages under study Wikipedia resulted to have too much noise interference from other languages text clearly written by nonnative speakers lack of diacritics and mixture of dialects The JW300 corpus on the other hand has been rated as highquality by our native Yorb speakers but as noisy by our native Twi speakers In both cases experiments confirm the conclusions The authors thank Dr Clement Odoje of the Department of Linguistics and African Languages University of Ibadan Nigeria and Oly Gbmisy rd for helping us with the Yorb translation of the WordSim353 word pairs and Dr Felix Y AduGyamfi and Ps Isaac Sarfo for helping with the Twi translation We also thank the members of the NigerVolta Language Technologies Institute for providing us with clean Yorb corpus The project on which this paper is based was partially funded by the German Federal Ministry of Education and Research under the funding code 01IW17001 Deeplee Responsibility for the content of this publication is with the authors </s>
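To make the training setup reported above concrete, the following is a minimal sketch of how the curated fastText embeddings and their wordSim-353 evaluation could be reproduced; the corpus and test-set file names are placeholders, and using gensim rather than the original fastText binary is an assumption of this sketch, not something stated in the paper.

```python
# Sketch: training curated fastText embeddings and scoring them on the
# translated wordSim-353 pairs, following the hyper-parameters reported above
# (skip-gram, 300 dimensions, window 5, 10 negatives, character n-grams 3-6,
# minimum word count 3). File names are placeholders; gensim stands in for
# the original fastText binary, which is an assumption of this sketch.
from gensim.models import FastText
from gensim.models.word2vec import LineSentence
from scipy.stats import spearmanr

corpus = LineSentence("curated_corpus_C3.txt")      # one sentence per line (hypothetical path)
model = FastText(
    corpus,
    sg=1,               # skip-gram
    vector_size=300,    # 300-dimensional vectors
    window=5,           # context window of 5
    negative=10,        # 10 negative samples
    min_n=3, max_n=6,   # character n-grams from 3 to 6
    min_count=3,        # minimum word count of 3
)

human, predicted = [], []
with open("wordsim353_yoruba.tsv", encoding="utf-8") as f:   # hypothetical test-set path
    for line in f:
        w1, w2, score = line.strip().split("\t")
        human.append(float(score))
        # fastText backs off to character n-grams for words unseen in training
        predicted.append(model.wv.similarity(w1, w2))

rho, _ = spearmanr(human, predicted)
print(f"Spearman rho = {rho:.3f}")
```

Note that pairs containing a word with no overlapping character n-grams would fail the similarity lookup and need to be skipped or given a default score, which is one practical reason the reported correlations depend so strongly on vocabulary coverage.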
<s>[INST] Is there Gender bias and stereotype in Portuguese Word Embeddings? [/INST] Recently the transformative potential of machine learning ML has propelled ML into the forefront of mainstream media In Brazil the use of such technique has been widely diffused gaining more space Thus it is used to search for patterns regularities or even concepts expressed in data sets BIBREF0 and can be applied as a form of aid in several areas of everyday life Among the different definitions ML can be seen as the ability to improve performance in accomplishing a task through the experience BIBREF1 Thus BIBREF2 presents this as a method of inferences of functions or hypotheses capable of solving a problem algorithmically from data representing instances of the problem This is an important way to solve different types of problems that permeate computer science and other areas One of the main uses of ML is in text processing where the analysis of the content the entry point for various learning algorithms However the use of this content can represent the insertion of different types of bias in training and may vary with the context worked This work aims to analyze and remove gender stereotypes from word embedding in Portuguese analogous to what was done in BIBREF3 for the English language Hence we propose to employ a public word2vec model pretrained to analyze gender bias in the Portuguese language quantifying biases present in the model so that it is possible to reduce the spreading of sexism of such models There is also a stage of bias reducing over the results obtained in the model where it is sought to analyze the effects of the application of gender distinction reduction techniques This paper is organized as follows Section SECREF2 discusses related works Section SECREF3 presents the Portuguese word2vec embeddings model used in this paper and Section SECREF4 proposes our method Section SECREF5 presents experimental results whose purpose is to verify results of a debias algorithm application in Portuguese embeddings word2vec model and a short discussion about it Section SECREF6 brings our concluding remarks There is a wide range of techniques that provide interesting results in the context of ML algorithms geared to the classification of data without discrimination these techniques range from the preprocessing of data BIBREF4 to the use of bias removal techniques BIBREF5 in fact Approaches linked to the data preprocessing step usually consist of methods based on improving the quality of the dataset after which the usual classification tools can be used to train a classifier So it starts from a baseline already stipulated by the execution of itself On the other side of the spectrum there are Unsupervised and semisupervised learning techniques that are attractive because they do not imply the cost of corpus annotation BIBREF6 BIBREF7 BIBREF8 BIBREF9 The bias reduction is studied as a way to reduce discrimination through classification through different approaches BIBREF10 BIBREF11 In BIBREF12 the authors propose to specify implement and evaluate the fairnessaware ML interface called themisml In this interface the main idea is to pick up a data set from a modified dataset Themisml implements two methods for training fairnessaware models The tool relies on two methods to make agnostic model type predictions Reject Option Classification and DiscriminationAware Ensemble Classification these procedures being used to postprocess predictions in a way that reduces potentially discriminatory predictions 
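As a rough illustration of the first of these post-processing strategies, reject option classification can be sketched as follows; this is a generic sketch of the underlying idea only, not the themis-ml interface itself, and the band width used is an illustrative value.

```python
# Generic sketch of reject option classification (post-processing), one of the
# two strategies mentioned above. This is NOT the themis-ml API: it only shows
# the underlying idea that low-confidence predictions near the decision
# boundary are resolved in favour of the disadvantaged group. The band width
# theta is an illustrative value.
import numpy as np

def reject_option_predict(proba_pos, protected, theta=0.1):
    """proba_pos: P(y = 1) from any trained classifier.
    protected:  1 for the disadvantaged group, 0 otherwise.
    theta:      half-width of the low-confidence band around 0.5."""
    proba_pos = np.asarray(proba_pos, dtype=float)
    protected = np.asarray(protected, dtype=int)
    y_hat = (proba_pos >= 0.5).astype(int)            # default decision
    uncertain = np.abs(proba_pos - 0.5) <= theta      # critical region
    y_hat[uncertain & (protected == 1)] = 1           # favourable outcome
    y_hat[uncertain & (protected == 0)] = 0           # unfavourable outcome
    return y_hat
```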
According to the authors it is possible to perceive the potential use of the method as a means of reducing bias in the use of ML algorithms In BIBREF3 the authors propose a method to hardly reduce bias in English word embeddings collected from Google News Using word2vec they performed a geometric analysis of gender direction of the bias contained in the data Using this property with the generation of genderneutral analogies a methodology was provided for modifying an embedding to remove gender stereotypes Some metrics were defined to quantify both direct and indirect gender biases in embeddings and to develop algorithms to reduce bias in some embedding Hence the authors show that embeddings can be used in applications without amplifying gender bias In BIBREF13 the quality of the representation of words through vectors in several models is discussed According to the authors the ability to train highquality models using simplified architectures is useful in models composed of predictive methods that try to predict neighboring words with one or more context words such as Word2Vec Word embeddings have been used to provide meaningful representations for words in an efficient way In BIBREF14 several word embedding models trained in a large Portuguese corpus are evaluated Within the Word2Vec model two training strategies were used In the first namely SkipGram the model is given the word and attempts to predict its neighboring words The second Continuous BagofWords CBOW the model is given the sequence of words without the middle one and attempts to predict this omitted word The latter was chosen for application in the present proposal The authors of BIBREF14 claim to have collected a large corpus from several sources to obtain a multigenre corpus representative of the Portuguese language Hence it comprehensively covers different expressions of the language making it possible to analyze gender bias and stereotype in Portuguese word embeddings The dataset used was tokenized and normalized by the authors to reduce the corpus vocabulary size under the premise that vocabulary reduction provides more representative vectors Some linguists point out that the female gender is in Portuguese a particularization of the masculine In this way the only gender mark is the feminine the others being considered without gender including names considered masculine In BIBREF15 the gender representation in Portuguese is associated with a set of phenomena not only from a linguistic perspective but also from a sociocultural perspective Since most of the termination of words eg advogada and advogado are used to indicate to whom the expression refers stereotypes can be explained through communication This implies the presence of biases when dealing with terms such as those referring to professions Figure FIGREF1 illustrates the approach proposed in this work First using a list of professions relating the identification of female and male who perform it as a parameter we evaluate the accuracy of similarity generated by the embeddings Then getting the biased results we apply the Debias algorithm BIBREF3 aiming to reduce sexist analogies previous generated Thus all the results are analyzed by comparing the accuracies Using the word2vec model available in a public repository BIBREF14 the proposal involves the analysis of the most similar analogies generated before and after the application of the BIBREF3 The work is focused on the analysis of gender bias associated with professions in word embeddings So therefore into the 
evaluation of the accuracy of the associations generated aiming at achieving results as good as possible without prejudicing the evaluation metrics Algorithm SECREF4 describes the method performed during the evaluation of the gender bias presence In this method we try to evaluate the accuracy of the analogies generated through the model that is to verify the cases of association matching generated between the words htb Model Evaluation 1 w2vevaluate INLINEFORM0 openmodel INLINEFORM1 count 0 INLINEFORM2 in INLINEFORM3 read list of tuples x modelmostsimilarpositiveela male negativeele x female count 1 accuracy countsizeprofessionpairs return accuracy The purpose of this section is to perform different analysis concerning bias in word2vec models with Portuguese embeddings The Continuous BagofWords model used was provided by BIBREF14 described in Section SECREF3 For these experiments we use a model containing 934966 words of dimension 300 per vector representation To realize the experiments a list containing fifty professions labels for female and male was used as the parameter of similarity comparison Using the python library gensim we evaluate the extreme analogies generated when comparing vectors like INLINEFORM0 where INLINEFORM1 represents the item from professions list and INLINEFORM2 the expected association The most similarity function finds the topN most similar entities computing cosine similarity between a simple mean of the projection weight vectors of the given docs Figure FIGREF4 presents the most extreme analogies results obtained from the model using these comparisons Applying the Algorithm SECREF4 we check the accuracy obtained with the similarity function before and after the application of the debias method Table TABREF3 presents the corresponding results In cases like the analogy of garonete to stripper Figure FIGREF4 line 8 it is possible to observe that the relationship stipulated between terms with sexual connotation and females is closer than between females and professions While in the male model even in cases of noncompliance the closest analogy remains in the professional environment Using a confidence factor of 99 when comparing the correctness levels of the model with and without the reduction of bias the prediction of the model with bias is significantly better Different authors BIBREF16 BIBREF17 show that the removal of bias in models produces a negative impact on the quality of the model On the other hand it is observed that even with a better hit rate the correctness rate in the prediction of related terms is still low This paper presents an analysis of the presence of gender bias in Portuguese word embeddings Even though it is a work in progress the proposal showed promising results in analyzing predicting models A possible extension of the work involves deepening the analysis of the results obtained seeking to achieve higher accuracy rates and fairer models to be used in machine learning techniques Thus these studies can involve tests with different methods of preprocessing the data to the use of different models as well as other factors that may influence the results generated This deepening is necessary since the models accuracy is not high To conclude we believe that the presence of gender bias and stereotypes in the Portuguese language is found in different spheres of language and it is important to study ways of mitigating different types of discrimination As such it can be easily applied to analyze racists bias into the language such as different types 
of preconceptions </s>
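For reference, the model-evaluation procedure sketched in Algorithm SECREF4 above can be written out as runnable code roughly as follows; the embedding file name and the example profession pairs are placeholders, and the word2vec text format assumed for loading is an assumption about the published CBOW model rather than a documented fact.

```python
# Runnable reconstruction (under stated assumptions) of the evaluation routine
# described in Algorithm SECREF4 above: for every (male, female) profession
# pair the analogy "ele" -> male form is projected onto "ela", and a hit is
# counted when the nearest neighbour is the expected female form.
from gensim.models import KeyedVectors

def w2v_evaluate(model_path, profession_pairs):
    kv = KeyedVectors.load_word2vec_format(model_path)   # assumed word2vec text format
    correct = 0
    for male, female in profession_pairs:
        top_word, _ = kv.most_similar(positive=["ela", male],
                                      negative=["ele"], topn=1)[0]
        if top_word == female:
            correct += 1
    return correct / len(profession_pairs)

# Illustrative call with two of the fifty profession pairs (placeholders):
# accuracy = w2v_evaluate("cbow_s300.txt",
#                         [("advogado", "advogada"), ("enfermeiro", "enfermeira")])
```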
<s>[INST] Citation Data of Czech Apex Courts [/INST] Analysis of the way court decisions refer to each other provides us with important insights into the decisionmaking process at courts This is true both for the common law courts and for their counterparts in the countries belonging to the continental legal system Citation data can be used for both qualitative and quantitative studies casting light in the behavior of specific judges through document analysis or allowing complex studies into changing the nature of courts in transforming countries That being said it is still difficult to create sufficiently large citation datasets to allow a complex research In the case of the Czech Republic it was difficult to obtain a relevant dataset of the court decisions of the apex courts Supreme Court Supreme Administrative Court and Constitutional Court Due to its size it is nearly impossible to extract the references manually One has to reach out for an automation of such task However study of court decisions displayed many different ways that courts use to cite even decisions of their own not to mention the decisions of other courtsThe great diversity in citations led us to the use of means of the natural language processing for the recognition and the extraction of the citation data from court decisions of the Czech apex courts In this paper we describe the tool ultimately used for the extraction of the references from the court decisions together with a subsequent way of manual processing of the raw data to achieve a higherquality dataset Section SECREF2 maps the related work in the area of legal citation analysis SectionSECREF1 reference recognition Section SECREF2 text segmentation Section SECREF4 and data availability Section SECREF3 Section SECREF3 describes the method we used for the citation extraction listing the individual models and the way we have combined these models into the NLP pipeline Section SECREF4 presents results in the terms of evaluation of the performance of our pipeline the statistics of the raw data further manual processing and statistics of the final citation dataset Section SECREF5 discusses limitations of our work and outlines the possible future development Section SECREF6 concludes this paper The legal citation analysis is an emerging phenomenon in the field of the legal theory and the legal empirical researchThe legal citation analysis employs tools provided by the field of network analysis In spite of the longterm use of the citations in the legal domain eg the use of Shepards Citations since 1873 interest in the network citation analysis increased significantly when Fowler et al published the two pivotal works on the case law citations by the Supreme Court of the United States BIBREF0 BIBREF1 Authors used the citation data and network analysis to test the hypotheses about the function of stare decisis the doctrine and other issues of legal precedents In the continental legal system this work was followed by Winkels and de Ruyter BIBREF2 Authors adopted similar approach to Fowler to the court decisions of the Dutch Supreme Court Similar methods were later used by Derln and Lindholm BIBREF3 BIBREF4 and Panagis and adl BIBREF5 for the citation data of the Court of Justice of the European Union and by Olsen and Kksu for the citation data of the European Court of Human Rights BIBREF6 Additionally a minor part in research in the legal network analysis resulted in the past in practical tools designed to help lawyers conduct the case law research Kuppevelt and van Dijck 
built prototypes employing these techniques in the Netherlands BIBREF7 Grg a Weisz introduced the new legal information retrieval system Justeus based on a large database of the legal sources and partly on the network analysis methods BIBREF8 The area of reference recognition already contains a large amount of work It is concerned with recognizing text spans in documents that are referring to other documents As such it is a classical topic within the AI Law literature The extraction of references from the Italian legislation based on regular expressions was reported by Palmirani et al BIBREF9 The main goal was to bring references under a set of common standards to ensure the interoperability between different legal information systems De Maat et al BIBREF10 focused on an automated detection of references to legal acts in Dutch language Their approach consisted of a grammar covering increasingly complex citation patterns Opijnen BIBREF11 aimed for a reference recognition and a reference standardization using regular expressions accounting for multiple the variant of the same reference and multiple vendorspecific identifiers The language specific work by Kr et al BIBREF12 focused on the detecting and classification references to other court decisions and legal acts Authors used a statistical recognition HMM and Perceptron algorithms and reported F1measure over 90 averaged over all entities It is the stateofart in the automatic recognition of references in the Czech court decisions Unfortunately it allows only for the detection of docket numbers and it is unable to recognize courtspecific or vendorspecific identifiers in the court decisions Other language specificwork includes our previous reference recognition model presented in BIBREF13 Prediction model is based on conditional random fields and it allows recognition of different constituents which then establish both explicit and implicit caselaw and doctrinal references Parts of this model were used in the pipeline described further within this paper in Section SECREF3 Large scale quantitative and qualitative studies are often hindered by the unavailability of court data Access to court decisions is often hindered by different obstacles In some countries court decisions are not available at all while in some other they are accessible only through legal information systems often proprietary This effectively restricts the access to court decisions in terms of the bulk data This issue was already approached by many researchers either through making available selected data for computational linguistics studies or by making available datasets of digitized data for various purposes Nonexhaustive list of publicly available corpora includes British Law Report Corpus BIBREF14 The Corpus of US Supreme Court Opinions BIBREF15the HOLJ corpus BIBREF16 the Corpus of Historical English Law Reports Corpus de Sentencias Penales BIBREF17 Juristisches Referenzkorpus BIBREF18 and many others Language specific work in this area is presented by the publicly available Czech Court Decisions Corpus CzCDC 10 BIBREF19 This corpus contains majority of court decisions of the Czech Supreme Court the Supreme Administrative Court and the Constitutional Court hence allowing a largescale extraction of references to yield representative results The CzCDC 10 was used as a dataset for extraction of the references as is described further within this paper in Section SECREF3 Unfortunately despite containing 237 723 court decisions issued between 1st January 1993 and 30th 
September 2018 it is not complete This fact is reflected in the analysis of the results A large volume of legal information is available in unstructured form which makes processing these data a challenging task both for human lawyers and for computers Schweighofer BIBREF20 called for generic tools allowing a document segmentation to ease the processing of unstructured data by giving them some structure Topicbased segmentation often focuses on the identifying specific sentences that present borderlines of different textual segments The automatic segmentation is not an individual goal it always serves as a prerequisite for further tasks requiring structured data Segmentation is required for the text summarization BIBREF21 BIBREF22 keyword extraction BIBREF23 textual information retrieval BIBREF24 and other applications requiring input in the form of structured data Major part of research is focused on semantic similarity methodsThe computing similarity between the parts of text presumes that a decrease of similarity means a topical border of two text segments This approach was introduced by Hearst BIBREF22 and was used by Choi BIBREF25 and Heinonen BIBREF26 as well Another approach takes word frequencies and presumes a border according to different key words extracted Reynar BIBREF27 authored graphical method based on statistics called dotplotting Similar techniques were used by Ye BIBREF28 or Saravanan BIBREF29 Bommarito et al BIBREF30 introduced a Python library combining different features including pretrained models to the use for automatic legal text segmentation Li BIBREF31 included neural network into his method to segment Chinese legal texts avelka and Ashley BIBREF32 similarly introduced the machine learning based approach for the segmentation of US court decisions texts into seven different parts Authors reached high success rates in recognizing especially the Introduction and Analysis parts of the decisions Language specific work includes the model presented by Harata et al BIBREF33 This work focuses on segmentation of the Czech court decisions into predefined topical segments Parts of this segmentation model were used in the pipeline described further within this paper in Section SECREF3 In this paper we present and describe the citation dataset of the Czech toptier courts To obtain this dataset we have processed the court decisions contained in CzCDC 10 dataset by the NLP pipeline consisting of the segmentation model introduced in BIBREF33 and parts of the reference recognition model presented in BIBREF13 The process is described in this section Novotn and Harata BIBREF19 prepared a dataset of the court decisions of the Czech Supreme Court the Supreme Administrative Court and the Constitutional Court The dataset contains 237723 decisions published between 1st January 1993 and the 30th September 2018 These decisions are organised into three subcorpora The subcorpus of the Supreme Court contains 111977 decisions the subcorpus of the Supreme Administrative Court contains 52660 decisions and the subcorpus of the Constitutional Court contains 73086 decisions Authors in BIBREF19 assessed that the CzCDC currently contains approximately 91 of all decisions of the Supreme Court 995 of all decisions of the Constitutional Court and 999 of all decisions of the Supreme Administrative Court As such it presents the best currently available dataset of the Czech toptier court decisions Harata and avelka BIBREF13 introduced a reference recognition model trained specifically for the Czech toptier 
courts Moreover authors made their training data available in the BIBREF34 Given the lack of a single citation standard references in this work consist of smaller units because these were identified as more uniform and therefore better suited for the automatic detection The model was trained using conditional random fields which is a random field model that is globally conditioned on an observation sequence O The states of the model correspond to event labels E Authors used a firstorder conditional random fields Model was trained for each type of the smaller unit independently Harata et al BIBREF33 authors introduced the model for the automatic segmentation of the Czech court decisions into predefined multiparagraph parts These segments include the Header introduction of given case History procedural history prior the apex court proceeding SubmissionRejoinder petition of plaintiff and response of defendant Argumentation argumentation of the court hearing the case Footer legally required information such as information about further proceedings Dissent and Footnotes The model for automatic segmentation of the text was trained using conditional random fields The model was trained for each type independently In order to obtain the citation data of the Czech apex courts it was necessary to recognize and extract the references from the CzCDC 10 Given that training data for both the reference recognition model BIBREF13 BIBREF34 and the text segmentation model BIBREF33 are publicly available we were able to conduct extensive error analysis and put together a pipeline to arguably achieve the maximum efficiency in the task The pipeline described in this part is graphically represented in Figure FIGREF10 As the first step every document in the CzCDC 10 was segmented using the text segmentation model This allowed us to treat different parts of processed court documents differently in the further text processing Specifically it allowed us to subject only the specific part of a court decision in this case the court argumentation to further the reference recognition and extraction A textual segment recognised as the court argumentation is then processed further As the second step parts recognised by the text segmentation model as a court argumentation was processed using the reference recognition model After carefully studying the evaluation of the models performance in BIBREF13 we have decided to use only part of the said model Specifically we have employed the recognition of the court identifiers as we consider the rest of the smaller units introduced by Harata and avelka of a lesser value for our task Also deploying only the recognition of the court identifiers allowed us to avoid the problematic parsing of smaller textual units into the references The text spans recognised as identifiers of court decisions are then processed further At this point it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further The evaluation of the performance is summarised in Table TABREF11 It shows that organising the two models into the pipeline boosted the performance of the reference recognition model leading to a higher F1 measure in the initial recognition of the text spans and their classification Further processing included control and repair of incompletely identified court identifiers manual identification and sorting of identifiers as belonging to Supreme Court Supreme Administrative Court or Constitutional Court rulebased manual standardisation of 
different types of court identifiers rulebased manual parsing of identifiers with court decisions available in CzCDC 10 Overall through the process described in Section SECREF3 we have retrieved three datasets of extracted references one dataset per each of the apex courts These datasets consist of the individual pairs containing the identification of the decision from which the reference was retrieved and the identification of the referred documents As we only extracted references to other judicial decisions we obtained 471319 references from Supreme Court decisions 167237 references from Supreme Administrative Court decisions and 264463 references from Constitutional Court Decisions These are numbers of text spans identified as references prior the further processing described in Section SECREF3 These references include all identifiers extracted from the court decisions contained in the CzCDC 10 Therefore this number includes all other court decisions including lower courts the Court of Justice of the European Union the European Court of Human Rights decisions of other public authorities etc Therefore it was necessary to classify these into references referring to decisions of the Supreme Court Supreme Administrative Court Constitutional Court and others These groups then underwent a standardisation or more precisely a resolution of different court identifiers used by the Czech courts Numbers of the references resulting from this step are shown in Table TABREF16 Following this step we linked court identifiers with court decisions contained in the CzCDC 10 Given that the CzCDC 10 does not contain all the decisions of the respective courts we were not able to parse all the references Numbers of the references resulting from this step are shown in Table TABREF17 This paper introduced the first dataset of citation data of the three Czech apex courts Understandably there are some pitfalls and limitations to our approach As we admitted in the evaluation in Section SECREF9 the models we included in our NLP pipelines are far from perfect Overall we were able to achieve a reasonable recall and precision rate which was further enhanced by several round of manual processing of the resulting data However it is safe to say that we did not manage to extract all the references Similarly because the CzCDC 10 dataset we used does not contain all the decisions of the respective courts we were not able to parse all court identifiers to the documents these refer to Therefore the future work in this area may include further development of the resources we used The CzCDC 10 would benefit from the inclusion of more documents of the Supreme Court the reference recognition model would benefit from more refined training methods etc That being said the presented dataset is currently the only available resource of its kind focusing on the Czech court decisions that is freely available to research teams This significantly reduces the costs necessary to conduct these types of studies involving network analysis and the similar techniques requiring a large amount of citation data In this paper we have described the process of the creation of the first dataset of citation data of the three Czech apex courts The dataset is publicly available for download at httpsgithubcomczechcaselawrelevanceczechcourtcitationsdataset JH and TN gratefully acknowledge the support from the Czech Science Foundation under grant no GA1720645S TN also acknowledges the institutional support of the Masaryk University This paper was presented at 
CEILI Workshop on Legal Data Analysis held in conjunction with Jurix 2019 in Madrid Spain </s>
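As an illustration of how the released citation pairs can feed the kind of network analysis motivated above, the following minimal sketch loads one per-court dataset into a directed graph and lists the most-cited decisions; the CSV layout (citing decision identifier, cited decision identifier, no header) and the file name are assumptions about the published files rather than documented facts.

```python
# Sketch: loading one of the released citation-pair files into a directed
# graph for citation-network analysis. The CSV layout (citing identifier,
# cited identifier, no header row) and the file name are assumptions.
import csv
import networkx as nx

def load_citation_graph(path):
    graph = nx.DiGraph()
    with open(path, encoding="utf-8") as f:
        for citing, cited in csv.reader(f):
            graph.add_edge(citing.strip(), cited.strip())   # edge: citing -> cited
    return graph

graph = load_citation_graph("supreme_court_citations.csv")   # hypothetical file name
# Ten most frequently cited decisions (highest in-degree):
for docket, n_citations in sorted(graph.in_degree(),
                                  key=lambda kv: kv[1], reverse=True)[:10]:
    print(docket, n_citations)
```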
<s>[INST] LAXARY: A Trustworthy Explainable Twitter Analysis Model for Post-Traumatic Stress Disorder Assessment [/INST] Combat veterans diagnosed with PTSD are substantially more likely to engage in a number of high risk activities including engaging in interpersonal violence attempting suicide committing suicide binge drinking and drug abuse BIBREF0 Despite improved diagnostic screening outpatient mental health and inpatient treatment for PTSD the syndrome remains treatment resistant is typically chronic and is associated with numerous negative health effects and higher treatment costs BIBREF1 As a result the Veteran Administrations National Center for PTSD NCPTSD suggests to reconceptualize PTSD not just in terms of a psychiatric symptom cluster but focusing instead on the specific high risk behaviors associated with it as these may be directly addressed though behavioral change efforts BIBREF0 Consensus prevalence estimates suggest that PTSD impacts between 1520 of the veteran population which is typically chronic and treatment resistant BIBREF0 The PTSD patients support programs organized by different veterans peer support organization use a set of surveys for local weekly assessment to detect the intensity of PTSD among the returning veterans However recent advanced evidencebased care for PTSD sufferers surveys have showed that veterans suffered with chronic PTSD are reluctant in participating assessments to the professionals which is another significant symptom of war returning veterans with PTSD Several existing researches showed that twitter posts of war veterans could be a significant indicator of their mental health and could be utilized to predict PTSD sufferers in time before going out of control BIBREF2 BIBREF3 BIBREF4 BIBREF5 BIBREF6 BIBREF7 BIBREF8 However all of the proposed methods relied on either blackbox machine learning methods or language models based sentiments extraction of posted texts which failed to obtain acceptability and trust of clinicians due to the lack of their explainability In the context of the above research problem we aim to answer the following research questions Given clinicians have trust on clinically validated PTSD assessment surveys can we fill out PTSD assessment surveys using twitter posts analysis of warveterans If possible what sort of analysis and approach are needed to develop such XAI model to detect the prevalence and intensity of PTSD among warveterans only using the social media twitter analysis where users are free to share their everyday mental and social conditions How much quantitative improvement do we observe in our models ability to explain both detection and intensity estimation of PTSD In this paper we propose LAXARY an explainable and trustworthy representation of PTSD classification and its intensity for clinicians The key contributions of our work are summarized below The novelty of LAXARY lies on the proposed clinical surveysbased PTSD Linguistic dictionary creation with wordsaspects which represents the instantaneous perturbation of twitterbased sentiments as a specific pattern and help calculate the possible scores of each survey question LAXARY includes a modified LIWC model to calculate the possible scores of each survey question using PTSD Linguistic Dictionary to fill out the PTSD assessment surveys which provides a practical way not only to determine finegrained discrimination of physiological and psychological health markers of PTSD without incurring the expensive and laborious insitu laboratory testing or surveys 
but also obtain trusts of clinicians who are expected to see traditional survey results of the PTSD assessment Finally we evaluate the accuracy of LAXARY model performance and reliabilityvalidity of generated PTSD Linguistic Dictionary using real twitter users posts Our results show that given normal weekly messages posted in twitter LAXARY can provide very high accuracy in filling up surveys towards identifying PTSD approx 96 and its intensity approx 12 mean squared error Fig FIGREF7 shows a schematic representation of our proposed model It consists of the following logical steps i Develop PTSD Detection System using twitter posts of warveteransii design real surveys from the popular symptoms based mental disease assessment surveys iii define single category and create PTSD Linguistic Dictionary for each survey question and multiple aspectwords for each question iv calculate alpha scores for each category and dimension based on linguistic inquiry and word count as well as the aspectswords based dictionary v calculate scaling scores sscores for each dimension based on the alpha scores and sscores of each category based on the sscores of its dimensions vi rank features according to the contributions of achieving separation among categories associated with different alpha scores and sscores and select feature sets that minimize the overlap among categories as associated with the target classifier SGD and finally vii estimate the quality of selected featuresbased classification for filling up surveys based on classified categories ie PTSD assessment which is trustworthy among the psychiatry community Twitter activity based mental health assessment has been utmost importance to the Natural Language Processing NLP researchers and social media analysts for decades Several studies have turned to social media data to study mental health since it provides an unbiased collection of a persons language and behavior which has been shown to be useful in diagnosing conditions BIBREF9 used ngram language model CLM based sscore measure setting up some user centric emotional word sets BIBREF10 used positive and negative PTSD data to train three classifiers i one unigram language model ULM ii one character ngram language model CLM and 3 one from the LIWC categories alpha scores and found that last one gives more accuracy than other ones BIBREF11 used two types of sscores taking the ratio of negative and positive language models Differences in language use have been observed in the personal writing of students who score highly on depression scales BIBREF2 forum posts for depression BIBREF3 self narratives for PTSD BIBREF4 BIBREF5 and chat rooms for bipolar BIBREF6 Specifically in social media differences have previously been observed between depressed and control groups as assessed by internetadministered batteries via LIWC depressed users more frequently use first person pronouns BIBREF7 and more frequently use negative emotion words and anger words on Twitter but show no differences in positive emotion word usage BIBREF8 Similarly an increase in negative emotion and first person pronouns and a decrease in third person pronouns via LIWC is observed as well as many manifestations of literature findings in the pattern of life of depressed users eg social engagement demographics BIBREF12 Differences in language use in social media via LIWC have also been observed between PTSD and control groups BIBREF13 All of the prior works used some random dictionary related to the human sentiment positivenegative word sets 
as category words to estimate the mental health but very few of them addressed the problem of explainability of their solution to obtain trust of clinicians Islam et al proposed an explainable topic modeling framework to rank different mental health features using Local Interpretable ModelAgnostic Explanations and visualize them to understand the features involved in mental health status classification using the BIBREF14 which fails to provide trust of clinicians due to its lack of interpretability in clinical terms In this paper we develop LAXARY model where first we start investigating clinically validated survey tools which are trustworthy methods of PTSD assessment among clinicians build our category sets based on the survey questions and use these as dictionary words in terms of first person singular number pronouns aspect for next level LIWC algorithm Finally we develop a modified LIWC algorithm to estimate survey scores similar to sentiment category scores of naive LIWC which is both explainable and trustworthy to clinicians There are many clinically validated PTSD assessment tools that are being used both to detect the prevalence of PTSD and its intensity among sufferers Among all of the tools the most popular and well accepted one is DomainSpecific RiskTaking DOSPERT Scale BIBREF15 This is a psychometric scale that assesses risk taking in five content domains financial decisions separately for investing versus gambling healthsafety recreational ethical and social decisions Respondents rate the likelihood that they would engage in domainspecific risky activities Part I An optional Part II assesses respondents perceptions of the magnitude of the risks and expected benefits of the activities judged in Part I There are more scales that are used in risky behavior analysis of individuals daily activities such as The Berlin Social Support Scales BSSS BIBREF16 and Values In Action Scale VIAS BIBREF17 Dryhootch America BIBREF18 BIBREF19 a veteran peer support community organization chooses 5 6 and 5 questions respectively from the above mentioned survey systems to assess the PTSD among war veterans and consider rest of them as irrelevant to PTSD The details of dryhootch chosen survey scale are stated in Table TABREF13 TableTABREF14 shows a sample DOSPERT scale demographic chosen by dryhootch The threshold in Table TABREF13 is used to calculate the risky behavior limits For example if one individuals weekly DOSPERT score goes over 28 he is in critical situation in terms of risk taking symptoms of PTSD Dryhootch defines the intensity of PTSD into four categories based on the weekly survey results of all three clinical survey tools DOSPERT BSSS and VIAS High risk PTSD If one individual veterans weekly PTSD assessment scores go above the threshold for all three PTSD assessment tools ie DOSPERT BSSS and VIAS then heshe is in high risk situation which needs immediate mental support to avoid catastrophic effect of individuals health or surrounding peoples life Moderate risk PTSD If one individual veterans weekly PTSD assessment scores go above the threshold for any two of the three PTSD assessment tools then heshe is in moderate risk situation which needs close observation and peer mentoring to avoid their risk progression Low risk PTSD If one individual veterans weekly PTSD assessment scores go above the threshold for any one of the three PTSD assessment tools then heshe has light symptoms of PTSD No PTSD If one individual veterans weekly PTSD assessment scores go below the threshold for all 
three PTSD assessment tools then heshe has no PTSD To develop an explainable model we first need to develop twitterbased PTSD detection algorithm In this section we describe the data collection and the development of our core LAXARY model We use an automated regular expression based searching to find potential veterans with PTSD in twitter and then refine the list manually First we select different keywords to search twitter users of different categories For example to search selfclaimed diagnosed PTSD sufferers we select keywords related to PTSD for example post trauma post traumatic disorder PTSD etc We use a regular expression to search for statements where the user selfidentifies as being diagnosed with PTSD For example Table TABREF27 shows a selfidentified tweet posts To search veterans we mostly visit to different twitter accounts of veterans organizations such as MA Women Veterans WomenVeterans Illinois Veterans ILVetsAffairs Veterans Benefits VAVetBenefits etc We define an inclusion criteria as follows one twitter user will be part of this study if heshe describes himselfherself as a veteran in the introduction and have at least 25 tweets in last week After choosing the initial twitter users we search for selfidentified PTSD sufferers who claim to be diagnosed with PTSD in their twitter posts We find 685 matching tweets which are manually reviewed to determine if they indicate a genuine statement of a diagnosis for PTSD Next we select the username that authored each of these tweets and retrieve last weeks tweets via the Twitter API We then filtered out users with less than 25 tweets and those whose tweets were not at least 75 in English measured using an automated language ID system This filtering left us with 305 users as positive examples We repeated this process for a group of randomly selected users We randomly selected 3000 twitter users who are veterans as per their introduction and have at least 25 tweets in last one week After filtering as above in total 2423 users remain whose tweets are used as negative examples developing a 2728 users entire weeks twitter posts where 305 users are selfclaimed PTSD sufferers We distributed Dryhootch chosen surveys among 1200 users 305 users are self claimed PTSD sufferers and rest of them are randomly chosen from previous 2423 users and received 210 successful responses Among these responses 92 users were diagnosed as PTSD by any of the three surveys and rest of the 118 users are diagnosed with NO PTSD Among the clinically diagnosed PTSD sufferers 17 of them were not selfidentified before However 7 of the selfidentified PTSD sufferers are assessed with no PTSD by PTSD assessment tools The response rates of PTSD and NO PTSD users are 27 and 12 In summary we have collected one week of tweets from 2728 veterans where 305 users claimed to have diagnosed with PTSD After distributing Dryhootch surveys we have a dataset of 210 veteran twitter users among them 92 users are assessed with PTSD and 118 users are diagnosed with no PTSD using clinically validated surveys The severity of the PTSD are estimated as Nonexistent light moderate and high PTSD based on how many surveys support the existence of PTSD among the participants according to dryhootch manual BIBREF18 BIBREF19 We download 210 users all twitter posts who are war veterans and clinically diagnosed with PTSD sufferers as well which resulted a total 12385 tweets Fig FIGREF16 shows each of the 210 veteran twitter users monthly average tweets We categorize these Tweets into two groups Tweets 
related to work and Tweets not related to work That is only the Tweets that use a form of the word work eg workworked working worker etc or job eg job jobs jobless etc are identified as workrelated Tweets with the remaining categorized as nonworkrelated Tweets This categorization method increases the likelihood that most Tweets in the work group are indeed talking about work or job for instance Back to work Projects are firing back up and moving ahead now that baseball is done This categorization results in 456 workrelated Tweets about 54 of all Tweets written in English and 75 unique Twitter users To conduct weeklylevel analysis we consider three categorizations of Tweets ie overall Tweets workrelated Tweets and non workrelated Tweets on a daily basis and create a text file for each week for each group We use Coppersmith proposed PTSD classification algorithm to develop our baseline blackbox model BIBREF11 We utilize our positive and negative PTSD data 92118 to train three classifiers i unigram language model ULM examining individual whole words ii character ngram language model CLM and iii LIWC based categorical models above all of the prior ones The LMs have been shown effective for Twitter classification tasks BIBREF9 and LIWC has been previously used for analysis of mental health in Twitter BIBREF10 The language models measure the probability that a word ULM or a string of characters CLM was generated by the same underlying process as the training data We first train one of each language model clm and ulm from the tweets of PTSD users and another model clm and ulm from the tweets from No PTSD users Each test tweet t is scored by comparing probabilities from each LM called sscore A threshold of 1 for sscore divides scores into positive and negative classes In a multiclass setting the algorithm minimizes the cross entropy selecting the model with the highest probability For each user we calculate the proportion of tweets scored positively by each LIWC category These proportions are used as a feature vector in a loglinear regression model BIBREF20 Prior to training we preprocess the text of each tweet we replace all usernames with a single token USER lowercase all text and remove extraneous whitespace We also exclude any tweet that contained a URL as these often pertain to events external to the user We conduct a LIWC analysis of the PTSD and nonPTSD tweets to determine if there are differences in the language usage of PTSD users We applied the LIWC battery and examined the distribution of words in their language Each tweet was tokenized by separating on whitespace For each user for a subset of the LIWC categories we measured the proportion of tweets that contained at least one word from that category Specifically we examined the following nine categories first second and third person pronouns swear anger positive emotion negative emotion death and anxiety words Second person pronouns were used significantly less often by PTSD users while third person pronouns and words about anxiety were used significantly more often The heart of LAXARY framework is the construction of PTSD Linguistic Dictionary Prior works show that linguistic dictionary based text analysis has been much effective in twitter based sentiment analysis BIBREF21 BIBREF22 Our work is the first of its kind that develops its own linguistic dictionary to explain automatic PTSD assessment to confirm trustworthiness to clinicians We use LIWC developed WordStat dictionary format for our text analysis BIBREF23 The LIWC application 
relies on an internal default dictionary that defines which words should be counted in the target text files To avoid confusion in the subsequent discussion text words that are read and analyzed by WordStat are referred to as target words Words in the WordStat dictionary file will be referred to as dictionary words Groups of dictionary words that tap a particular domain eg negative emotion words are variously referred to as subdictionaries or word categories Fig FIGREF8 is a sample WordStat dictionary There are several steps to use this dictionary which are stated as follows Pronoun selection At first we have to define the pronouns of the target sentiment Here we used first person singular number pronouns ie I me mine etc that means we only count those sentences or segments which are only related to first person singular number ie related to the person himself Category selection We have to define the categories of each word set thus we can analyze the categories as well as dimensions text analysis scores We chose three categories based on the three different surveys 1 DOSPERT scale 2 BSSS scale and 3 VIAS scale Dimension selection We have to define the word sets also called dimension for each category We chose one dimension for each of the questions under each category to reflect real survey system evaluation Our chosen categories are state in Fig FIGREF20 Score calculation alpha score alpha scores refer to the Cronbachs alphas for the internal reliability of the specific words within each category The binary alphas are computed on the ratio of occurrence and nonoccurrence of each dictionary word whereas the raw or uncorrected alphas are based on the percentage of use of each of the category words within texts After the PTSD Linguistic Dictionary has been created we empirically evaluate its psychometric properties such as reliability and validity as per American Standards for educational and psychological testing guideline BIBREF24 In psychometrics reliability is most commonly evaluated by Cronbachs alpha which assesses internal consistency based on intercorrelations and the number of measured items In the text analysis scenario each word in our PTSD Linguistic dictionary is considered an item and reliability is calculated based on each text files response to each word item which forms an Nnumber of text files times Jnumber of words or stems in a dictionary data matrix There are two ways to quantify such responses using percentage data uncorrected method or using present or not data binary method BIBREF23 For the uncorrected method the data matrix comprises percentage values of each wordstem are calculated from each text file For the binary method the data matrix quantifies whether or not a word was used in a text file where 1 represents yes and 0 represents no Once the data matrix is created it is used to calculate Cronbachs alpha based on its intercorrelation matrix among the word percentages We assess reliability based on our selected 210 users Tweets which further generated a 23562 response matrix after running the PTSD Linguistic Dictionary for each user The response matrix yields reliability of 89 based on the uncorrected method and 96 based on the binary method which confirm the high reliability of our PTSD Dictionary created PTSD survey based categories After assessing the reliability of the PTSD Linguistic dictionary we focus on the two most common forms of construct validity convergent validity and discriminant validity BIBREF25 Convergent validity provides evidence that two 
measures designed to assess the same construct are indeed related discriminate validity involves evidence that two measures designed to assess different constructs are not too strongly related In theory we expect that the PTSD Linguistic dictionary should be positively correlated with other negative PTSD constructs to show convergent validity and not strongly correlated with positive PTSD constructs to show discriminant validity To test these two types of validity we use the same 210 users tweets used for the reliability assessment The results revealed that the PTSD Linguistic dictionary is indeed positively correlated with negative construct dictionaries including the overall negative PTSD dictionary r3664p001 Table TABREF25 shows all 16 categorical dictionaries These results provide strong support for the measurement validity for our newly created PTSD Linguistic dictionary We use the exact similar method of LIWC to extract alpha scores for each dimension and categories except we use our generated PTSD Linguistic Dictionary for the task BIBREF23 Thus we have total 16 alpha scores in total Meanwhile we propose a new type of feature in this regard which we called scalingscore sscore sscore is calculated from alpha scores The purpose of using sscore is to put exact scores of each of the dimension and category thus we can apply the same method used in real weekly survey system The idea is we divide each category into their corresponding scale factor ie for DOSPERT scale BSSS scale and VIAS scales and divide them into 8 3 and 5 scaling factors which are used in real survey system Then we set the sscore from the scaling factors from the alpha scores of the corresponding dimension of the questions The algorithm is stated in Figure FIGREF23 Following Fig FIGREF23 we calculate the sscore for each dimension Then we add up all the sscore of the dimensions to calculate cumulative sscore of particular categories which is displayed in Fig FIGREF22 Finally we have total 32 features among them 16 are alpha scores and 16 are sscores for each category ie each question We add both of alpha and s scores together and scale according to their corresponding survey score scales using minmax standardization Then the final output is a 16 valued matrix which represent the score for each questions from three different Dryhootch surveys We use the output to fill up each survey estimate the prevalence of PTSD and its intensity based on each tools respective evaluation metric To validate the performance of LAXARY framework we first divide the entire 210 users twitter posts into training and test dataset Then we first developed PTSD Linguistic Dictionary from the twitter posts from training dataset and apply LAXARY framework on test dataset To provide an initial results we take 50 of users last weeks the week they responded of having PTSD data to develop PTSD Linguistic dictionary and apply LAXARY framework to fill up surveys on rest of 50 dataset The distribution of this trainingtest dataset segmentation followed a 50 distribution of PTSD and No PTSD from the original dataset Our final survey based classification results showed an accuracy of 96 in detecting PTSD and mean squared error of 12 in estimating its intensity given we have four intensity No PTSD Low Risk PTSD Moderate Risk PTSD and High Risk PTSD with a score of 0 1 2 and 3 respectively Table TABREF29 shows the classification details of our experiment which provide the very good accuracy of our classification To compare the outperformance of our method we 
also implemented Coppersmith et al proposed method and achieved an 86 overall accuracy of detecting PTSD users BIBREF11 following the same trainingtest dataset distribution Fig FIGREF28 illustrates the comparisons between LAXARY and Coppersmith et al proposed method Here we can see the outperformance of our proposed method as well as the importance of sscore estimation We also illustrates the importance of alpha score and Sscore in Fig FIGREF30 Fig FIGREF30 illustrates that if we change the number of training samples LAXARY models outperforms Coppersmith et al proposed model under any condition In terms of intensity Coppersmith et al totally fails to provide any idea however LAXARY provides extremely accurate measures of intensity estimation for PTSD sufferers as shown in Fig FIGREF31 which can be explained simply providing LAXARY model filled out survey details Table TABREF29 shows the details of accuracies of both PTSD detection and intensity estimation Fig FIGREF32 shows the classification accuracy changes over the training sample sizes for each survey which shows that DOSPERT scale outperform other surveys Fig FIGREF33 shows that if we take previous weeks instead of only the week diagnosis of PTSD was taken there are no significant patterns of PTSD detection LAXARY is a highly ambitious model that targets to fill up clinically validated survey tools using only twitter posts Unlike the previous twitter based mental health assessment tools LAXARY provides a clinically interpretable model which can provide better classification accuracy and intensity of PTSD assessment and can easily obtain the trust of clinicians The central challenge of LAXARY is to search twitter users from twitter search engine and manually label them for analysis While developing PTSD Linguistic Dictionary although we followed exactly same development idea of LIWC WordStat dictionary and tested reliability and validity our dictionary was not still validated by domain experts as PTSD detection is highly sensitive issue than stressdepression detection Moreover given the extreme challenges of searching veterans in twitter using our selection and inclusion criteria it was extremely difficult to manually find the evidence of the selfclaimed PTSD sufferers Although we have shown extremely promising initial findings about the representation of a blackbox model into clinically trusted tools using only 210 users data is not enough to come up with a trustworthy model Moreover more clinical validation must be done in future with real clinicians to firmly validate LAXARY model provided PTSD assessment outcomes In future we aim to collect more data and run not only nationwide but also internationalwide data collection to establish our innovation into a real tool Apart from that as we achieved promising results in detecting PTSD and its intensity using only twitter data we aim to develop Linguistic Dictionary for other mental health issues too Moreover we will apply our proposed method in other types of mental illness such as depression bipolar disorder suicidal ideation and seasonal affective disorder SAD etc As we know accuracy of particular social media analysis depends on the dataset mostly We aim to collect more data engaging more researchers to establish a set of mental illness specific Linguistic Database and evaluation technique to solidify the genralizability of our proposed method To promote better comfort to the trauma patients it is really important to detect Post Traumatic Stress Disorder PTSD sufferers in time before 
going out of control that may result catastrophic impacts on society people around or even sufferers themselves Although psychiatrists invented several clinical diagnosis tools ie surveys by assessing symptoms signs and impairment associated with PTSD most of the times the process of diagnosis happens at the severe stage of illness which may have already caused some irreversible damages of mental health of the sufferers On the other hand due to lack of explainability existing twitter based methods are not trusted by the clinicians In this paper we proposed LAXARY a novel method of filling up PTSD assessment surveys using weekly twitter posts As the clinical surveys are trusted and understandable method we believe that this method will be able to gain trust of clinicians towards early detection of PTSD Moreover our proposed LAXARY model which is first of its kind can be used to develop any type of mental disorder Linguistic Dictionary providing a generalized and trustworthy mental health assessment framework of any kind </s>
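To make the reliability computation described above concrete, the following is a minimal sketch of Cronbach's alpha over a texts-by-dictionary-words response matrix, covering both the binary (occurrence) and the uncorrected (percentage-of-use) variants. The function name and the toy matrix are illustrative assumptions, not the LAXARY code or data.

```python
import numpy as np

def cronbach_alpha(responses):
    """Cronbach's alpha for an (n_texts x n_items) response matrix.

    Each column is one dictionary word ("item"); each row is one text file.
    Works for both the binary (0/1 occurrence) and the uncorrected
    (percentage-of-use) response matrices described above.
    """
    responses = np.asarray(responses, dtype=float)
    n_items = responses.shape[1]
    item_variances = responses.var(axis=0, ddof=1)      # variance of each word column
    total_variance = responses.sum(axis=1).var(ddof=1)  # variance of per-text totals
    return (n_items / (n_items - 1)) * (1.0 - item_variances.sum() / total_variance)

# Toy example: 4 text files scored against 3 dictionary words (binary method).
binary_matrix = np.array([
    [1, 1, 0],
    [1, 1, 1],
    [0, 0, 0],
    [1, 0, 1],
])
print(round(cronbach_alpha(binary_matrix), 2))
```

The same routine applied to the percentage matrix gives the "uncorrected" alpha, so the two reliability figures reported above differ only in how the response matrix is populated.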
<s>[INST] Comprehensive Named Entity Recognition on CORD-19 with Distant or Weak Supervision [/INST] Coronavirus disease 2019 COVID19 is an infectious disease caused by severe acute respiratory syndrome coronavirus 2 SARSCoV2 The disease was first identified in 2019 in Wuhan Central China and has since spread globally resulting in the 20192020 coronavirus pandemic On March 16th 2020 researchers and leaders from the Allen Institute for AI Chan Zuckerberg Initiative CZI Georgetown Universitys Center for Security and Emerging Technology CSET Microsoft and the National Library of Medicine NLM at the National Institutes of Health released the COVID19 Open Research Dataset CORD19 of scholarly literature about COVID19 SARSCoV2 and the coronavirus group Named entity recognition NER is a fundamental step in text mining system development to facilitate the COVID19 studies There is critical need for NER methods that can quickly adapt to all the COVID19 related new types without much human effort for training data annotation We created this CORD19NER dataset with comprehensive named entity annotation on the CORD19 corpus 20200313 This dataset covers 75 finegrained named entity types CORD19NER is automatically generated by combining the annotation results from four sources In the following sections we introduce the details of CORD19NER dataset construction We also show some NER annotation results in this dataset The corpus is generated from the 29500 documents in the CORD19 corpus 20200313 We first merge all the metadata allsourcesmetadata20200313csv with their corresponding fulltext papers Then we create a tokenized corpus CORD19corpusjson for further NER annotations Corpus Tokenization The raw corpus is a combination of the title abstract and fulltext from the CORD19 corpus We first conduct automatic phrase mining on the raw corpus using AutoPhrase BIBREF0 Then we do the second round of tokenization with Spacy on the phrasereplaced corpus We have observed that keeping the AutoPhrase results will significantly improve the distantly and weaklysupervised NER performance Key Items The tokenized corpus includes the following items docid the line number 029499 in allsourcesmetadata20200313csv in the CORD19 corpus 20200313 sents sentid senttokens tokenized sentences and words as described above source CZI 1236 records PMC 27337 bioRxiv 566 and medRxiv 361 doi populated for all BioRxivMedRxiv paper records and most of the other records 26357 non null pmcid populated for all PMC paper records 27337 non null pubmedid populated for some of the records Other keys publishtime authors and journal The tokenized corpus CORD19corpusjson with the file schema and detailed descriptions can be found in our CORD19NER dataset CORD19NER annotation is a combination from four sources with different NER methods Pretrained NER on 18 general entity types from Spacy using the model encorewebsm Pretrained NER on 18 biomedical entity types from SciSpacy using the model ennerbionlp13cgmd Knowledge base KBguided NER on 127 biomedical entity types with our distantlysupervised NER methods BIBREF1 BIBREF2 We do not require any human annotated training data for the NER model training Instead We rely on UMLS as the input KB for distant supervision Seedguided NER on 9 new entity types specifically related to the COVID19 studies with our weaklysupervised NER method We only require several 1020 humaninput seed entities for each new type Then we expand the seed entity sets with CatE BIBREF3 and apply our distant NER method for the new entity 
type recognition The 9 new entity types with examples of their input seed are as follows Coronavirus COVID19 SARS MERS etc Viral Protein Hemagglutinin GP120 etc Livestock cattle sheep pig etc Wildlife bats wild animals wild birds etc Evolution genetic drift natural selection mutation rate etc Physical Science atomic charge Amber force fields Van der Waals interactions etc Substrate blood sputum urine etc Material copper stainless steel plastic etc Immune Response adaptive immune response cell mediated immunity innate immunity etc We merged all the entity types from the four sources and reorganized them into one entity type hierarchy Specifically we align all the types from SciSpacy to UMLS We also merge some finegrained UMLS entity types to their more coarsegrained types based on the corpus count Then we get a final entity type hierarchy with 75 finegrained entity types used in our annotations The entity type hierarchy CORD19typesxlsx can be found in our CORD19NER dataset Then we conduct named entity annotation with the four NER methods on the 75 finegrained entity types After we get the NER annotation results with the four different methods we merge the results into one file The conflicts are resolved by giving priority to different entity types annotated by different methods according to their annotation quality The final entity annotation results CORD19nerjson with the file schema and detailed descriptions can be found in our CORD19NER dataset In Figure FIGREF28 we show some examples of the annotation results in CORD19NER We can see that our distantly or weakly supervised methods achieve high quality recognizing the new entity types requiring only several seed examples as the input For example we recognized SARSCoV2 as the CORONAVIRUS type bat and pangolins as the WILDLIFE type and Van der Waals forces as the PHYSICALSCIENCE type This NER annotation results help downstream text mining tasks in discovering the origin and the physical nature of the virus Our NER methods are domainindependent that can be applied to corpus in different domains In addition we show another example of NER annotation on New York Times with our system in Figure FIGREF29 In Figure FIGREF30 we show the comparison of our annotation results with existing NERBioNER systems In Figure FIGREF30 we can see that only our method can identify SARSCoV2 as a coronavirus In Figure FIGREF30 we can see that our method can identify many more entities such as pylogenetic as a evolution term and bat as a wildlife In Figure FIGREF30 we can also see that our method can identify many more entities such as racism as a social behavior In summary our distantly and weaklysupervised NER methods are reliable for highquality entity recognition without requiring human effort for training data annotation In Table TABREF34 we show some examples of the most frequent entities in the annotated corpus Specifically we show the entity types including both our new types and some UMLS types that have not been manually annotated before We find our annotated entities very informative for the COVID19 studies For example the most frequent entities for the type SIGNORSYMPTOM behavior includes cough and respiratory symptoms that are the most common symptoms for COVID19 The most frequent entities for the type INDIVIDUALBEHAVIOR includes hand hygiene disclosures and absenteeism which indicates that people focus more on hand cleaning for the COVID19 issue Also the most frequent entities for the type MACHINEACTIVITY includes machine learning data processing and 
automation which indicates that people focus more on the automated methods that can process massive data for the COVID19 studies This type also includes telecommunication as the top results which is quite reasonable under the current COVID19 situation More examples can be found in our dataset In the future we will further improve the CORD19NER dataset quality We will also build text mining systems based on the CORD19NER dataset with richer functionalities We hope this dataset can help the text mining community build downstream applications We also hope this dataset can bring insights for the COVID19 studies both on the biomedical side and on the social side Research was sponsored in part by US DARPA KAIROS Program No FA87501921004 and SocialSim Program No W911NF17C0099 National Science Foundation IIS 1618481 IIS 1704532 and IIS1741317 and DTRA HDTRA11810026 Any opinions findings and conclusions or recommendations expressed herein are those of the authors and should not be interpreted as necessarily representing the views either expressed or implied of DARPA or the US Government The US Government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright annotation hereon The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies </s>
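The merging step described earlier, where annotations from the four NER sources are combined and conflicts are resolved by per-type priority, can be illustrated with the sketch below. The priority table, data layout, and function names are assumptions for illustration; the paper does not publish its exact resolution rules.

```python
# Illustrative sketch of priority-based merging of NER annotations from
# several sources. Each annotation is (start_token, end_token, entity_type, source).
# The priority table below is an assumption, not the one used for CORD-19-NER.
SOURCE_PRIORITY = {"seed_guided": 0, "kb_guided": 1, "scispacy": 2, "spacy": 3}  # lower = stronger

def overlaps(a, b):
    return a[0] < b[1] and b[0] < a[1]

def merge_annotations(annotations):
    """Keep, for every group of overlapping spans, the annotation whose source
    has the highest priority; ties are broken by preferring longer spans."""
    ranked = sorted(
        annotations,
        key=lambda a: (SOURCE_PRIORITY[a[3]], -(a[1] - a[0])),
    )
    kept = []
    for ann in ranked:
        if not any(overlaps(ann, k) for k in kept):
            kept.append(ann)
    return sorted(kept, key=lambda a: a[0])

sentence_annotations = [
    (0, 2, "CORONAVIRUS", "seed_guided"),   # "SARS-CoV-2"
    (0, 2, "GENE_OR_GENOME", "scispacy"),   # conflicting type for the same span
    (5, 6, "WILDLIFE", "seed_guided"),      # "bats"
]
print(merge_annotations(sentence_annotations))
# -> [(0, 2, 'CORONAVIRUS', 'seed_guided'), (5, 6, 'WILDLIFE', 'seed_guided')]
```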
<s>[INST] UniSent: Universal Adaptable Sentiment Lexica for 1000+ Languages [/INST] Sentiment classification is an important task which requires either word level or document level sentiment annotations Such resources are available for at most 136 languages BIBREF0 preventing accurate sentiment classification in a low resource setup Recent research efforts on crosslingual transfer learning enable to train models in high resource languages and transfer this information into other low resource languages using minimal bilingual supervision BIBREF1 BIBREF2 BIBREF3 Besides that little effort has been spent on the creation of sentiment lexica for low resource languages eg BIBREF0 BIBREF4 BIBREF5 We create and release Unisent the first massively crosslingual sentiment lexicon in more than 1000 languages An extensive evaluation across several languages shows that the quality of Unisent is close to manually created resources Our method is inspired by BIBREF6 with a novel combination of vocabulary expansion and domain adaptation using embedding spaces Similar to our work BIBREF7 also use massively parallel corpora to project POS tags and dependency relations across languages However their approach is based on assignment of the most probable label according to the alignment model from the source to the target language and does not include any vocabulary expansion or domain adaptation and do not use the embedding graphs Our method Adapted Sentiment Pivot requires a sentiment lexicon in one language eg English as well as a massively parallel corpus Following steps are performed on this input Our goal is to evaluate the quality of UniSent against several manually created sentiment lexica in different domains to ensure its quality for the low resource languages We do this in several steps As the gold standard sentiment lexica we chose manually created lexicon in Czech BIBREF11 German BIBREF12 French BIBREF13 Macedonian BIBREF14 and Spanish BIBREF15 These lexica contain general domain words as opposed to Twitter or Bible As gold standard for twitter domain we use emoticon dataset and perform emoticon sentiment prediction BIBREF16 BIBREF17 We use the manually created English sentiment lexicon WKWSCI in BIBREF18 as a resource to be projected over languages For the projection step Section SECREF1 we use the massively parallel Bible corpus in BIBREF8 We then propagate the projected sentiment polarities to all words in the Wikipedia corpus We chose Wikipedia here because its domain is closest to the manually annotated sentiment lexica we use to evaluate UniSent In the adaptation step we compute the shift between the vocabularies in the Bible and Wikipedia corpora To show that our adaptation method also works well on domains like Twitter we propose a second evaluation in which we use Adapted Sentiment Pivot to predict the sentiment of emoticons in Twitter To create our test sets we first split UniSent and our gold standard lexica as illustrated in Figure FIGREF11 We then form our training and test sets as follows i UniSentLexicon we use words in UniSent for the sentiment learning in the target domain for this purpose we use words INLINEFORM0 ii BaselineLexicon we use words in the gold standard lexicon for the sentiment learning in the target domain for this purpose we use words INLINEFORM0 iii EvaluationLexicon we randomly exclude a set of words the baselinelexicon INLINEFORM0 In selection of the sampling size we make sure that INLINEFORM1 and INLINEFORM2 would contain a comparable number of words In Table 
TABREF13 we compare the quality of UniSent with the BaselineLexicon as well as with the gold standard lexicon for general domain data The results show that i UniSent clearly outperforms the baseline for all languages ii the quality of UniSent is close to manually annotated data iii the domain adaptation method brings small improvements for morphologically poor languages The modest gains could be because our drift weighting method Section SECREF3 mainly models a sense shift between words which is not always equivalent to a polarity shift In Table TABREF14 we compare the quality of UniSent with the gold standard emoticon lexicon in the Twitter domain The results show that i UniSent clearly outperforms the baseline and ii our domain adaptation technique brings small improvements for French and Spanish Using our novel Adapted Sentiment Pivot method we created UniSent a sentiment lexicon covering over 1000 including many lowresource languages in several domains The only necessary resources to create UniSent are a sentiment lexicon in any language and a massively parallel corpus that can be small and domain specific Our evaluation showed that the quality of UniSent is closed to manually annotated resources </s>
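As an illustration of the projection step that seeds UniSent, the sketch below propagates polarity labels from a source-language lexicon to target-language words through word-alignment counts from a parallel corpus. This is a simplified stand-in for the Adapted Sentiment Pivot pipeline: it omits the embedding-based vocabulary expansion and domain adaptation steps, and all words, counts, and the averaging rule are illustrative assumptions rather than the paper's exact procedure.

```python
# Source-language (e.g. English) seed polarities: +1 positive, -1 negative.
source_lexicon = {"good": 1, "love": 1, "evil": -1, "sin": -1}

# Hypothetical alignment counts from a massively parallel corpus (e.g. the Bible):
# target_word -> {source_word: number of times they were aligned}.
alignment_counts = {
    "gut":   {"good": 41, "evil": 2},
    "liebe": {"love": 37},
    "böse":  {"evil": 29, "good": 1},
}

def project_polarity(alignments, lexicon):
    """Assign each target word the alignment-weighted average polarity
    of the source words it is aligned to."""
    projected = {}
    for target, counts in alignments.items():
        weighted, total = 0.0, 0
        for source, count in counts.items():
            if source in lexicon:
                weighted += lexicon[source] * count
                total += count
        if total:
            projected[target] = weighted / total
    return projected

print(project_polarity(alignment_counts, source_lexicon))
# e.g. {'gut': 0.91, 'liebe': 1.0, 'böse': -0.93}
```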
<s>[INST] Word Sense Disambiguation for 158 Languages using Word Embeddings Only [/INST] 11em 111em 1111em rurussian 1Skolkovo Institute of Science and Technology Moscow Russia vlogachevaskoltechru 2Ural Federal University Yekaterinburg Russia 3Universitt Hamburg Hamburg Germany 4Universitt Mannheim Mannheim Germany 5University of Oslo Oslo Norway 6Higher School of Economics Moscow Russia Disambiguation of word senses in context is easy for humans but is a major challenge for automatic approaches Sophisticated supervised and knowledgebased models were developed to solve this task However i the inherent Zipfian distribution of supervised training instances for a given word andor ii the quality of linguistic knowledge representations motivate the development of completely unsupervised and knowledgefree approaches to word sense disambiguation WSD They are particularly useful for underresourced languages which do not have any resources for building either supervised andor knowledgebased models In this paper we present a method that takes as input a standard pretrained word embedding model and induces a fullyfledged word sense inventory which can be used for disambiguation in context We use this method to induce a collection of sense inventories for 158 languages on the basis of the original pretrained fastText word embeddings by Grave18 enabling WSD in these languages Models and system are available online word sense induction word sense disambiguation word embeddings sense embeddings graph clustering There are many polysemous words in virtually any language If not treated as such they can hamper the performance of all semantic NLP tasks BIBREF0 Therefore the task of resolving the polysemy and choosing the most appropriate meaning of a word in context has been an important NLP task for a long time It is usually referred to as Word Sense Disambiguation WSD and aims at assigning meaning to a word in context The majority of approaches to WSD are based on the use of knowledge bases taxonomies and other external manually built resources BIBREF1 BIBREF2 However different senses of a polysemous word occur in very diverse contexts and can potentially be discriminated with their help The fact that semantically related words occur in similar contexts and diverse words do not share common contexts is known as distributional hypothesis and underlies the technique of constructing word embeddings from unlabelled texts The same intuition can be used to discriminate between different senses of individual words There exist methods of training word embeddings that can detect polysemous words and assign them different vectors depending on their contexts BIBREF3 BIBREF4 Unfortunately many widespread word embedding models such as GloVe BIBREF5 word2vec BIBREF6 fastText BIBREF7 do not handle polysemous words Words in these models are represented with single vectors which were constructed from diverse sets of contexts corresponding to different senses In such cases their disambiguation needs knowledgerich approaches We tackle this problem by suggesting a method of posthoc unsupervised WSD It does not require any external knowledge and can separate different senses of a polysemous word using only the information encoded in pretrained word embeddings We construct a semantic similarity graph for words and partition it into densely connected subgraphs This partition allows for separating different senses of polysemous words Thus the only language resource we need is a large unlabelled text corpus used to train 
embeddings This makes our method applicable to underresourced languages Moreover while other methods of unsupervised WSD need to train embeddings from scratch we perform retrofitting of sense vectors based on existing word embeddings We create a massively multilingual application for onthefly word sense disambiguation When receiving a text the system identifies its language and performs disambiguation of all the polysemous words in it based on preextracted word sense inventories The system works for 158 languages for which pretrained fastText embeddings available BIBREF8 The created inventories are based on these embeddings To the best of our knowledge our system is the only WSD system for the majority of the presented languages Although it does not match the state of the art for resourcerich languages it is fully unsupervised and can be used for virtually any language The contributions of our work are the following noitemsep We release word sense inventories associated with fastText embeddings for 158 languages We release a system that allows onthefly word sense disambiguation for 158 languages We present egvi EgoGraph Vector Induction a new algorithm of unsupervised word sense induction which creates sense inventories based on pretrained word vectors There are two main scenarios for WSD the supervised approach that leverages training corpora explicitly labelled for word sense and the knowledgebased approach that derives sense representation from lexical resources such as WordNet BIBREF9 In the supervised case WSD can be treated as a classification problem Knowledgebased approaches construct sense embeddings ie embeddings that separate various word senses SupWSD BIBREF10 is a stateoftheart system for supervised WSD It makes use of linear classifiers and a number of features such as POS tags surrounding words local collocations word embeddings and syntactic relations GlossBERT model BIBREF11 which is another implementation of supervised WSD achieves a significant improvement by leveraging gloss information This model benefits from sentencepair classification approach introduced by Devlin19 in their BERT contextualized embedding model The input to the model consists of a context a sentence which contains an ambiguous word and a gloss sense definition from WordNet The contextgloss pair is concatenated through a special token SEP and classified as positive or negative On the other hand sense embeddings are an alternative to traditional word vector models such as word2vec fastText or GloVe which represent monosemous words well but fail for ambiguous words Sense embeddings represent individual senses of polysemous words as separate vectors They can be linked to an explicit inventory BIBREF12 or induce a sense inventory from unlabelled data BIBREF13 LSTMEmbed BIBREF13 aims at learning sense embeddings linked to BabelNet BIBREF14 at the same time handling word ordering and using pretrained embeddings as an objective Although it was tested only on English the approach can be easily adapted to other languages present in BabelNet However manually labelled datasets as well as knowledge bases exist only for a small number of wellresourced languages Thus to disambiguate polysemous words in other languages one has to resort to fully unsupervised techniques The task of Word Sense Induction WSI can be seen as an unsupervised version of WSD WSI aims at clustering word senses and does not require to map each cluster to a predefined sense Instead of that word sense inventories are induced automatically from 
the clusters treating each cluster as a single sense of a word WSI approaches fall into three main groups context clustering word egonetwork clustering and synonyms or substitute clustering Context clustering approaches consist in creating vectors which characterise words contexts and clustering these vectors Here the definition of context may vary from windowbased context to latent topicalike context Afterwards the resulting clusters are either used as senses directly BIBREF15 or employed further to learn sense embeddings via Chinese Restaurant Process algorithm BIBREF16 AdaGram a Bayesian extension of the SkipGram model BIBREF17 AutoSense an extension of the LDA topic model BIBREF18 and other techniques Word egonetwork clustering is applied to semantic graphs The nodes of a semantic graph are words and edges between them denote semantic relatedness which is usually evaluated with cosine similarity of the corresponding embeddings BIBREF19 or by PMIlike measures BIBREF20 Word senses are induced via graph clustering algorithms such as Chinese Whispers BIBREF21 or MaxMax BIBREF22 The technique suggested in our work belongs to this class of methods and is an extension of the method presented by Pelevina16 Synonyms and substitute clustering approaches create vectors which represent synonyms or substitutes of polysemous words Such vectors are created using synonymy dictionaries BIBREF23 or contextdependent substitutes obtained from a language model BIBREF24 Analogously to previously described techniques word senses are induced by clustering these vectors The majority of word vector models do not discriminate between multiple senses of individual words However a polysemous word can be identified via manual analysis of its nearest neighboursthey reflect different senses of the word Table TABREF7 shows manually senselabelled most similar terms to the word Ruby according to the pretrained fastText model BIBREF8 As it was suggested early by Widdows02 the distributional properties of a word can be used to construct a graph of words that are semantically related to it and if a word is polysemous such graph can easily be partitioned into a number of densely connected subgraphs corresponding to different senses of this word Our algorithm is based on the same principle SenseGram is the method proposed by Pelevina16 that separates nearest neighbours to induce word senses and constructs sense embeddings for each sense It starts by constructing an egograph semantic graph centred at a particular word of the word and its nearest neighbours The edges between the words denote their semantic relatedness eg the two nodes are joined with an edge if cosine similarity of the corresponding embeddings is higher than a predefined threshold The resulting graph can be clustered into subgraphs which correspond to senses of the word The sense vectors are then constructed by averaging embeddings of words in each resulting cluster In order to use these sense vectors for word sense disambiguation in text the authors compute the probabilities of sense vectors of a word given its context or the similarity of the sense vectors to the context One of the downsides of the described above algorithm is noise in the generated graph namely unrelated words and wrong connections They hamper the separation of the graph Another weak point is the imbalance in the nearest neighbour list when a large part of it is attributed to the most frequent sense not sufficiently representing the other senses This can lead to construction of incorrect 
sense vectors.

We suggest a more advanced procedure of graph construction that uses the interpretability of vector addition and subtraction operations in word embedding space BIBREF6, whereas the previous algorithm relies only on the list of nearest neighbours in word embedding space. The key innovation of our algorithm is the use of vector subtraction to find pairs of most dissimilar graph nodes and to construct the graph only from the nodes included in such anti-edges. Thus, our algorithm is based on graph-based word sense induction, but it also relies on vector-based operations between word embeddings to filter graph nodes. Analogously to the work of Pelevina16, we construct a semantic relatedness graph from a list of nearest neighbours, but we filter this list using the following procedure.

1. Extract a list $\mathcal{N} = \{w_1, w_2, \dots, w_N\}$ of $N$ nearest neighbours for the target (ego) word vector $w$.

2. Compute a list $\Delta = \{\delta_1, \delta_2, \dots, \delta_N\}$, where $\delta_i = w - w_i$ for each $w_i \in \mathcal{N}$. The vectors in $\Delta$ contain the components of the sense of $w$ which are not related to the corresponding nearest neighbours from $\mathcal{N}$.

3. Compute a list $\overline{\mathcal{N}} = \{\overline{w_1}, \overline{w_2}, \dots, \overline{w_N}\}$ such that $\overline{w_i}$ is among the top nearest neighbours of $\delta_i$ in the embedding space. In other words, $\overline{w_i}$ is a word which is the most similar to the target (ego) word $w$ and the least similar to its neighbour $w_i$. We refer to $\overline{w_i}$ as an anti-pair of $w_i$. The set of $N$ nearest neighbours and their anti-pairs forms a set of anti-edges, i.e. pairs of most dissimilar nodes, those which should not be connected: $\overline{E} = \{(w_1, \overline{w_1}), (w_2, \overline{w_2}), \dots, (w_N, \overline{w_N})\}$. To clarify this, consider the target (ego) word $w =$ python, its top similar term $w_1 =$ Java, and the resulting anti-pair $\overline{w_1} =$ snake, which is the top related term of $\delta_1 = w - w_1$. Together they form the anti-edge $(w_1, \overline{w_1}) =$ (Java, snake), composed of a pair of semantically dissimilar terms.

4. Construct $V$, the set of vertices of our semantic graph $G = (V, E)$, from the list of anti-edges $\overline{E}$ with the following recurrent procedure: $V = V \cup \{w_i, \overline{w_i} : w_i \in \mathcal{N}, \overline{w_i} \in \mathcal{N}\}$, i.e. we add a word from the list of nearest neighbours and its anti-pair only if both of them are nearest neighbours of the original word $w$. We do not add $w$'s nearest neighbours if their anti-pairs do not belong to $\mathcal{N}$. Thus, we add only words which can help discriminate between different senses of $w$.

5. Construct the set of edges $E$ as follows. For each $w_i \in \mathcal{N}$, we extract a set of its $K$ nearest neighbours $\mathcal{N}^{\prime}_i = \{u_1, u_2, \dots, u_K\}$ and define $E = \{(w_i, u_j) : w_i \in V, u_j \in V, u_j \in \mathcal{N}^{\prime}_i, u_j \ne \overline{w_i}\}$. In other words, we remove the edge between a word $w_i$ and its nearest neighbour $u_j$ if $u_j$ is also its anti-pair. According to our hypothesis, $w_i$ and $\overline{w_i}$ belong to different senses of $w$, so they should not be connected, i.e. we never add anti-edges to $E$. Therefore, we consider any connection between them as noise and remove it.

Note that $N$, the number of nearest neighbours of the target word $w$, and $K$, the number of nearest neighbours of $w_i$, do not have to match. The difference between these parameters is the following: $N$ defines how many words will be considered for the construction of the ego-graph, whereas $K$ defines the degree of relatedness between words in the ego-graph. For example, if $K = 50$, we connect vertices $w$ and $u$ with an edge only if $u$ is in the list of the 50 nearest neighbours of $w$. Increasing $K$ increases the graph connectivity and leads to lower granularity of senses.

According to our hypothesis, the nearest neighbours of $w$ are grouped into clusters in the vector space, and each of these clusters corresponds to a sense of $w$. The vertex selection procedure described above picks the most representative members of these clusters, which are better at discriminating between the clusters. In addition, it helps to deal with cases when one of the clusters is over-represented in the nearest neighbour list: many elements of such a cluster are not added to $V$ because their anti-pairs fall outside the nearest neighbour list. This also improves the quality of clustering.

After the graph construction, the clustering is performed using the Chinese Whispers algorithm BIBREF21. This is a bottom-up clustering procedure that does not require the number of clusters to be predefined, so it can correctly process polysemous words with varying numbers of senses as well as unambiguous words.

Figure FIGREF17 shows an example of the resulting pruned graph for the word Ruby for $N = 50$ nearest neighbours in terms of fastText cosine similarity. In contrast to the baseline method by BIBREF19, where all 50 terms are clustered, the method presented in this section sparsifies the graph by removing 13 of these 50 neighbours that were not in the set of anti-edges (i.e. pairs of most dissimilar terms). Examples of anti-edges (pairs of most dissimilar terms) for this graph include (Haskell, Sapphire), (Garnet, Rails), (Opal, Rubyist), (Hazel, RubyOnRails), and (Coffeescript, Opal).

We label each word cluster representing a sense to make the clusters and the WSD results interpretable by humans. Prior systems used hypernyms to label the clusters BIBREF25, BIBREF26, e.g. animal in "python (animal)". However, neither hypernyms nor rules for their automatic extraction are available for all 158 languages. Therefore, we use a simpler method to select a keyword which helps to interpret each cluster. For each graph node $v \in V$, we count the number of anti-edges it belongs to: $\mathrm{count}(v) = |\{(w_i, \overline{w_i}) : (w_i, \overline{w_i}) \in \overline{E} \wedge (v = w_i \vee v = \overline{w_i})\}|$. A graph clustering yields a partition of $V$ into $n$ clusters, $V = \{V_1, V_2, \dots, V_n\}$. For each cluster $V_i$, we define a keyword $w^{key}_i$ as the word with the largest number of anti-edges $\mathrm{count}(\cdot)$ among the words in this cluster.

We use the keywords defined above to obtain vector representations of senses. In particular, we simply use the word embedding of the keyword $w^{key}_i$ as a sense representation $\mathbf{s}_i$ of the target word $w$, avoiding explicit computation of sense embeddings as in BIBREF19. Given a sentence $\{w_1, w_2, \dots, w_j, w, w_{j+1}, \dots, w_n\}$ represented as a matrix of word vectors, we define the context of the target word $w$ as $\mathbf{c}_w = \frac{\sum_{j=1}^{n} w_j}{n}$. Then we define the most appropriate sense $\hat{s}$ as the sense whose vector has the highest cosine similarity to the embedding of the word's context.

We release a system for on-the-fly WSD for 158 languages. Given textual input, it identifies polysemous words and retrieves the senses that are most appropriate in the context. To build word sense inventories (sense vectors) for 158 languages, we utilised GPU-accelerated routines for the search of similar vectors implemented in the Faiss library BIBREF27. The search of nearest neighbours takes substantial time, so acceleration with GPUs helps to significantly reduce the sense construction time. To further speed up the process, we keep all intermediate results in memory, which results in substantial RAM consumption of up to 200 Gb. The construction of word senses for all of the 158 languages
takes a lot of computational resources and imposes high requirements to the hardware For calculations we use in parallel 1020 nodes of the Zhores cluster BIBREF28 empowered with Nvidia Tesla V100 graphic cards For each of the languages we construct inventories based on 50 100 and 200 neighbours for 100000 most frequent words The vocabulary was limited in order to make the computation time feasible The construction of inventories for one language takes up to 10 hours with 65 hours on average Building the inventories for all languages took more than 1000 hours of GPUaccelerated computations We release the constructed sense inventories for all the available languages They contain all the necessary information for using them in the proposed WSD system or in other downstream tasks The first text preprocessing step is language identification for which we use the fastText language identification models by Bojanowski17 Then the input is tokenised For languages which use Latin Cyrillic Hebrew or Greek scripts we employ the Europarl tokeniser For Chinese we use the Stanford Word Segmenter BIBREF29 For Japanese we use Mecab BIBREF30 We tokenise Vietnamese with UETsegmenter BIBREF31 All other languages are processed with the ICU tokeniser as implemented in the PyICU project After the tokenisation the system analyses all the input words with preextracted sense inventories and defines the most appropriate sense for polysemous words Figure FIGREF19 shows the interface of the system It has a textual input form The automatically identified language of text is shown above A click on any of the words displays a prompt shown in black with the most appropriate sense of a word in the specified context and the confidence score In the given example the word Jaguar is correctly identified as a car brand This system is based on the system by Ustalov18 extending it with a backend for multiple languages language detection and sense browsing capabilities We first evaluate our converted embedding models on multilanguage lexical similarity and relatedness tasks as a sanity check to make sure the word sense induction process did not hurt the general performance of the embeddings Then we test the sense embeddings on WSD task We use the SemR11 datasets BIBREF32 which contain word pairs with manually assigned similarity scores from 0 words are not related to 10 words are fully interchangeable for 12 languages English en Arabic ar German de Spanish es Farsi fa French fr Italian it Dutch nl Portuguese pt Russian ru Swedish sv Chinese zh The task is to assign relatedness scores to these pairs so that the ranking of the pairs by this score is close to the ranking defined by the oracle score The performance is measured with Pearson correlation of the rankings Since one word can have several different senses in our setup we follow Remus18 and define the relatedness score for a pair of words as the maximum cosine similarity between any of their sense vectors We extract the sense inventories from fastText embedding vectors We set NK for all our experiments ie the number of vertices in the graph and the maximum number of vertices nearest neighbours match We conduct experiments with NK set to 50 100 and 200 For each cluster Vi we create a sense vector si by averaging vectors that belong to this cluster We rely on the methodology of BIBREF33 shifting the generated sense vector to the direction of the original word vector silambda w 1lambda dfrac1nsum uin Vi cosw ucdot u where lambda in 0 1 w is the embedding of the original word cosw 
u is the cosine similarity between w and u and nVi By introducing the linear combination of w and uin Vi we enforce the similarity of sense vectors to the original word important for this task In addition to that we weight u by their similarity to the original word so that more similar neighbours contribute more to the sense vector The shifting parameter lambda is set to 05 following Remus18 A fastText model is able to generate a vector for each word even if it is not represented in the vocabulary due to the use of subword information However our system cannot assemble sense vectors for outofvocabulary words for such words it returns their original fastText vector Still the coverage of the benchmark datasets by our vocabulary is at least 85 and approaches 100 for some languages so we do not have to resort to this backoff strategy very often We use the original fastText vectors as a baseline In this case we compute the relatedness scores of the two words as a cosine similarity of their vectors We compute the relatedness scores for all benchmark datasets using our sense vectors and compare them to cosine similarity scores of original fastText vectors The results vary for different languages Figure FIGREF28 shows the change in Pearson correlation score when switching from the baseline fastText embeddings to our sense vectors The new vectors significantly improve the relatedness detection for German Farsi Russian and Chinese whereas for Italian Dutch and Swedish the score slightly falls behind the baseline For other languages the performance of sense vectors is on par with regular fastText The purpose of our sense vectors is disambiguation of polysemous words Therefore we test the inventories constructed with egvi on the Task 13 of SemEval2013 Word Sense Induction BIBREF34 The task is to identify the different senses of a target word in context in a fully unsupervised manner The dataset consists of a set of polysemous words 20 nouns 20 verbs and 10 adjectives and specifies 20 to 100 contexts per word with the total of 4664 contexts drawn from the Open American National Corpus Given a set of contexts of a polysemous word the participants of the competition had to divide them into clusters by sense of the word The contexts are manually labelled with WordNet senses of the target words the gold standard clustering is generated from this labelling The task allows two setups graded WSI where participants can submit multiple senses per word and provide the probability of each sense in a particular context and nongraded WSI where a model determines a single sense for a word in context In our experiments we performed nongraded WSI We considered the most suitable sense as the one with the highest cosine similarity with embeddings of the context as described in Section SECREF9 The performance of WSI models is measured with three metrics that require mapping of sense inventories Jaccard Index Kendalls tau and WNDCG and two cluster comparison metrics Fuzzy NMI and Fuzzy BCubed We compare our model with the models that participated in the task the baseline egograph clustering model by Pelevina16 and AdaGram BIBREF17 a method that learns sense embeddings based on a Bayesian extension of the Skipgram model Besides that we provide the scores of the simple baselines originally used in the task assigning one sense to all words assigning the most frequent sense to all words and considering each context as expressing a different sense The evaluation of our model was performed using the open source contexteval tool 
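To make the sense-vector construction and the context-similarity rule concrete, the following minimal numpy sketch implements the shifted sense vector $s_i = \lambda w + (1-\lambda)\frac{1}{n}\sum_{u \in V_i}\cos(w,u)\cdot u$ with $\lambda = 0.5$ and the non-graded WSI decision described just above. The cluster memberships and vector values are toy data invented for the example, not our released inventories.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def sense_vector(word_vec, cluster_vecs, lam=0.5):
    """s_i = lam * w + (1 - lam) * (1/n) * sum_u cos(w, u) * u, with lam = 0.5."""
    weighted = sum(cosine(word_vec, u) * u for u in cluster_vecs) / len(cluster_vecs)
    return lam * word_vec + (1 - lam) * weighted

def disambiguate(sense_vecs, context_vecs):
    """Non-graded WSI: the context is the mean of the context word vectors,
    and the predicted sense maximises cosine similarity to it."""
    context = np.mean(context_vecs, axis=0)
    return int(np.argmax([cosine(s, context) for s in sense_vecs]))

# Toy 3-dimensional example with two induced senses of one word.
w = np.array([1.0, 0.5, 0.0])
senses = [
    sense_vector(w, [np.array([0.9, 0.4, 0.1]), np.array([1.0, 0.6, 0.0])]),  # sense 0
    sense_vector(w, [np.array([0.0, 0.2, 1.0])]),                             # sense 1
]
context = [np.array([0.1, 0.3, 0.9]), np.array([0.0, 0.1, 1.0])]
print(disambiguate(senses, context))  # picks the sense closest to the context (1 here)
```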
Table TABREF31 shows the performance of these models on the SemEval dataset Due to space constraints we only report the scores of the bestperforming SemEval participants please refer to jurgensklapaftis2013semeval for the full results The performance of AdaGram and SenseGram models is reported according to Pelevina16 The table shows that the performance of egvi is similar to stateoftheart word sense disambiguation and word sense induction models In particular we can see that it outperforms SenseGram on the majority of metrics We should note that this comparison is not fully rigorous because SenseGram induces sense inventories from word2vec as opposed to fastText vectors used in our work In order to see how the separation of word contexts that we perform corresponds to actual senses of polysemous words we visualise egographs produced by our method Figure FIGREF17 shows the nearest neighbours clustering for the word Ruby which divides the graph into five senses Rubyrelated programming tools eg RubyOnRails orange cluster female names eg Josie magenta cluster gems eg Sapphire yellow cluster programming languages in general eg Haskell red cluster Besides this is typical for fastText embeddings featuring substring similarity one can observe a cluster of different spelling of the word Ruby in green Analogously the word python see Figure FIGREF35 is divided into the senses of animals eg crocodile yellow cluster programming languages eg perl5 magenta cluster and conference eg pycon red cluster In addition we show a qualitative analysis of senses of mouse and apple Table TABREF38 shows nearest neighbours of the original words separated into clusters labels for clusters were assigned manually These inventories demonstrate clear separation of different senses although it can be too finegrained For example the first and the second cluster for mouse both refer to computer mouse but the first one addresses the different types of computer mice and the second one is used in the context of mouse actions Similarly we see that iphone and macbook are separated into two clusters Interestingly fastText handles typos codeswitching and emojis by correctly associating all nonstandard variants to the word they refer and our method is able to cluster them appropriately Both inventories were produced with K200 which ensures stronger connectivity of graph However we see that this setting still produces too many clusters We computed the average numbers of clusters produced by our model with K200 for words from the word relatedness datasets and compared these numbers with the number of senses in WordNet for English and RuWordNet BIBREF35 for Russian see Table TABREF37 We can see that the number of senses extracted by our method is consistently higher than the real number of senses We also compute the average number of senses per word for all the languages and different values of K see Figure FIGREF36 The average across languages does not change much as we increase K However for larger K the average exceed the median value indicating that more languages have lower number of senses per word At the same time while at smaller K the maximum average number of senses per word does not exceed 6 larger values of K produce outliers eg English with 125 senses Notably there are no languages with an average number of senses less than 2 while numbers on English and Russian WordNets are considerably lower This confirms that our method systematically overgenerates senses The presence of outliers shows that this effect cannot be 
eliminated by further increasing K because the ith nearest neighbour of a word for i200 can be only remotely related to this word even if the word is rare Thus our sense clustering algorithm needs a method of merging spurious senses We present egvi a new algorithm for word sense induction based on graph clustering that is fully unsupervised and relies on graph operations between word vectors We apply this algorithm to a large collection of pretrained fastText word embeddings releasing sense inventories for 158 languages These inventories contain all the necessary information for constructing sense vectors and using them in downstream tasks The sense vectors for polysemous words can be directly retrofitted with the pretrained word embeddings and do not need any external resources As one application of these multilingual sense inventories we present a multilingual word sense disambiguation system that performs unsupervised and knowledgefree WSD for 158 languages without the use of any dictionary or senselabelled corpus The evaluation of quality of the produced sense inventories is performed on multilingual word similarity benchmarks showing that our sense vectors improve the scores compared to nondisambiguated word embeddings Therefore our system in its present state can improve WSD and downstream tasks for languages where knowledge bases taxonomies and annotated corpora are not available and supervised WSD models cannot be trained A promising direction for future work is combining distributional information from the induced sense inventories with lexical knowledge bases to improve WSD performance Besides we encourage the use of the produced word sense inventories in other downstream tasks We acknowledge the support of the Deutsche Forschungsgemeinschaft DFG foundation under the JOINT 2 and ACQuA projects Ekaterina Artemova was supported by the framework of the HSE University Basic Research Program and Russian Academic Excellence Project 5100 </s>
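For readers who want to reproduce the anti-edge construction at the heart of egvi, the sketch below extracts anti-pairs from a pre-trained fastText model. Loading the vectors with gensim, the model path, $N$, and the target word are our own placeholder choices; the full algorithm additionally builds the ego-graph from these anti-edges and clusters it with Chinese Whispers, which is omitted here.

```python
from gensim.models import KeyedVectors

# Placeholder path: any fastText .vec file in word2vec text format works here.
vectors = KeyedVectors.load_word2vec_format("cc.en.300.vec", binary=False)

def anti_pairs(target, n_neighbours=50):
    """For each nearest neighbour w_i of the target word, compute
    delta_i = w - w_i and return its nearest word as the anti-pair of w_i."""
    w = vectors[target]
    neighbours = [word for word, _ in vectors.most_similar(target, topn=n_neighbours)]
    pairs = []
    for w_i in neighbours:
        delta = w - vectors[w_i]
        # Nearest word to delta: similar to the target, dissimilar to w_i.
        anti, _ = vectors.similar_by_vector(delta, topn=1)[0]
        pairs.append((w_i, anti))
    return pairs

# Anti-edges for "python": a programming-language neighbour will tend to pair
# with an animal-sense word, and vice versa.
for w_i, anti in anti_pairs("python", n_neighbours=10):
    print(w_i, "<->", anti)
```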
<s>[INST] Spoken Language Identification using ConvNets [/INST] Language Identification LI is a problem which involves classifying the language being spoken by a speaker LI systems can be used in call centers to route international calls to an operator who is fluent in that identified language BIBREF0 In speechbased assistants LI acts as the first step which chooses the corresponding grammar from a list of available languages for its further semantic analysis BIBREF1 It can also be used in multilingual voicecontrolled information retrieval systems for example Apple Siri and Amazon Alexa Over the years studies have utilized many prosodic and acoustic features to construct machine learning models for LI systems BIBREF2 Every language is composed of phonemes which are distinct unit of sounds in that language such as b of black and g of green Several prosodic and acoustic features are based on phonemes which become the underlying features on whom the performance of the statistical model depends BIBREF3 BIBREF4 If two languages have many overlapping phonemes then identifying them becomes a challenging task for a classifier For example the word cat in English kat in Dutch katze in German have different consonants but when used in a speech they all would sound quite similar Due to such drawbacks several studies have switched over to using Deep Neural Networks DNNs to harness their novel autoextraction techniques BIBREF1 BIBREF5 This work follows an implicit approach for identifying six languages with overlapping phonemes on the VoxForge BIBREF6 dataset and achieves 954 overall accuracy In previous studies BIBREF1 BIBREF7 BIBREF5 authors use logMel spectrum of a raw audio as inputs to their models One of our contributions is to enhance the performance of this approach by utilising recent techniques like Mixup augmentation of inputs and exploring the effectiveness of Attention mechanism in enhancing performance of neural network As logMel spectrum needs to be computed for each raw audio input and processing time for generating logMel spectrum increases linearly with length of audio this acts as a bottleneck for these models Hence we propose the use of raw audio waveforms as inputs to deep neural network which boosts performance by avoiding additional overhead of computing logMel spectrum for each audio Our 1DConvNet architecture autoextracts and classifies features from this raw audio input The structure of the work is as follows In Section 2 we discuss about the previous related studies in this field The model architecture for both the raw waveforms and logMel spectrogram images is discussed in Section 3 along with the a discussion on hyperparameter space exploration In Section 4 we present the experimental results Finally in Section 5 we discuss the conclusions drawn from the experiment and future work Extraction of language dependent features like prosody and phonemes was a popular approach to classify spoken languages BIBREF8 BIBREF9 BIBREF10 Following their success in speaker verification systems ivectors have also been used as features in various classification networks These approaches required significant domain knowledge BIBREF11 BIBREF9 Nowadays most of the attempts on spoken language identification rely on neural networks for meaningful feature extraction and classification BIBREF12 BIBREF13 Revay et al BIBREF5 used the ResNet50 BIBREF14 architecture for classifying languages by generating the logMel spectra of each raw audio The model uses a cyclic learning rate where learning rate 
increases and then decreases linearly Maximum learning rate for a cycle is set by finding the optimal learning rate using fastai BIBREF15 library The model classified six languages English French Spanish Russian Italian and German and achieving an accuracy of 890 Gazeau et al BIBREF16 in his research showed how Neural Networks Support Vector Machine and Hidden Markov Model HMM can be used to identify French English Spanish and German Dataset was prepared using voice samples from Youtube News BIBREF17and VoxForge BIBREF6 datasets Hidden Markov models convert speech into a sequence of vectors was used to capture temporal features in speech HMMs trained on VoxForge BIBREF6 dataset performed best in comparison to other models proposed by him on same VoxForge dataset They reported an accuracy of 700 Bartz et al BIBREF1 proposed two different hybrid Convolutional Recurrent Neural Networks for language identification They proposed a new architecture for extracting spatial features from logMel spectra of raw audio using CNNs and then using RNNs for capturing temporal features to identify the language This model achieved an accuracy of 910 on Youtube News Dataset BIBREF17 In their second architecture they used the Inceptionv3 BIBREF18 architecture to extract spatial features which were then used as input for bidirectional LSTMs to predict the language accurately This model achieved an accuracy of 960 on four languages which were English German French and Spanish They also trained their CNN model obtained after removing RNN from CRNN model and the Inceptionv3 on their dataset However they were not able to achieve better results achieving and reported 90 and 95 accuracies respectively Kumar et al BIBREF0 used Melfrequency cepstral coefficients MFCC Perceptual linear prediction coefficients PLP Bark Frequency Cepstral Coefficients BFCC and Revised Perceptual Linear Prediction Coefficients RPLP as features for language identification BFCC and RPLP are hybrid features derived using MFCC and PLP They used two different models based on Vector Quantization VQ with Dynamic Time Warping DTW and Gaussian Mixture Model GMM for classification These classification models were trained with different features The authors were able to show that these models worked better with hybrid features BFCC and RPLP as compared to conventional features MFCC and PLP GMM combined with RPLP features gave the most promising results and achieved an accuracy of 888 on ten languages They designed their own dataset comprising of ten languages being Dutch English French German Italian Russian Spanish Hindi Telegu and Bengali Montavon BIBREF7 generated Mel spectrogram as features for a timedelay neural network TDNN This network had twodimensional convolutional layers for feature extraction An elaborate analysis of how deep architectures outperform their shallow counterparts is presented in this reseacrch The difficulties in classifying perceptually similar languages like German and English were also put forward in this work It is mentioned that the proposed approach is less robust to new speakers present in the test dataset This method was able to achieve an accuracy of 912 on dataset comprising of 3 languages English French and German In Table TABREF1 we summarize the quantitative results of the above previous studies It includes the model basis feature description languages classified and the used dataset along with accuracy obtained The table also lists the overall results of our proposed models at the top The languages used by 
various authors along with their acronyms are English En Spanish Es French Fr German De Russian Ru Italian It Bengali Ben Hindi Hi and Telegu Tel Several stateoftheart results on various audio classification tasks have been obtained by using logMel spectrograms of raw audio as features BIBREF19 Convolutional Neural Networks have demonstrated an excellent performance gain in classification of these features BIBREF20 BIBREF21 against other machine learning techniques It has been shown that using attention layers with ConvNets further enhanced their performance BIBREF22 This motivated us to develop a CNNbased architecture with attention since this approach hasnt been applied to the task of language identification before Recently using raw audio waveform as features to neural networks has become a popular approach in audio classification BIBREF23 BIBREF22 Raw waveforms have several artifacts which are not effectively captured by various conventional feature extraction techniques like Mel Frequency Cepstral Coefficients MFCC Constant Q Transform CQT Fast Fourier Transform FFT etc Audio files are a sequence of spoken words hence they have temporal features tooA CNN is better at capturing spatial features only and RNNs are better at capturing temporal features as demonstrated by Bartz et al BIBREF1 using audio files Therefore we combined both of these to make a CRNN model We propose three types of models to tackle the problem with different approaches discussed as follows As an average humans voice is around 300 Hz and according to NyquistShannon sampling theorem all the useful frequencies 0300 Hz are preserved with sampling at 8 kHz therefore we sampled raw audio files from all six languages at 8 kHz The average length of audio files in this dataset was about 104 seconds and standard deviation was 23 seconds For our experiments the audio length was set to 10 seconds If the audio files were shorter than 10 second then the data was repeated and concatenated If audio files were longer then the data was truncated We applied the following design principles to all our models Every convolutional layer is always followed by an appropriate max pooling layer This helps in containing the explosion of parameters and keeps the model small and nimble Convolutional blocks are defined as an individual block with multiple pairs of one convolutional layer and one max pooling layer Each convolutional block is preceded or succeded by a convolutional layer Batch Normalization and Rectified linear unit activations were applied after each convolutional layer Batch Normalization helps speed up convergence during training of a neural network Model ends with a dense layer which acts the final output layer As the sampling rate is 8 kHz and audio length is 10 s hence the input is raw audio to the models with input size of batch size 1 80000 In Table TABREF10 we present a detailed layerbylayer illustration of the model along with its hyperparameter 10pt Tuning hyperparameters is a cumbersome process as the hyperparamter space expands exponentially with the number of parameters therefore efficient exploration is needed for any feasible study We used the random search algorithm supported by Hyperopt BIBREF24 library to randomly search for an optimal set of hyperparameters from a given parameter space In Fig FIGREF12 various hyperparameters we considered are plotted against the validation accuracy as violin plots Our observations for each hyperparameter are summarized below Number of filters in first layer We observe that 
having 128 filters gives better results as compared to other filter values of 32 and 64 in the first layer A higher number of filters in the first layer of network is able to preserve most of the characteristics of input Kernel Size We varied the receptive fields of convolutional layers by choosing the kernel size from among the set of 3 5 7 9 We observe that a kernel size of 9 gives better accuracy at the cost of increased computation time and larger number of parameters A large kernel size is able to capture longer patterns in its input due to bigger receptive power which results in an improved accuracy Dropout Dropout randomly turnsoff sets to 0 various individual nodes during training of the network In a deep CNN it is important that nodes do not develop a codependency amongst each other during training in order to prevent overfitting on training data BIBREF25 Dropout rate of 01 works well for our model When using a higher dropout rate the network is not able to capture the patterns in training dataset Batch Size We chose batch sizes from amongst the set 32 64 128 There is more noise while calculating error in a smaller batch size as compared to a larger one This tends to have a regularizing effect during training of the network and hence gives better results Thus batch size of 32 works best for the model Layers in Convolutional block 1 and 2 We varied the number of layers in both the convolutional blocks If the number of layers is low then the network does not have enough depth to capture patterns in the data whereas having large number of layers leads to overfitting on the data In our network two layers in the first block and one layer in the second block give optimal results LogMel spectrogram is the most commonly used method for converting audio into the image domain The audio data was again sampled at 8 kHz The input to this model was the logMel spectra We generated logMel spectrogram using the LibROSA BIBREF26 library In Table TABREF16 we present a detailed layerbylayer illustration of the model along with its hyperparameter We took some specific design choices for this model which are as follows We added residual connections with each convolutional layer Residual connections in a way makes the model selective of the contributing layers determines the optimal number of layers required for training and solves the problem of vanishing gradients Residual connections or skip connections skip training of those layers that do not contribute much in the overall outcome of model We added spatial attention BIBREF27 networks to help the model in focusing on specific regions or areas in an image Spatial attention aids learning irrespective of transformations scaling and rotation done on the input images making the model more robust and helping it to achieve better results We added Channel Attention networks so as to help the model to find interdependencies among color channels of logMel spectra It adaptively assigns importance to each color channel in a deep convolutional multichannel network In our model we apply channel and spatial attention just before feeding the input into bidirectional GRU This helps the model to focus on selected regions and at the same time find patterns among channels to better determine the language We used the random search algorithm supported by Hyperopt BIBREF24 library to randomly search for an optimal set of hyperparameters from a given parameter space In Fig FIGREF19 various hyperparameters we tuned are plotted against the validation accuracy Our 
observations for each hyperparameter are summarized below Filter Size 64 filters in the first layer of network can preserve most of the characteristics of input but increasing it to 128 is inefficient as overfitting occurs Kernel Size There is a tradeoff between kernel size and capturing complex nonlinear features Using a small kernel size will require more layers to capture features whereas using a large kernel size will require less layers Large kernels capture simple nonlinear features whereas using a smaller kernel will help us capture more complex nonlinear features However with more layers backpropagation necessitates the need for a large memory We experimented with large kernel size and gradually increased the layers in order to capture more complex features The results are not conclusive and thus we chose kernel size of 7 against 3 Dropout Dropout rate of 01 works well for our data When using a higher dropout rate the network is not able to capture the patterns in training dataset Batch Size There is always a tradeoff between batch size and getting accurate gradients Using a large batch size helps the model to get more accurate gradients since the model tries to optimize gradients over a large set of images We found that using a batch size of 128 helped the model to train faster and get better results than using a batch size less than 128 Number of hidden units in bidirectional GRU Varying the number of hidden units and layers in GRU helps the model to capture temporal features which can play a significant role in identifying the language correctly The optimal number of hidden units and layers depends on the complexity of the dataset Using less number of hidden units may capture less features whereas using large number of hidden units may be computationally expensive In our case we found that using 1536 hidden units in a single bidirectional GRU layer leads to the best result Image Size We experimented with logMel spectra images of sizes 64 times 64 and 128 times 128 pixels and found that our model worked best with images of size of 128 times 128 pixels We also evaluated our model on data with mixup augmentation BIBREF28 It is a data augmentation technique that also acts as a regularization technique and prevents overfitting Instead of directly taking images from the training dataset as input mixup takes a linear combination of any two random images and feeds it as input The following equations were used to prepared a mixedup dataset and where alpha in 0 1 is a random variable from a beta distribution I1 This model is a similar model to 2DConvNet with Attention and bidirectional GRU described in section SECREF13 except that it lacks skip connections attention layers bidirectional GRU and the embedding layer incorporated in the previous model We classified six languages English French German Spanish Russian and Italian from the VoxForge BIBREF6 dataset VoxForge is an opensource speech corpus which primarily consists of samples recorded and submitted by users using their own microphone This results in significant variation of speech quality between samples making it more representative of real world scenarios Our dataset consists of 1500 samples for each of six languages Out of 1500 samples for each language 1200 were randomly selected as training dataset for that language and rest 300 as validation dataset using kfold crossvalidation To sum up we trained our model on 7200 samples and validated it on 1800 samples comprising six languages The results are discussed in next section This 
paper discusses two endtoend approaches which achieve stateoftheart results in both the image as well as audio domain on the VoxForge dataset BIBREF6 In Table TABREF25 we present all the classification accuracies of the two models of the cases with and without mixup for six and four languages In the audio domain using raw audio waveform as input 1DConvNet achieved a mean accuracy of 937 with a standard deviation of 03 on running kfold cross validation In Fig FIGREF27 a we present the confusion matrix for the 1DConvNet model In the image domain obtained by taking logMel spectra of raw audio 2DConvNet with 2D attention channel and spatial attention and bidirectional GRU achieved a mean accuracy of 950 with a standard deviation of 12 for six languages This model performed better when mixup regularization was applied 2DConvNet achieved a mean accuracy of 954 with standard deviation of 06 on running kfold cross validation for six languages when mixup was applied In Fig FIGREF27 b we present the confusion matrix for the 2DConvNet model 2D attention models focused on the important features extracted by convolutional layers and bidirectional GRU captured the temporal features Several of the spoken languages in Europe belong to the IndoEuropean family Within this family the languages are divided into three phyla which are Romance Germanic and Slavic Of the 6 languages that we selected Spanish Es French Fr and Italian It belong to the Romance phyla English and German belong to Germanic phyla and Russian in Slavic phyla Our model also confuses between languages belonging to the similar phyla which acts as an insanity check since languages in same phyla have many similar pronounced words such as cat in English becomes Katze in German and Ciao in Italian becomes Chao in Spanish Our model confuses between French Fr and Russian Ru while these languages belong to different phyla many words from French were adopted into Russian such as automate ootoomate in French becomes ABTOMaT afftaamaat in Russian which have similar pronunciation The performance of raw audio waveforms as input features to ConvNet can be further improved by applying silence removal in the audio Also there is scope for improvement by augmenting available data through various conventional techniques like pitch shifting adding random noise and changing speed of audio These help in making neural networks more robust to variations which might be present in real world scenarios There can be further exploration of various feature extraction techniques like ConstantQ transform and Fast Fourier Transform and assessment of their impact on Language Identification There can be further improvements in neural network architectures like concatenating the high level features obtained from 1DConvNet and 2DConvNet before performing classification There can be experiments using deeper networks with skip connections and Inception modules These are known to have positively impacted the performance of Convolutional Neural Networks There are two main contributions of this paper in the domain of spoken language identification Firstly we presented an extensive analysis of raw audio waveforms as input features to 1DConvNet We experimented with various hyperparameters in our 1DConvNet and evaluated their effect on validation accuracy This method is able to bypass the computational overhead of conventional approaches which depend on generation of spectrograms as a necessary preprocesing step We were able to achieve an accauracy of 937 using this technique Next we 
discussed the enhancement in performance of 2DConvNet using mixup augmentation which is a recently developed technique to prevent overfitting on test data This approach achieved an accuracy of 954 We also analysed how the attention mechanism and recurrent layers impact the performance of the networks this approach achieved an accuracy of 950 without mixup augmentation </s>
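Editor's note: the mixup equations referenced in the language identification paper above did not survive extraction, so the following is a minimal NumPy sketch of the standard mixup formulation (BIBREF28) as the paper describes it: each training input is replaced by a linear combination of two random log-Mel spectrogram images, with the mixing coefficient drawn from a beta distribution. The function name and the alpha value of 0.2 are illustrative assumptions, not values taken from the paper; the 128 x 128 image size follows the paper.

import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=np.random.default_rng(0)):
    # x: float array (batch, 128, 128), log-Mel spectrogram images
    # y: float array (batch, n_classes), one-hot language labels
    # lambda is drawn per sample from Beta(alpha, alpha), so it lies in [0, 1]
    lam = rng.beta(alpha, alpha, size=x.shape[0])
    perm = rng.permutation(x.shape[0])            # random partner for every sample
    lam_x = lam.reshape(-1, 1, 1)                 # broadcast over the image dimensions
    lam_y = lam.reshape(-1, 1)                    # broadcast over the label dimension
    x_mix = lam_x * x + (1.0 - lam_x) * x[perm]
    y_mix = lam_y * y + (1.0 - lam_y) * y[perm]
    return x_mix, y_mix

Training on the mixed pairs rather than the raw batch is what gives the regularizing effect reported above: the network never sees an unperturbed training spectrogram with a hard label, which discourages memorization.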
<s>[INST] Unsupervised Bilingual Lexicon Induction from Mono-lingual Multimodal Data [/INST] The bilingual lexicon induction task aims to automatically build word translation dictionaries across different languages which is beneficial for various natural language processing tasks such as crosslingual information retrieval BIBREF0 multilingual sentiment analysis BIBREF1 machine translation BIBREF2 and so on Although building bilingual lexicon has achieved success with parallel sentences in resourcerich languages BIBREF2 the parallel data is insufficient or even unavailable especially for resourcescarce languages and it is expensive to collect On the contrary there are abundant multimodal monolingual data on the Internet such as images and their associated tags and descriptions which motivates researchers to induce bilingual lexicon from these nonparallel data without supervision There are mainly two types of monolingual approaches to build bilingual dictionaries in recent works The first is purely textbased which explores the structure similarity between different linguistic space The most popular approach among them is to linearly map source word embedding into the target word embedding space BIBREF3 BIBREF4 The second type utilizes vision as bridge to connect different languages BIBREF5 BIBREF6 BIBREF7 It assumes that words correlating to similar images should share similar semantic meanings So previous visionbased methods search images with multilingual words and translate words according to similarities of visual features extracted from the corresponding images It has been proved that the visualgrounded word representation improves the semantic quality of the words BIBREF8 However previous visionbased methods suffer from two limitations for bilingual lexicon induction Firstly the accurate translation performance is confined to concrete visualrelevant words such as nouns and adjectives as shown in Figure SECREF2 For words without highquality visual groundings previous methods would generate poor translations BIBREF7 Secondly previous works extract visual features from the whole image to represent words and thus require objectcentered images in order to obtain reliable visual groundings However common images usually contain multiple objects or scenes and the word might only be grounded to part of the image therefore the global visual features will be quite noisy to represent the word In this paper we address the two limitations via learning from monolingual multimodal data with both sentence and visual context eg image and caption data to induce bilingual lexicon Such multimodal data is also easily obtained for different languages on the Internet BIBREF9 We propose a multilingual image caption model trained on multiple monolingual image caption data which is able to induce two types of word representations for different languages in the joint space The first is the linguistic feature learned from the sentence context with visual semantic constraints so that it is able to generate more accurate translations for words that are less visualrelevant The second is the localized visual feature which attends to the local region of the object or scene in the image for the corresponding word so that the visual representation of words will be more salient than previous global visual features The two representations are complementary and can be combined to induce better bilingual word translation We carry out experiments on multiple language pairs including GermanEnglish FrenchEnglish and 
JapaneseEnglish The experimental results show that the proposed multilingual caption model not only achieves better caption performance than independent monolingual models for datascarce languages but also can induce the two types of features linguistic and visual features for different languages in joint spaces Our proposed method consistently outperforms previous stateoftheart visionbased bilingual word induction approaches on different languages The contributions of this paper are as follows The early works for bilingual lexicon induction require parallel data in different languages BIBREF2 systematically investigates various word alignment methods with parallel texts to induce bilingual lexicon However the parallel data is scarce or even unavailable for lowresource languages Therefore methods with less dependency on the availability of parallel corpora are highly desired There are mainly two types of monolingual approaches for bilingual lexicon induction textbased and visionbased methods The textbased methods purely exploit the linguistic information to translate words The initiative works BIBREF10 BIBREF11 utilize word cooccurrences in different languages as clue for word alignment With the improvement in word representation based on deep learning BIBREF3 finds the structure similarity of the deeplearned word embeddings in different languages and employs a parallel vocabulary to learn a linear mapping from the source to target word embeddings BIBREF12 improves the translation performance via adding an orthogonality constraint to the mapping BIBREF13 further introduces a matching mechanism to induce bilingual lexicon with fewer seeds However these models require seed lexicon as the startpoint to train the bilingual mapping Recently BIBREF4 proposes an adversarial learning approach to learn the joint bilingual embedding space without any seed lexicon The visionbased methods exploit images to connect different languages which assume that words corresponding to similar images are semantically alike BIBREF5 collects images with labeled words in different languages to learn word translation with image as pivot BIBREF6 improves the visualbased word translation performance via using more powerful visual representations the CNNbased BIBREF14 features The above works mainly focus on the translation of nouns and are limited in the number of collected languages The recent work BIBREF7 constructs the current largest with respect to the number of language pairs and types of partofspeech multimodal word translation dataset MMID They show that concrete words are easiest for visionbased translation methods while others are much less accurate In our work we alleviate the limitations of previous visionbased methods via exploring images and their captions rather than images with unstructured tags to connect different languages Image captioning has received more and more research attentions Most image caption works focus on the English caption generation BIBREF15 BIBREF16 while there are limited works considering generating multilingual captions The recent WMT workshop BIBREF17 has proposed a subtask of multilingual caption generation where different strategies such as multitask captioning and sourcetotarget translation followed by captioning have been proposed to generate captions in target languages Our work proposes a multilingual image caption model that shares part of the parameters across different languages in order to benefit each other Our goal is to induce bilingual lexicon without supervision 
of parallel sentences or seed word pairs purely based on the monolingual image caption data In the following we introduce the multilingual image caption model whose objectives for bilingual lexicon induction are two folds 1 explicitly build multilingual word embeddings in the joint linguistic space 2 implicitly extract the localized visual features for each word in the shared visual space The former encodes linguistic information of words while the latter encodes the visualgrounded information which are complementary for bilingual lexicon induction Suppose we have monolingual image caption datasets INLINEFORM0 in the source language and INLINEFORM1 in the target language The images INLINEFORM2 in INLINEFORM3 and INLINEFORM4 do not necessarily overlap but cover overlapped object or scene classes which is the basic assumption of visionbased methods For notation simplicity we omit the superscript INLINEFORM5 for the data sample Each image caption INLINEFORM6 and INLINEFORM7 is composed of word sequences INLINEFORM8 and INLINEFORM9 respectively where INLINEFORM10 is the sentence length The proposed multilingual image caption model aims to generate sentences in different languages to describe the image content which connects the vision and multilingual sentences Figure FIGREF15 illustrates the framework of the caption model which consists of three parts the image encoder word embedding module and language decoder The image encoder encodes the image into the shared visual space We apply the Resnet152 BIBREF18 as our encoder INLINEFORM0 which produces INLINEFORM1 vectors corresponding to different spatial locations in the image DISPLAYFORM0 where INLINEFORM0 The parameter INLINEFORM1 of the encoder is shared for different languages in order to encode all the images in the same visual space The word embedding module maps the onehot word representation in each language into lowdimensional distributional embeddings DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 is the word embedding matrix for the source and target languages respectively INLINEFORM2 and INLINEFORM3 are the vocabulary size of the two languages The decoder then generates word step by step conditioning on the encoded image feature and previous generated words The probability of generating INLINEFORM0 in the source language is as follows DISPLAYFORM0 where INLINEFORM0 is the hidden state of the decoder at step INLINEFORM1 which is functioned by LSTM BIBREF19 DISPLAYFORM0 The INLINEFORM0 is the dynamically located contextual image feature to generate word INLINEFORM1 via attention mechanism which is the weighted sum of INLINEFORM2 computed by DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 is a fully connected neural network The parameter INLINEFORM1 in the decoder includes all the weights in the LSTM and the attention network INLINEFORM2 Similarly INLINEFORM0 is the probability of generating INLINEFORM1 in the target language which shares INLINEFORM2 with the source language By sharing the same parameters across different languages in the encoder and decoder both the visual features and the learned word embeddings for different languages are enforced to project in a joint semantic space To be noted the proposed multilingual parameter sharing strategy is not constrained to the presented image captioning model but can be applied in various image captioning models such as showtell model BIBREF15 and so on We use maximum likelihood as objective function to train the multilingual caption model which maximizes the logprobability of the 
groundtruth captions DISPLAYFORM0 The proposed multilingual caption model can induce similarities of words in different languages from two aspects the linguistic similarity and the visual similarity In the following we discuss the two types of similarity and then construct the source and target word representations The linguistic similarity is reflected from the learned word embeddings INLINEFORM0 and INLINEFORM1 in the multilingual caption model As shown in previous works BIBREF20 word embeddings learned from the language contexts can capture syntactic and semantic regularities in the language However if the word embeddings of different languages are trained independently they are not in the same linguistic space and we cannot compute similarities directly In our multilingual caption model since images in INLINEFORM2 and INLINEFORM3 share the same visual space the features of sentence INLINEFORM4 and INLINEFORM5 belonging to similar images are bound to be close in the same space with the visual constraints Meanwhile the language decoder is also shared which enforces the word embeddings across languages into the same semantic space in order to generate similar sentence features Therefore INLINEFORM6 and INLINEFORM7 not only encode the linguistic information of different languages but also share the embedding space which enables direct crosslingual similarity comparison We refer the linguistic features of source and target words INLINEFORM8 and INLINEFORM9 as INLINEFORM10 and INLINEFORM11 respectively For the visual similarity the multilingual caption model locates the image region to generate each word base on the spatial attention in Eq EQREF13 which can be used to calculate the localized visual representation of the word However since the attention is computed before word generation the localization performance can be less accurate It also cannot be generalized to image captioning models without spatial attention Therefore inspired by BIBREF21 where they occlude over regions of the image to observe the change of classification probabilities we feed different parts of the image to the caption model and investigate the probability changes for each word in the sentence Algorithm SECREF16 presents the procedure of word localization and the grounded visual feature generation Please note that such visualgrounding is learned unsupervisedly from the image caption data Therefore every word can be represented as a set of grounded visual features the set size equals to the word occurrence number in the dataset We refer the localized visual feature set for source word INLINEFORM0 as INLINEFORM1 for target word INLINEFORM2 as INLINEFORM3 Generating localized visual features Encoded image features INLINEFORM0 sentence INLINEFORM1 Localized visual features for each word INLINEFORM2 each INLINEFORM3 compute INLINEFORM4 according to Eq EQREF10 INLINEFORM5 INLINEFORM6 INLINEFORM7 Since the word representations of the source and target language are in the same space we could directly compute the similarities across languages We apply l2normalization on the word representations and measure with the cosine similarity For linguistic features the similarity is measured as DISPLAYFORM0 However there are a set of visual features associated with one word so the visual similarity measurement between two words is required to take two sets of visual features as input We aggregate the visual features in a single representation and then compute cosine similarity instead of pointwise similarities among two sets 
DISPLAYFORM0 The reasons for performing aggregation are two folds Firstly the number of visual features is proportional to the word occurrence in our approach instead of fixed numbers as in BIBREF6 BIBREF7 So the computation cost for frequent words are much higher Secondly the aggregation helps to reduce noise which is especially important for abstract words The abstract words such as event are more visually diverse but the overall styles of multiple images can reflect its visual semantics Due to the complementary characteristics of the two features we combine them to predict the word translation The translated word for INLINEFORM0 is DISPLAYFORM0 For image captioning we utilize the multi30k BIBREF22 COCO BIBREF23 and STAIR BIBREF24 datasets The multi30k dataset contains 30k images and annotations under two tasks In task 1 each image is annotated with one English description which is then translated into German and French In task 2 the image is independently annotated with 5 descriptions in English and German respectively For German and English languages we utilize annotations in task 2 For the French language we can only employ French descriptions in task 1 so the training size for French is less than the other two languages The COCO and STAIR datasets contain the same image set but are independently annotated in English and Japanese Since the images in the wild for different languages might not overlap we randomly split the image set into two disjoint parts of equal size The images in each part only contain the monolingual captions We use Moses SMT Toolkit to tokenize sentences and select words occurring more than five times in our vocabulary for each language Table TABREF21 summarizes the statistics of caption datasets For bilingual lexicon induction we use two visual datasets BERGSMA and MMID The BERGSMA dataset BIBREF5 consists of 500 GermanEnglish word translation pairs Each word is associated with no more than 20 images The words in BERGSMA dataset are all nouns The MMID dataset BIBREF7 covers a larger variety of words and languages including 9808 GermanEnglish pairs and 9887 FrenchEnglish pairs The source word can be mapped to multiple target words in their dictionary Each word is associated with no more than 100 retrieved images Since both these image datasets do not contain Japanese language we download the JapanesetoEnglish dictionary online We select words in each dataset that overlap with our caption vocabulary which results in 230 GermanEnglish pairs in BERGSMA dataset 1311 GermanEnglish pairs and 1217 FrenchEnglish pairs in MMID dataset and 2408 JapaneseEnglish pairs For the multilingual caption model we set the word embedding size and the hidden size of LSTM as 512 Adam algorithm is applied to optimize the model with learning rate of 00001 and batch size of 128 The caption model is trained up to 100 epochs and the best model is selected according to caption performance on the validation set We compare our approach with two baseline visionbased methods proposed in BIBREF6 BIBREF7 which measure the similarity of two sets of global visual features for bilingual lexicon induction CNNmean taking the similarity score of the averaged feature of the two image sets CNNavgmax taking the average of the maximum similarity scores of two image sets We evaluate the word translation performance using MRR meanreciprocal rank as follows DISPLAYFORM0 where INLINEFORM0 is the groundtruth translated words for source word INLINEFORM1 and INLINEFORM2 denotes the rank of groundtruth word 
INLINEFORM3 in the rank list of translation candidates We also measure the precision at K PK score which is the proportion of source words whose groundtruth translations rank in the top K words We set K as 1 5 10 and 20 We first evaluate the captioning performance of the proposed multilingual caption model which serves as the foundation stone for our bilingual lexicon induction method We compare the proposed multilingual caption model with the monolingual model which consists of the same model structure but is trained separately for each language Table TABREF22 presents the captioning results on the multi30k dataset where all the languages are from the Latin family The multilingual caption model achieves comparable performance with monolingual model for data sufficient languages such as English and German and significantly outperforms the monolingual model for the datascarce language French with absolute 322 gains on the CIDEr metric For languages with distinctive grammar structures such as English and Japanese the multilingual model is also on par with the monolingual model as shown in Table TABREF29 To be noted the multilingual model contains about twice less of parameters than the independent monolingual models which is more computation efficient We visualize the learned visual groundings from the multilingual caption model in Figure FIGREF32 Though there is certain mistakes such as musicians in the bottom image most of the words are grounded well with correct objects or scenes and thus can obtain more salient visual features We induce the linguistic features and localized visual features from the multilingual caption model for word translation from the source to target languages Table TABREF30 presents the GermantoEnglish word translation performance of the proposed features In the BERGSMA dataset the visual features achieve better translation results than the linguistic features while they are inferior to the linguistic features in the MMID dataset This is because the vocabulary in BERGSMA dataset mainly consists of nouns but the partsofspeech is more diverse in the MMID dataset The visual features contribute most to translate concrete noun words while the linguistic features are beneficial to other abstract words The fusion of the two features performs best for word translation which demonstrates that the two features are complementary with each other We also compare our approach with previous stateoftheart visionbased methods in Table TABREF30 Since our visual feature is the averaged representation it is fair to compare with the CNNmean baseline method where the only difference lies in the feature rather than similarity measurement The localized features perform substantially better than the global image features which demonstrate the effectiveness of the attention learned from the caption model The combination of visual and linguistic features also significantly improves the stateoftheart visualbased CNNavgmax method with 116 and 67 absolute gains on P1 on the BERGSMA and MMID dataset respectively In Figure FIGREF36 we present the word translation performance for different POS partofspeech labels We assign the POS label for words in different languages according to their translations in English We can see that the previous stateoftheart visionbased approach contributes mostly to noun words which are most visualrelevant while generates poor translations for other partofspeech words Our approach however substantially improves the translation performance for all partofspeech classes 
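Editor's note: the MRR equation above is only an extraction placeholder, so as a concrete reading of the metric description: for every source word take the rank of its best-ranked ground-truth translation, average the reciprocal ranks, and also count how often a correct translation appears in the top K. The sketch below is a minimal implementation under that reading; treating a source word whose gold translations never appear in the candidate list as contributing a reciprocal rank of 0 is an assumption.

import numpy as np

def mrr_and_precision_at_k(ranked_candidates, gold_sets, ks=(1, 5, 10, 20)):
    # ranked_candidates: list of lists, target words sorted by decreasing score
    # gold_sets: list of sets, ground-truth translations of each source word
    reciprocal_ranks = []
    hits = {k: 0 for k in ks}
    for cands, gold in zip(ranked_candidates, gold_sets):
        rank = next((i + 1 for i, w in enumerate(cands) if w in gold), None)
        reciprocal_ranks.append(1.0 / rank if rank is not None else 0.0)
        for k in ks:
            hits[k] += int(rank is not None and rank <= k)
    n = len(gold_sets)
    return float(np.mean(reciprocal_ranks)), {k: hits[k] / n for k in ks}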
For concrete words such as nouns and adjectives the localized visual features produce better representation than previous global visual features and for other partofspeech words the linguistic features which are learned with sentence context are effective to complement the visual features The fusion of the linguistic and localized visual features in our approach leads to significant performance improvement over the stateoftheart baseline method for all types of POS classes Some correct and incorrect translation examples for different POS classes are shown in Table TABREF34 The visualrelevant concrete words are easier to translate such as phone and red But our approach still generates reasonable results for abstract words such as area and functional words such as for due to the fusion of visual and sentence contexts We also evaluate the influence of different image captioning structures on the bilingual lexicon induction We compare our attention model attn with the vanilla showtell model mp BIBREF15 which applies mean pooling over spatial image features to generate captions and achieves inferior caption performance to the attention model Table TABREF35 shows the word translation performance of the two caption models The attention model with better caption performance also induces better linguistic and localized visual features for bilingual lexicon induction Nevertheless the showtell model still outperforms the previous visionbased methods in Table TABREF30 Beside GermantoEnglish word translation we expand our approach to other languages including French and Japanese which is more distant from English The FrenchtoEnglish word translation performance is presented in Table TABREF39 To be noted the training data of the French captions is five times less than German captions which makes FrenchtoEnglish word translation performance less competitive with GermantoEnglish But similarly the fusion of linguistic and visual features achieves the best performance which has boosted the baseline methods with 42 relative gains on the MRR metric and 174 relative improvements on the P20 metric Table TABREF40 shows the JapanesetoEnglish word translation performance Since the language structures of Japanese and English are quite different the linguistic features learned from the multilingual caption model are less effective but still can benefit the visual features to improve the translation quality The results on multiple diverse language pairs further demonstrate the generalization of our approach for different languages In this paper we address the problem of bilingual lexicon induction without reliance on parallel corpora Based on the experience that we humans can understand words better when they are within the context and can learn word translations with external world eg images as pivot we propose a new visionbased approach to induce bilingual lexicon with images and their associated sentences We build a multilingual caption model from multiple monolingual multimodal data to map words in different languages into joint spaces Two types of word representation linguistic features and localized visual features are induced from the caption model The two types of features are complementary for word translation Experimental results on multiple language pairs demonstrate the effectiveness of our proposed method which leads to significant performance improvement over the stateoftheart visionbased approaches for all types of partofspeech In the future we will further expand the visionpivot approaches for 
zeroresource machine translation without parallel sentences This work was supported by National Natural Science Foundation of China under Grant No 61772535 National Key Research and Development Plan under Grant No 2016YFB1001202 and Research Foundation of Beijing Municipal Science Technology Commission under Grant No Z181100008918002 </s>
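Editor's note: to make the translation step of the caption-based method above concrete, the following sketch scores every target word against one source word using the two representations the paper induces: cosine similarity of the l2-normalized linguistic embeddings, and cosine similarity of the mean-aggregated localized visual feature sets. How the two similarities are fused is hidden behind an equation placeholder in this extract, so the unweighted sum used here is an assumption; the mean aggregation of each word's visual feature set is stated in the paper.

import numpy as np

def l2norm(m):
    # normalize vectors along the last axis
    return m / np.linalg.norm(m, axis=-1, keepdims=True)

def translation_scores(src_ling, tgt_ling, src_vis_set, tgt_vis_sets):
    # src_ling: (d,) linguistic embedding of the source word
    # tgt_ling: (V, d) linguistic embeddings of all target words
    # src_vis_set: (m, k) localized visual features collected for the source word
    # tgt_vis_sets: list of (m_i, k) arrays, one feature set per target word
    sim_ling = l2norm(tgt_ling) @ l2norm(src_ling)                      # cosine, linguistic
    src_vis = l2norm(src_vis_set.mean(axis=0))                          # aggregate, then normalize
    tgt_vis = l2norm(np.stack([s.mean(axis=0) for s in tgt_vis_sets]))
    sim_vis = tgt_vis @ src_vis                                         # cosine, visual
    return sim_ling + sim_vis                                           # assumed fusion: plain sum

def translate(src_ling, tgt_ling, src_vis_set, tgt_vis_sets, tgt_vocab):
    # pick the target word with the highest fused similarity
    scores = translation_scores(src_ling, tgt_ling, src_vis_set, tgt_vis_sets)
    return tgt_vocab[int(np.argmax(scores))]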
<s>[INST] AraNet: A Deep Learning Toolkit for Arabic Social Media [/INST] The proliferation of social media has made it possible to study large online communities at scale thus making important discoveries that can facilitate decision making guide policies improve health and wellbeing aid disaster response etc The wide host of languages languages varieties and dialects used on social media and the nuanced differences between users of various backgrounds eg different age groups gender identities make it especially difficult to derive sufficiently valuable insights based on single prediction tasks For these reasons it would be desirable to offer NLP tools that can help stitch together a complete picture of an event across different geographical regions as impacting and being impacted by individuals of different identities We offer AraNet as one such tool for Arabic social media processing For Arabic a collection of languages and varieties spoken by a wide population of sim 400 million native speakers covering a vast geographical region shown in Figure FIGREF2 no such suite of tools currently exists Many works have focused on sentiment analysis eg BIBREF0 BIBREF1 BIBREF2 BIBREF3 BIBREF4 BIBREF5 BIBREF6 BIBREF7 and dialect identification BIBREF8 BIBREF9 BIBREF10 BIBREF11 BIBREF12 BIBREF13 However there is generally rarity of resources on other tasks such as gender and age detection This motivates our toolkit which we hope can meet the current critical need for studying Arabic communities online This is especially valuable given the waves of protests uprisings and revolutions that have been sweeping the region during the last decade Although we create new models for tasks such as sentiment analysis and gender detection as part of AraNet our focus is more on putting together the toolkit itself and providing strong baselines that can be compared to Hence although we provide some baseline models for some of the tasks we do not explicitly compare to previous research since most existing works either exploit smaller data and so it will not be a fair comparison use methods predating BERT and so will likely be outperformed by our models For many of the tasks we model there have not been standard benchmarks for comparisons across models This makes it difficult to measure progress and identify areas worthy of allocating efforts and budgets As such by publishing our toolkit models we believe modelbased comparisons will be one way to relieve this bottleneck For these reasons we also package models from our recent works on dialect BIBREF12 and irony BIBREF14 as part of AraNet The rest of the paper is organized as follows In Section SECREF2 we describe our methods In Section SECREF3 we describe or refer to published literature for the dataset we exploit for each task and provide results our corresponding model acquires Section SECREF4 is about AraNet design and use and we overview related works in Section SECREF5 We conclude in Section SECREF6 Supervised BERT Across all our tasks we use Bidirectional Encoder Representations from Transformers BERT BERT BIBREF15 dispenses with recurrence and convolution It is based on a multilayer bidirectional Transformer encoder BIBREF16 with multihead attention It uses masked language models to enable pretrained deep bidirectional representations in addition to a binary next sentence prediction task The pretrained BERT can be easily finetuned on a wide host of sentencelevel and tokenlevel tasks All our models are trained in a fully supervised fashion with dialect id being 
the only task where we leverage semisupervised learning We briefly outline our semisupervised methods next SelfTraining Only for the dialect id task we investigate augmenting our humanlabeled training data with automaticallypredicted data from selftraining Selftraining is a wrapper method for semisupervised learning BIBREF17 BIBREF18 where a classifier is initially trained on a usually small set of labeled samples textbf textit Dl then is used to classify an unlabeled sample set textbf textit Du Most confident predictions acquired by the original supervised model are added to the labeled set and the model is iteratively retrained We perform selftraining using different confidence thresholds and choose different percentages from predicted data to add to our train We only report best settings here and the reader is referred to our winning system on the MADAR shared task for more details on these different settings BIBREF12 Implementation Models Parameters For all our tasks we use the BERTBase Multilingual Cased model released by the authors The model is trained on 104 languages including Arabic with 12 layer 768 hidden units each 12 attention heads and has 110M parameters in entire model The model has 119547 shared WordPieces vocabulary and was pretrained on the entire Wikipedia for each language For finetuning we use a maximum sequence size of 50 tokens and a batch size of 32 We set the learning rate to 2e5 and train for 15 epochs and choose the best model based on performance on a development set We use the same hyperparameters in all of our BERT models We finetune BERT on each respective labeled dataset for each task For BERT input we apply WordPiece tokenization setting the maximal sequence length to 50 wordsWordPieces For all tasks we use a TensorFlow implementation An exception is the sentiment analysis task where we used a PyTorch implementation with the same hyperparameters but with a learning rate 2e6 Preprocessing Most of our training data in all tasks come from Twitter Exceptions are in some of the datasets we use for sentiment analysis which we point out in Section SECREF23 Our preprocessing thus incorporates methods to clean tweets other datasets eg from the news domain being much less noisy For preprocessing we remove all usernames URLs and diacritics in the data ArabTweet For modeling age and gender we use ArapTweet BIBREF19 which we will refer to as ArabTweet Arabtweet is a tweet dataset of 11 Arabic regions from 17 different countries For each region data from 100 Twitter users were crawled Users needed to have posted at least 2000 and were selected based on an initial list of seed words characteristic of each region The seed list included words such as barsha many for Tunisian Arabic and wayed many for Gulf Arabic BIBREF19 employed human annotators to verify that users do belong to each respective region Annotators also assigned gender labels from the set male female and age group labels from the set under25 25to34 above35 at the userlevel which in turn is assigned at tweet level Tweets with less than 3 words and retweets were removed Refer to BIBREF19 for details about how annotation was carried out We provide a description of the data in Table TABREF10 Table TABREF10 also provides class breakdown across our splitsWe note that BIBREF19 do not report classification models exploiting the data We shuffle the Arabtweet dataset and split it into 80 training TRAIN 10 development DEV and 10 test TEST The distribution of classes in our splits is in Table TABREF10 For preprocessing 
we reduce 2 or more consecutive repetitions of the same character into only 2 and remove diacritics With this dataset we train a small unidirectional GRU smallGRU with a single 500units hidden layer and dropout05 as a baseline SmallGRU is trained with the TRAIN set batch size 8 and up to 30 words of each sequence Each word in the input sequence is represented as a trainable 300dimension vector We use the top 100K words which are weighted by mutual information as our vocabulary in the embedding layer We evaluate the model on TEST set Table TABREF14 show smallGRU obtain3629 XX acc on age classification and 5337 acc on gender detection We also report the accuracy of finetuned BERT models on TEST set in Table TABREF14 We can find that BERT models significantly perform better than our baseline on the two tasks It improve with 1513 for age and 1193 acc for gender over the smallGRU UBC Twitter Gender Dataset We also develop an inhouse Twitter dataset for gender We manually labeled 1989 users from each of the 21 Arab countries The data had 1246 male 528 female and 215 unknown users We remove the unknown category and balance the dataset to have 528 from each of the two male and female categories We ended with 69509 tweets for male and 67511 tweets for female We split the users into 80 TRAIN set 110750 tweets for 845 users 10 DEV set 14158 tweets for 106 users and 10 TEST set 12112 tweets for 105 users We then model this dataset with BERTBase Multilingual Cased model and evaluate on development and test sets Table TABREF15 shows that finetuned model obtains 6242 acc on DEV and 6054 acc on TEST We also combine the Arabtweet gender dataset with our UBCTwitter dataset for gender on training development and test respectively to obtain new TRAIN DEV and TEST We finetune the BERTBase Multilingual Cased model with the combined TRAIN and evaluate on combined DEV and TEST As Table TABREF15 shows the model obtains 6532 acc on combined DEV set and 6532 acc on combined TEST set This is the model we package in AraNet The dialect identification model in AraNet is based on our winning system in the MADAR shared task 2 BIBREF20 as described in BIBREF12 The corpus is divided into train dev and test and the organizers masked test set labels We lost some tweets from training data when we crawled using tweet ids ultimately acquiring 2036 TRAINA 281 DEV and 466 TEST We also make use of the task 1 corpus 95000 sentences BIBREF21 More specifically we concatenate the task 1 data to the training data of task 2 to create TRAINB Again note that TEST labels were only released to participants after the official task evaluation Table TABREF17 shows statistics of the data We used tweets from 21 Arab countries as distributed by task organizers except that we lost some tweets when we crawled using tweet ids We had 2036 TRAINA 281 DEV and 466 TEST For our experiments we also make use of the task 1 corpus 95000 sentences BIBREF21 More specifically we concatenate the task 1 data to the training data of task 2 to create TRAINB Note that both DEV and TEST across our experiments are exclusively the data released in task 2 as described above TEST labels were only released to participants after the official task evaluation Table TABREF17 shows statistics of the data More information about the data is in BIBREF21 We use TRAINA to perform supervised modeling with BERT and TRAINB for self training under various conditions We refer the reader to BIBREF12 for more information about our different experimental settings on dialect id We acquire 
our best results with selftraining with a classification accuracy of 4939 and F1 score at 3544 This is the winning system model in the MADAR shared task and we showed in BIBREF12 that our tweetlevel predictions can be ported to userlevel prediction On userlevel detection our models perform superbly with 7740 acc and 7170 F1 score on unseen MADAR blind test data We make use of two datasets the LAMADINA dataset from BIBREF22 a Twitter dataset with a combination of gold labels from BIBREF23 and distant supervision labels The tweets are labeled with the Plutchik 8 primary emotions from the set anger anticipation disgust fear joy sadness surprise trust The distant supervision approach depends on use of seed phrases with the Arabic first person pronoun Eng I a seed word expressing an emotion eg Eng happy The manually labeled part of the data comprises tweets carrying the seed phrases verified by human annotators 9064 tweets for inclusion of the respective emotion The rest of the dataset is only labeled using distant supervision LAMADIST 182605 tweets For more information about the dataset readers are referred to BIBREF22 The data distribution over the emotion classes is in Table TABREF20 We combine LAMADINA and LAMADIST training set and refer to this new training set as LAMAD2 189903 tweets We finetune BERTBased Multilingual Cased on the LAMAD2 and evaluate the model with same DEV and TEST sets from LAMADINA On DEV set the finetuned BERT model obtains 6143 on accuracy and 5883 on F1 score On TEST set we acquire 6238 acc and 6032 F1 score We use the dataset for irony identification on Arabic tweets released by IDATFIRE2019 sharedtask BIBREF24 The shared task dataset contains 5030 tweets related to different political issues and events in the Middle East taking place between 2011 and 2018 Tweets are collected using predefined keywords ie targeted political figures or events and the positive class involves ironic hashtags such as sokhria tahakoum and maskhara Arabic variants for irony Duplicates retweets and nonintelligible tweets are removed by organizers Tweets involve both MSA as well as dialects at various degrees of granularity such as Egyptian Gulf and Levantine IDATFIRE2019 BIBREF24 is set up as a binary classification task where tweets are assigned labels from the set ironic nonironic A total of 4024 tweets were released by organizers as training data In addition 1006 tweets were used by organizers as test data Test labels were not release and teams were expected to submit the predictions produced by their systems on the test split For our models we split the 4024 released training data into 90 TRAIN n3621 tweets ironic1882 and nonironic1739 and 10 DEV n403 tweets ironic209 and nonironic194 We use the same smallGRU architecture of Section 31 as our baselines We finetune BERTBased Multilingual Cased model on our TRAIN and evaluate on DEV The smallGRU obtain 7370 accuracy and 7347 F1 score BERT model significantly outperformance than smallGRU which achieve 8164 accuracy and 8162 F1 score We collect 15 datasets related to sentiment analysis of Arabic including MSA and dialects BIBREF25 BIBREF26 BIBREF27 BIBREF1 BIBREF28 BIBREF29 BIBREF30 BIBREF31 BIBREF32 BIBREF33 BIBREF34 Table TABREF28 shows all the corpora we use These datasets involve different types of sentiment analysis tasks such as binary classification ie negative or positive 3way classification ie negative neutral or positive and subjective language detection To combine these datasets for binary sentiment classification we normalize 
different types of label to binary labels in the set lbrace positiveprime negativeprime rbrace by following rules Positive Pos or HighPos to positive Negative Neg or HighNeg to negative Exclude samples which label is not positive or negative such as obj mixed neut or neutral After label normalization we obtain 126766 samples We split this datase into 80 training TRAIN 10 development DEV and 10 test TEST The distribution of classes in our splits is presented in Table TABREF27 We finetune pretrained BERT on the TRAIN set using PyTorch implementation with 2e6 learning rate and 15 epochs as explained in Section SECREF2 Our best model on the DEV set obtains 8024 acc and 8024 F1 We evaluate this best model on TEST set and obtain 7731 acc and 7667 F1 AraNet consists of identifier tools including age gender dialect emotion irony and sentiment Each tool comes with an embedded model The tool comes with modules for performing normalization and tokenization AraNet can be used as a Python library or a commandline tool Python Library Importing AraNet module as a Python library provides identifiers functions Prediction is based on a text or a path to a file and returns the identified class label It also returns the probability distribution over all available class labels if needed Figure FIGREF34 shows two examples of using the tool as Python library Commandline tool AraNet provides scripts supporting both commandline and interactive mode Commandline mode accepts a text or file path Interaction mode is good for quick interactive linebyline experiments and also pipeline redirections AraNet is available through pip or from source on GitHub with detailed documentation As we pointed out earlier there has been several works on some of the tasks but less on others By far Arabic sentiment analysis has been the most popular task Several works have been performed for MSA BIBREF35 BIBREF0 and dialectal BIBREF1 BIBREF2 BIBREF3 BIBREF4 BIBREF5 BIBREF6 BIBREF7 sentiment analysis A number of works have also been published for dialect detection including BIBREF9 BIBREF10 BIBREF8 BIBREF11 Some works have been performed on the tasks of age detection BIBREF19 BIBREF36 gender detection BIBREF19 BIBREF36 irony identification BIBREF37 BIBREF24 and emotion analysis BIBREF38 BIBREF22 A number of tools exist for Arabic natural language processingincluding Penn Arabic treebank BIBREF39 POS tagger BIBREF40 BIBREF41 Buckwalter Morphological Analyzer BIBREF42 and Mazajak BIBREF7 for sentiment analysis We presented AraNet a deep learning toolkit for a host of Arabic social media processing AraNet predicts age dialect gender emotion irony and sentiment from social media posts It delivers stateoftheart and competitive performance on these tasks and has the advantage of using a unified simple framework based on the recentlydeveloped BERT model AraNet has the potential to alleviate issues related to comparing across different Arabic social media NLP tasks by providing one way to test new models against AraNet predictions Our toolkit can be used to make important discoveries on the wide region of the Arab world and can enhance our understating of Arab online communication AraNet will be publicly available upon acceptance </s>
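Editor's note: a small sketch of the sentiment label normalization rules listed above, which collapse the heterogeneous labels of the fifteen corpora to a binary scheme before fine-tuning BERT. The exact string forms of the source labels and the lower-casing step are assumptions, since the individual corpora are not shown in this extract.

POSITIVE = {"positive", "pos", "highpos"}
NEGATIVE = {"negative", "neg", "highneg"}

def normalize_label(raw_label):
    # map a corpus-specific sentiment label to "positive", "negative", or None
    label = raw_label.strip().lower()
    if label in POSITIVE:
        return "positive"
    if label in NEGATIVE:
        return "negative"
    return None          # e.g. "obj", "mixed", "neut", "neutral" are excluded

def normalize_dataset(samples):
    # samples: iterable of (text, raw_label) pairs; keeps only binary-labeled samples
    kept = []
    for text, raw in samples:
        label = normalize_label(raw)
        if label is not None:
            kept.append((text, label))
    return kept

After this filtering the combined sentiment corpus described above contains 126766 samples, which are then split 80/10/10 into train, development, and test sets.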
<s>[INST] Generative Adversarial Nets for Multiple Text Corpora [/INST] Generative adversarial nets GAN Goodfellow et al 2014 belong to a class of generative models which are trainable and can generate artificial data examples similar to the existing ones In a GAN model there are two submodels simultaneously trained a generative model INLINEFORM0 from which artificial data examples can be sampled and a discriminative model INLINEFORM1 which classifies real data examples and artificial ones from INLINEFORM2 By training INLINEFORM3 to maximize its generation power and training INLINEFORM4 to minimize the generation power of INLINEFORM5 so that ideally there will be no difference between the true and artificial examples a minimax problem can be established The GAN model has been shown to closely replicate a number of image data sets such as MNIST Toronto Face Database TFD CIFAR10 SVHN and ImageNet Goodfellow et al 2014 Salimans et al 2016 The GAN model has been extended to text data in a number of ways For instance Zhang et al 2016 applied a longshort term memory Hochreiter and Schmidhuber 1997 generator and approximated discretization to generate text data Moreover Li et al 2017 applied the GAN model to generate dialogues ie pairs of questions and answers Meanwhile the GAN model can also be applied to generate bagofwords embeddings of text data which focus more on key terms in a text document rather than the original document itself Glover 2016 provided such a model with the energybased GAN Zhao et al 2017 To the best of our knowledge there has been no literature on applying the GAN model to multiple corpora of text data Multiclass GANs Liu and Tuzel 2016 Mirza and Osindero 2014 have been proposed but a class in multiclass classification is not the same as multiple corpora Because knowing the underlying corpus membership of each text document can provide better information on how the text documents are organized and documents from the same corpus are expected to share similar topics or key words considering the membership information can benefit the training of a text model from a supervised perspective We consider two problems associated with training multicorpus text data 1 Given a separate set of word embeddings from each corpus such as the word2vec embeddings Mikolov et al 2013 how to obtain a better set of crosscorpus word embeddings from them 2 How to incorporate the generation of document embeddings from different corpora in a single GAN model For the first problem we train a GAN model which discriminates documents represented by different word embeddings and train the crosscorpus word embedding so that it is similar to each existing word embedding per corpus For the second problem we train a GAN model which considers both crosscorpus and percorpus topics in the generator and applies a discriminator which considers each original and artificial document corpus We also show that with sufficient training the distribution of the artificial document embeddings is equivalent to the original ones Our work has the following contributions 1 we extend GANs to multiple corpora of text data 2 we provide applications of GANs to finetune word embeddings and to create robust document embeddings and 3 we establish theoretical convergence results of the multiclass GAN model Section 2 reviews existing GAN models related to this paper Section 3 describes the GAN models on training crosscorpus word embeddings and generating document embeddings for each corpora and explains the associated algorithms 
Section 4 presents the results of the two models on text data sets and transfers them to supervised learning Section 5 summarizes the results and concludes the paper In a GAN model we assume that the data examples INLINEFORM0 are drawn from a distribution INLINEFORM1 and the artificial data examples INLINEFORM2 are transformed from the noise distribution INLINEFORM3 The binary classifier INLINEFORM4 outputs the probability of a data example or an artificial one being an original one We consider the following minimax problem DISPLAYFORM0 With sufficient training it is shown in Goodfellow et al 2014 that the distribution of artificial data examples INLINEFORM0 is eventually equivalent to the data distribution INLINEFORM1 ie INLINEFORM2 Because the probabilistic structure of a GAN can be unstable to train the Wasserstein GAN Arjovsky et al 2017 is proposed which applies a 1Lipschitz function as a discriminator In a Wasserstein GAN we consider the following minimax problem DISPLAYFORM0 These GANs are for the general purpose of learning the data distribution in an unsupervised way and creating perturbed data examples resembling the original ones We note that in many circumstances data sets are obtained with supervised labels or categories which can add explanatory power to unsupervised models such as the GAN We summarize such GANs because a corpus can be potentially treated as a class The main difference is that classes are purely for the task of classification while we are interested in embeddings that can be used for any supervised or unsupervised task For instance the CoGAN Liu and Tuzel 2016 considers pairs of data examples from different categories as follows INLINEFORM0 where the weights of the first few layers of INLINEFORM0 and INLINEFORM1 ie close to INLINEFORM2 are tied Mirza and Osindero 2014 proposed the conditional GAN where the generator INLINEFORM3 and the discriminator INLINEFORM4 depend on the class label INLINEFORM5 While these GANs generate samples resembling different classes other variations of GANs apply the class labels for semisupervised learning For instance Salimans et al 2016 proposed the following objective DISPLAYFORM0 where INLINEFORM0 has INLINEFORM1 classes plus the INLINEFORM2 th artificial class Similar models can be found in Odena 2016 the CatGAN in Springenberg 2016 and the LSGAN in Mao et al 2017 However all these models consider only images and do not produce word or document embeddings therefore being different from our models For generating real text Zhang et al 2016 proposed textGAN in which the generator has the following form DISPLAYFORM0 where INLINEFORM0 is the noise vector INLINEFORM1 is the generated sentence INLINEFORM2 are the words and INLINEFORM3 A unidimensional convolutional neural network Collobert et al 2011 Kim 2014 is applied as the discriminator Also a weighted softmax function is applied to make the argmax function differentiable With textGAN sentences such as we show the efficacy of our new solvers making it up to identify the optimal random vector can be generated Similar models can also be found in Wang et al 2016 Press et al 2017 and Rajeswar et al 2017 The focus of our work is to summarize information from longer documents so we apply document embeddings such as the tfidf to represent the documents rather than to generate real text For generating bagofwords embeddings of text Glover 2016 proposed the following model DISPLAYFORM0 and INLINEFORM0 is the mean squared error of a denoising autoencoder and INLINEFORM1 is the onehot word 
embedding of a document Our models are different from this model because we consider tfidf document embeddings for multiple text corpora in the deGAN model Section 32 and weGAN Section 31 can be applied to produce word embeddings Also we focus on robustness based on several corpora while Glover 2016 assumed a single corpus For extracting word embeddings given text data Mikolov et al 2013 proposed the word2vec model for which there are two variations the continuous bagofwords cBoW model Mikolov et al 2013b where the neighboring words are used to predict the appearance of each word the skipgram model where each neighboring word is used individually for prediction In GloVe Pennington et al 2013 a bilinear regression model is trained on the log of the word cooccurrence matrix In these models the weights associated with each word are used as the embedding For obtaining document embeddings the para2vec model Le and Mikolov 2014 adds perparagraph vectors to train word2vectype models so that the vectors can be used as embeddings for each paragraph A simpler approach by taking the average of the embeddings of each word in a document and output the document embedding is exhibited in Socher et al 2013 Suppose we have a number of different corpora INLINEFORM0 which for example can be based on different categories or sentiments of text documents We suppose that INLINEFORM1 INLINEFORM2 where each INLINEFORM3 represents a document The words in all corpora are collected in a dictionary and indexed from 1 to INLINEFORM4 We name the GAN model to train crosscorpus word embeddings as weGAN where we stands for word embeddings and the GAN model to generate document embeddings for multiple corpora as deGAN where de stands for document embeddings We assume that for each corpora INLINEFORM0 we are given word embeddings for each word INLINEFORM1 where INLINEFORM2 is the dimension of each word embedding We are also given a classification task on documents that is represented by a parametric model INLINEFORM3 taking document embeddings as feature vectors We construct a GAN model which combines different sets of word embeddings INLINEFORM4 INLINEFORM5 into a single set of word embeddings INLINEFORM6 Note that INLINEFORM7 are given but INLINEFORM8 is trained Here we consider INLINEFORM9 as the generator and the goal of the discriminator is to distinguish documents represented by the original embeddings INLINEFORM10 and the same documents represented by the new embeddings INLINEFORM11 Next we describe how the documents are represented by a set of embeddings INLINEFORM0 and INLINEFORM1 For each document INLINEFORM2 we define its document embedding with INLINEFORM3 as follows DISPLAYFORM0 where INLINEFORM0 can be any mapping Similarly we define the document embedding of INLINEFORM1 with INLINEFORM2 as follows with INLINEFORM3 trainable DISPLAYFORM0 In a typical example word embeddings would be based on word2vec or GLoVe Function INLINEFORM0 can be based on tfidf ie INLINEFORM1 where INLINEFORM2 is the word embedding of the INLINEFORM3 th word in the INLINEFORM4 th corpus INLINEFORM5 and INLINEFORM6 is the tfidf representation of the INLINEFORM7 th document INLINEFORM8 in the INLINEFORM9 th corpus INLINEFORM10 To train the GAN model we consider the following minimax problem DISPLAYFORM0 where INLINEFORM0 is a discriminator of whether a document is original or artificial Here INLINEFORM1 is the label of document INLINEFORM2 with respect to classifier INLINEFORM3 and INLINEFORM4 is a unit vector with only the INLINEFORM5 th 
component being one and all other components being zeros Note that INLINEFORM6 is equivalent to INLINEFORM7 but we use the former notation due to its brevity The intuition of problem 8 is explained as follows First we consider a discriminator INLINEFORM0 which is a feedforward neural network FFNN with binary outcomes and classifies the document embeddings INLINEFORM1 against the original document embeddings INLINEFORM2 Discriminator INLINEFORM3 minimizes this classification error ie it maximizes the loglikelihood of INLINEFORM4 having label 0 and INLINEFORM5 having label 1 This corresponds to DISPLAYFORM0 For the generator INLINEFORM0 we wish to minimize 8 against INLINEFORM1 so that we can apply the minimax strategy and the combined word embeddings INLINEFORM2 would resemble each set of word embeddings INLINEFORM3 Meanwhile we also consider classifier INLINEFORM4 with INLINEFORM5 outcomes and associates INLINEFORM6 with label INLINEFORM7 so that the generator INLINEFORM8 can learn from the document labeling in a semisupervised way If the classifier INLINEFORM0 outputs a INLINEFORM1 dimensional softmax probability vector we minimize the following against INLINEFORM2 which corresponds to 8 given INLINEFORM3 and INLINEFORM4 DISPLAYFORM0 For the classifier INLINEFORM0 we also minimize its negative loglikelihood DISPLAYFORM0 Assembling 911 together we retrieve the original minimax problem 8 We train the discriminator and the classifier INLINEFORM0 and the combined embeddings INLINEFORM1 according to 911 iteratively for a fixed number of epochs with the stochastic gradient descent algorithm until the discrimination and classification errors become stable The algorithm for weGAN is summarized in Algorithm 1 and Figure 1 illustrates the weGAN model Algorithm 1 Train INLINEFORM0 based on INLINEFORM1 from all corpora INLINEFORM2 Randomly initialize the weights and biases of the classifier INLINEFORM3 and discriminator INLINEFORM4 Until maximum number of iterations reached Update INLINEFORM5 and INLINEFORM6 according to 9 and 11 given a minibatch INLINEFORM7 of training examples INLINEFORM8 Update INLINEFORM9 according to 10 given a minibatch INLINEFORM10 of training examples INLINEFORM11 Output INLINEFORM12 as the crosscorpus word embeddings In this section our goal is to generate document embeddings which would resemble real document embeddings in each corpus INLINEFORM0 INLINEFORM1 We construct INLINEFORM2 generators INLINEFORM3 so that INLINEFORM4 generate artificial examples in corpus INLINEFORM5 As in Section 31 there is a certain document embedding such as tfidf bagofwords or para2vec Let INLINEFORM6 We initialize a noise vector INLINEFORM7 where INLINEFORM8 and INLINEFORM9 is any noise distribution For a generator INLINEFORM0 represented by its parameters we first map the noise vector INLINEFORM1 to the hidden layer which represents different topics We consider two hidden vectors INLINEFORM2 for general topics and INLINEFORM3 for specific topics per corpus DISPLAYFORM0 Here INLINEFORM0 represents a nonlinear activation function In this model the bias term can be ignored in order to prevent the mode collapse problem of the generator Having the hidden vectors we then map them to the generated document embedding with another activation function INLINEFORM1 DISPLAYFORM0 To summarize we may represent the process from noise to the document embedding as follows DISPLAYFORM0 Given the generated document embeddings INLINEFORM0 we consider the following minimax problem to train the generator 
INLINEFORM1 and the discriminator INLINEFORM2 INLINEFORM3 INLINEFORM4 Here we assume that any document embedding INLINEFORM0 in corpus INLINEFORM1 is a sample with respect to the probability density INLINEFORM2 Note that when INLINEFORM3 the discriminator part of our model is equivalent to the original GAN model To explain 15 first we consider the discriminator INLINEFORM0 Because there are multiple corpora of text documents here we consider INLINEFORM1 categories as output of INLINEFORM2 from which categories INLINEFORM3 represent the original corpora INLINEFORM4 and categories INLINEFORM5 represent the generated document embeddings eg bagofwords from INLINEFORM6 Assume the discriminator INLINEFORM7 a feedforward neural network outputs the distribution of a text document being in each category We maximize the loglikelihood of each document being in the correct category against INLINEFORM8 DISPLAYFORM0 Such a classifier does not only classifies text documents into different categories but also considers INLINEFORM0 fake categories from the generators When training the generators INLINEFORM1 we minimize the following which makes a comparison between the INLINEFORM2 th and INLINEFORM3 th categories DISPLAYFORM0 The intuition of 17 is that for each generated document embedding INLINEFORM0 we need to decrease INLINEFORM1 which is the probability of the generated embedding being correctly classified and increase INLINEFORM2 which is the probability of the generated embedding being classified into the target corpus INLINEFORM3 The ratio in 17 reflects these two properties We iteratively train 16 and 17 until the classification error of INLINEFORM0 becomes stable The algorithm for deGAN is summarized in Algorithm 2 and Figure 2 illustrates the deGAN model Algorithm 2 Randomly initialize the weights of INLINEFORM0 Initialize the discriminator INLINEFORM1 with the weights of the first layer which takes document embeddings as the input initialized by word embeddings and other parameters randomly initialized Until maximum number of iterations reached Update INLINEFORM2 according to 16 given a minibatch of training examples INLINEFORM3 and samples from noise INLINEFORM4 Update INLINEFORM5 according to 17 given a minibatch of training examples INLINEFORM6 and samples form noise INLINEFORM7 Output INLINEFORM8 as generators of document embeddings and INLINEFORM9 as a corpus classifier We next show that from 15 the distributions of the document embeddings from the optimal INLINEFORM0 are equal to the data distributions of INLINEFORM1 which is a generalization of Goodfellow et al 2014 to the multicorpus scenario Proposition 1 Let us assume that the random variables INLINEFORM0 are continuous with probability density INLINEFORM1 which have bounded support INLINEFORM2 INLINEFORM3 is a continuous random variable with bounded support and activations INLINEFORM4 and INLINEFORM5 are continuous and that INLINEFORM6 are solutions to 15 Then INLINEFORM7 the probability density of the document embeddings from INLINEFORM8 INLINEFORM9 are equal to INLINEFORM10 Proof Since INLINEFORM0 is bounded all of the integrals exhibited next are welldefined and finite Since INLINEFORM1 INLINEFORM2 and INLINEFORM3 are continuous it follows that for any parameters INLINEFORM4 is a continuous random variable with probability density INLINEFORM5 with finite support From the first line of 15 INLINEFORM0 This problem reduces to INLINEFORM0 subject to INLINEFORM1 the solution of which is INLINEFORM2 INLINEFORM3 Therefore the solution to 
18 is DISPLAYFORM0 We then obtain from the second line of 15 that INLINEFORM0 From nonnegativity of the KullbackLeibler divergence we conclude that INLINEFORM0 In the experiments we consider four data sets two of them newly created and the remaining two already public CNN TIME 20 Newsgroups and Reuters21578 The code and the two new data sets are available at githubcombaiyangwangemgan For the preprocessing of all the documents we transformed all characters to lower case stemmed the documents and ran the word2vec model on each corpora to obtain word embeddings with a size of 300 In all subsequent models we only consider the most frequent INLINEFORM0 words across all corpora in a data set The document embedding in weGAN is the tfidf weighted word embedding transformed by the INLINEFORM0 activation ie DISPLAYFORM0 For deGAN we use INLINEFORM0 normalized tfidf as the document embedding because it is easier to interpret than the transformed embedding in 20 For weGAN the crosscorpus word embeddings are initialized with the word2vec model trained from all documents For training our models we apply a learning rate which increases linearly from INLINEFORM0 to INLINEFORM1 and train the models for 100 epochs with a batch size of 50 per corpus The classifier INLINEFORM2 has a single hidden layer with 50 hidden nodes and the discriminator with a single hidden layer INLINEFORM3 has 10 hidden nodes All these parameters have been optimized For the labels INLINEFORM4 in 8 we apply corpus membership of each document For the noise distribution INLINEFORM0 for deGAN we apply the uniform distribution INLINEFORM1 In 14 for deGAN INLINEFORM2 and INLINEFORM3 so that the model outputs document embedding vectors which are comparable to INLINEFORM4 normalized tfidf vectors for each document For the discriminator INLINEFORM5 of deGAN we apply the word2vec embeddings based on all corpora to initialize its first layer followed by another hidden layer of 50 nodes For the discriminator INLINEFORM6 we apply a learning rate of INLINEFORM7 and for the generator INLINEFORM8 we apply a learning rate of INLINEFORM9 because the initial training phase of deGAN can be unstable We also apply a batch size of 50 per corpus For the softmax layers of deGAN we initialize them with the log of the topicword matrix in latent Dirichlet allocation LDA Blei et al 2003 in order to provide intuitive estimates For weGAN we consider two metrics for comparing the embeddings trained from weGAN and those trained from all documents 1 applying the document embeddings to cluster the documents into INLINEFORM0 clusters with the Kmeans algorithm and calculating the Rand index RI Rand 1971 against the original corpus membership 2 finetuning the classifier INLINEFORM1 and comparing the classification error against an FFNN of the same structure initialized with word2vec w2v For deGAN we compare the performance of finetuning the discriminator of deGAN for document classification and the performance of the same FFNN Each supervised model is trained for 500 epochs and the validation data set is used to choose the best epoch In the CNN data set we collected all news links on wwwcnncom in the GDELT 10 Event Database from April 1st 2013 to July 7 2017 We then collected the news articles from the links and kept those belonging to the three largest categories politics world and US We then divided these documents into INLINEFORM0 training documents from which INLINEFORM1 validation documents are held out and INLINEFORM2 testing documents We hypothesize that because 
weGAN takes into account document labels in a semisupervised way the embeddings trained from weGAN can better incorporate the labeling information and therefore produce document embeddings which are better separated The results are shown in Table 1 and averaged over 5 randomized runs Performing the Welchs ttest both changes after weGAN training are statistically significant at a INLINEFORM0 significance level Because the Rand index captures matching accuracy we observe from the Table 1 that weGAN tends to improve both metrics Meanwhile we also wish to observe the spatial structure of the trained embeddings which can be explored by the synonyms of each word measured by the cosine similarity On average the top 10 synonyms of each word differ by INLINEFORM0 word after weGAN training and INLINEFORM1 of all words have different top 10 synonyms after training Therefore weGAN tends to provide small adjustments rather than structural changes Table 2 lists the 10 most similar terms of three terms Obama Trump and US before and after weGAN training ordered by cosine similarity We observe from Table 2 that for Obama Trump and Tillerson are more similar after weGAN training which means that the structure of the weGAN embeddings can be more uptodate For Trump we observe that Clinton is not among the synonyms before but is after which shows that the synonyms after are more relevant For US we observe that after training American replaces British in the list of synonyms which is also more relevant We next discuss deGAN In Table 3 we compare the performance of finetuning the discriminator of deGAN for document classification and the performance of the FFNN initialized with word2vec The change is also statistically significant at the INLINEFORM0 level From Table 3 we observe that deGAN improves the accuracy of supervised learning To compare the generated samples from deGAN with the original bagofwords we randomly select one record in each original and artificial corpus The records are represented by the most frequent words sorted by frequency in descending order where the stop words are removed The bagofwords embeddings are shown in Table 4 From Table 4 we observe that the bagofwords embeddings of the original documents tend to contain more name entities while those of the artificial deGAN documents tend to be more general There are many additional examples not shown here with observed artificial bagofwords embeddings having many name entities such as Turkey ISIS etc from generated documents eg Syria eventually ISIS US details jet aircraft October video extremist We also perform dimensional reduction using tSNE van der Maaten and Hinton 2008 and plot 100 random samples from each original or artificial category The original samples are shown in red and the generated ones are shown in blue in Figure 3 We do not further distinguish the categories because there is no clear distinction between the three original corpora politics world and US The results are shown in Figure 3 We observe that the original and artificial examples are generally mixed together and not well separable which means that the artificial examples are similar to the original ones However we also observe that the artificial samples tend to be more centered and have no outliers represented by the outermost red oval In the TIME data set we collected all news links on timecom in the GDELT 10 Event Database from April 1st 2013 to July 7 2017 We then collected the news articles from the links and kept those belonging to the five largest categories 
Entertainment Ideas Politics US and World We divided these documents into INLINEFORM0 training documents from which INLINEFORM1 validation documents are held out and INLINEFORM2 testing documents Table 5 compares the clustering results of word2vec and weGAN and the classification accuracy of an FFNN initialized with word2vec finetuned weGAN and finetuned deGAN The results in Table 5 are the counterparts of Table 1 and Table 3 for the TIME data set The differences are also significant at the INLINEFORM0 level From Table 5 we observe that both GAN models yield improved performance of supervised learning For weGAN on an average the top 10 synonyms of each word differ by INLINEFORM0 word after weGAN training and INLINEFORM1 of all words have different top 10 synonyms after training We also compare the synonyms of the same common words Obama Trump and US which are listed in Table 6 In the TIME data set for Obama Reagan is ranked slightly higher as an American president For Trump Bush and Sanders are ranked higher as American presidents or candidates For US we note that Pentagon is ranked higher after weGAN training which we think is also reasonable because the term is closely related to the US government For deGAN we also compare the original and artificial samples in terms of the highest probability words Table 7 shows one record for each category From Table 7 we observe that the produced bagofwords are generally alike and the words in the same sample are related to each other to some extent We also perform dimensional reduction using tSNE for 100 examples per corpus and plot them in Figure 4 We observe that the points are generated mixed but deGAN cannot reproduce the outliers The 20 Newsgroups data set is a collection of news documents with 20 categories To reduce the number of categories so that the GAN models are more compact and have more samples per corpus we grouped the documents into 6 supercategories religion computer cars sport science and politics misc is ignored because of its noisiness We considered each supercategory as a different corpora We then divided these documents into INLINEFORM0 training documents from which INLINEFORM1 validation documents are held out and INLINEFORM2 testing documents We train weGAN and deGAN in the the beginning of Section 4 except that we use a learning rate of INLINEFORM3 for the discriminator in deGAN to stabilize the cost function Table 8 compares the clustering results of word2vec and weGAN and the classification accuracy of the FFNN initialized with word2vec finetuned weGAN and finetuned deGAN All comparisons are statistically significant at the INLINEFORM4 level The other results are similar to the previous two data sets and are thereby omitted here The Reuters21578 data set is a collection of newswire articles Because the data set is highly skewed we considered the eight categories with more than 100 training documents earn acq crude trade moneyfx interest moneysupply and ship We then divided these documents into INLINEFORM0 training documents from which 692 validation documents are held out and INLINEFORM1 testing documents We train weGAN and deGAN in the same way as in the 20 Newsgroups data set Table 9 compares the clustering results of word2vec and weGAN and the classification accuracy of the FFNN initialized with word2vec finetuned weGAN and finetuned deGAN All comparisons are statistically significant at the INLINEFORM2 level except the Rand index The other results are similar to the CNN and TIME data sets and are thereby omitted here In 
this paper we have demonstrated the application of the GAN model on text data with multiple corpora We have shown that the GAN model is not only able to generate images but also able to refine word embeddings and generate document embeddings Such models can better learn the inner structure of multicorpus text data and also benefit supervised learning The improvements in supervised learning are not large but statistically significant The weGAN model outperforms deGAN in terms of supervised learning for 3 out of 4 data sets and is thereby recommended The synonyms from weGAN also tend to be more relevant than the original word2vec model The tSNE plots show that our generated document embeddings are similarly distributed as the original ones M Arjovsky S Chintala and L Bottou 2017 Wasserstein GAN arXiv170107875 D Blei A Ng and M Jordan 2003 Latent Dirichlet Allocation Journal of Machine Learning Research 39931022 R Collobert J Weston L Bottou M Karlen K Kavukcuoglu and P Kuksa 2011 Natural Language Processing Almost from Scratch Journal of Machine Learning Research 1224932537 I Goodfellow J PougetAbadie M Mirza B Xu D WardeFarley S Ozair A Courville and Y Bengio 2014 Generative Adversarial Nets In Advances in Neural Information Processing Systems 27 NIPS 2014 J Glover 2016 Modeling documents with Generative Adversarial Networks In Workshop on Adversarial Training NIPS 2016 S Hochreiter and J Schmidhuber 1997 Long Shortterm Memory In Neural Computation 917351780 Y Kim Convolutional Neural Networks for Sentence Classification 2014 In The 2014 Conference on Empirical Methods on Natural Language Processing EMNLP 2014 Q Le and T Mikolov 2014 Distributed Representations of Sentences and Documents In Proceedings of the 31st International Conference on Machine Learning ICML 2014 J Li W Monroe T Shi A Ritter and D Jurafsky 2017 Adversarial Learning for Neural Dialogue Generation arXiv170106547 MY Liu and O Tuzel 2016 Coupled Generative Adversarial Networks In Advances in Neural Information Processing Systems 29 NIPS 2016 X Mao Q Li H Xie R Lau Z Wang and S Smolley 2017 Least Squares Generative Adversarial Networks arXiv161104076 T Mikolov I Sutskever K Chen G Corrado and J Dean 2013 Distributed Embeddings of Words and Phrases and Their Compositionality In Advances in Neural Information Processing Systems 26 NIPS 2013 T Mikolov K Chen G Corrado and J Dean 2013b Efficient Estimation of Word Representations in Vector Space In Workshop ICLR 2013 M Mirza S Osindero 2014 Conditional Generative Adversarial Nets arXiv14111784 A Odena 2016 Semisupervised Learning with Generative Adversarial Networks arXiv1606 01583 J Pennington R Socher and C Manning Glove Global vectors for word representation 2014 In Empirical Methods in Natural Language Processing EMNLP 2014 O Press A Bar B Bogin J Berant and L Wolf 2017 Language Generation with Recurrent Generative Adversarial Networks without Pretraining In 1st Workshop on Subword and Character level models in NLP EMNLP 2017 S Rajeswar S Subramanian F Dutil C Pal and A Courville 2017 Adversarial Generation of Natural Language arXiv170510929 W Rand 1971 Objective Criteria for the Evaluation of Clustering Methods Journal of the American Statistical Association 66846850 T Salimans I Goodfellow W Zaremba V Cheung A Radford X Chen 2016 Improved Techniques for Training GANs In Advances in Neural Information Processing Systems 29 NIPS 2016 R Socher A Perelygin Alex J Wu J Chuang C Manning A Ng and C Potts 2013 Recursive deep models for semantic compositionality over a sentiment 
treebank In Conference on Empirical Methods in Natural Language Processing EMNLP 2013 J Springenberg 2016 Unsupervised and Semisupervised Learning with Categorical Generative Adversarial Networks In 4th International Conference on Learning embeddings ICLR 2016 L van der Maaten and G Hinton 2008 Visualizing Data using tSNE Journal of Machine Learning Research 925792605 B Wang K Liu and J Zhao 2016 Conditional Generative Adversarial Networks for Commonsense Machine Comprehension In TwentySixth International Joint Conference on Artificial Intelligence IJCAI17 Y Zhang Z Gan and L Carin 2016 Generating Text via Adversarial Training In Workshop on Adversarial Training NIPS 2016 J Zhao M Mathieu and Y LeCun 2017 Energybased Generative Adversarial Networks In 5th International Conference on Learning embeddings ICLR 2017 </s>
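To make the two models above concrete, the following numpy sketch illustrates the weGAN document embedding of Eq. 20 (a tanh of the tf-idf weighted combination of word embeddings) and a single deGAN generator (a noise vector mapped, without bias terms, to a general-topic hidden vector and a corpus-specific hidden vector, which are concatenated and pushed through a softmax output layer). The layer sizes, the tanh hidden activation, and the random initialization below are illustrative assumptions rather than the configuration actually trained in the experiments.

import numpy as np

rng = np.random.default_rng(0)
V, E = 5000, 300            # vocabulary size and word-embedding dimension (assumed)

def wegan_doc_embedding(tfidf, word_embeddings):
    # Eq. 20: document embedding = tanh of the tf-idf weighted sum of word embeddings.
    # tfidf has shape (V,), word_embeddings has shape (V, E).
    return np.tanh(tfidf @ word_embeddings)

def degan_generator(z, W_general, W_specific, W_out):
    # Noise -> general-topic and corpus-specific hidden vectors (no bias terms,
    # to limit mode collapse) -> softmax document embedding over the vocabulary.
    h_general = np.tanh(W_general @ z)
    h_specific = np.tanh(W_specific @ z)
    logits = W_out @ np.concatenate([h_general, h_specific])
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Toy usage with random stand-ins for the real inputs.
word_embeddings = rng.normal(size=(V, E))
tfidf = rng.random(V)
doc_vec = wegan_doc_embedding(tfidf, word_embeddings)    # shape (E,)

z = rng.uniform(-1.0, 1.0, size=100)                     # noise vector
W_g = rng.normal(scale=0.01, size=(64, 100))             # general-topic weights
W_s = rng.normal(scale=0.01, size=(64, 100))             # corpus-specific weights
W_o = rng.normal(scale=0.01, size=(V, 128))              # output-layer weights
fake_doc = degan_generator(z, W_g, W_s, W_o)             # shape (V,), sums to 1

In training, the discriminator would score doc_vec and fake_doc among the 2N real and artificial corpus classes, and the generator of corpus i would be updated to decrease the probability assigned to the fake class N+i relative to the target class i, following the ratio objective of Eq. 17.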
<s>[INST] Stacked DeBERT: All Attention in Incomplete Data for Text Classification [/INST] Understanding a users intent and sentiment is of utmost importance for current intelligent chatbots to respond appropriately to human requests However current systems are not able to perform to their best capacity when presented with incomplete data meaning sentences with missing or incorrect words This scenario is likely to happen when one considers human error done in writing In fact it is rather naive to assume that users will always type fully grammatically correct sentences Panko BIBREF0 goes as far as claiming that human accuracy regarding research paper writing is none when considering the entire document This has been aggravated with the advent of internet and social networks which allowed language and modern communication to be been rapidly transformed BIBREF1 BIBREF2 Take Twitter for instance where information is expected to be readily communicated in short and concise sentences with little to no regard to correct sentence grammar or word spelling BIBREF3 Further motivation can be found in Automatic Speech Recognition ASR applications where high error rates prevail and pose an enormous hurdle in the broad adoption of speech technology by users worldwide BIBREF4 This is an important issue to tackle because in addition to more widespread user adoption improving SpeechtoText STT accuracy diminishes error propagation to modules using the recognized text With that in mind in order for current systems to improve the quality of their services there is a need for development of robust intelligent systems that are able to understand a user even when faced with incomplete representation in language The advancement of deep neural networks have immensely aided in the development of the Natural Language Processing NLP domain Tasks such as text generation sentence correction image captioning and text classification have been possible via models such as Convolutional Neural Networks and Recurrent Neural Networks BIBREF5 BIBREF6 BIBREF7 More recently stateoftheart results have been achieved with attention models more specifically Transformers BIBREF8 Surprisingly however there is currently no research on incomplete text classification in the NLP community Realizing the need of research in that area we make it the focus of this paper In this novel task the model aims to identify the users intent or sentiment by analyzing a sentence with missing andor incorrect words In the sentiment classification task the model aims to identify the users sentiment given a tweet written in informal language and without regards for sentence correctness Current approaches for Text Classification tasks focus on efficient embedding representations Kim et al BIBREF9 use semantically enriched word embeddings to make synonym and antonym word vectors respectively more and less similar in order to improve intent classification performance Devlin et al BIBREF10 propose Bidirectional Encoder Representations from Transformers BERT a powerful bidirectional language representation model based on Transformers achieving stateoftheart results on eleven NLP tasks BIBREF11 including sentiment text classification Concurrently Shridhar et al BIBREF12 also reach state of the art in the intent recognition task using Semantic Hashing for feature representation followed by a neural classifier All aforementioned approaches are however applied to datasets based solely on complete data The incomplete data problem is usually approached as a 
reconstruction or imputation task and is most often related to missing numbers imputation BIBREF13 Vincent et al BIBREF14 BIBREF15 propose to reconstruct clean data from its noisy version by mapping the input to meaningful representations This approach has also been shown to outperform other models such as predictive mean matching random forest Support Vector Machine SVM and Multiple imputation by Chained Equations MICE at missing data imputation tasks BIBREF16 BIBREF17 Researchers in those two areas have shown that meaningful feature representation of data is of utter importance for high performance achieving methods We propose a model that combines the power of BERT in the NLP domain and the strength of denoising strategies in incomplete data reconstruction to tackle the tasks of incomplete intent and sentiment classification This enables the implementation of a novel encoding scheme more robust to incomplete data called Stacked Denoising BERT or Stacked DeBERT Our approach consists of obtaining richer input representations from input tokens by stacking denoising transformers on an embedding layer with vanilla transformers The embedding layer and vanilla transformers extract intermediate input features from the input tokens and the denoising transformers are responsible for obtaining richer input representations from them By improving BERT with stronger denoising abilities we are able to reconstruct missing and incorrect words embeddings and improve classification accuracy To summarize our contribution is twofold Novel model architecture that is more robust to incomplete data including missing or incorrect words in text Proposal of the novel tasks of incomplete intent and sentiment classification from incorrect sentences and release of corpora related with these tasks The remainder of this paper is organized in four sections with Section SECREF2 explaining the proposed model This is followed by Section SECREF3 which includes a detailed description of the dataset used for training and evaluation purposes and how it was obtained Section SECREF4 covers the baseline models used for comparison training specifications and experimental results Finally Section SECREF5 wraps up this paper with conclusion and future works We propose Stacked Denoising BERT DeBERT as a novel encoding scheming for the task of incomplete intent classification and sentiment classification from incorrect sentences such as tweets and text with STT error The proposed model illustrated in Fig FIGREF4 is structured as a stacking of embedding layers and vanilla transformer layers similarly to the conventional BERT BIBREF10 followed by layers of novel denoising transformers The main purpose of this model is to improve the robustness and efficiency of BERT when applied to incomplete data by reconstructing hidden embeddings from sentences with missing words By reconstructing these hidden embeddings we are able to improve the encoding scheme in BERT The initial part of the model is the conventional BERT a multilayer bidirectional Transformer encoder and a powerful language model During training BERT is finetuned on the incomplete text classification corpus see Section SECREF3 The first layer preprocesses the input sentence by making it lowercase and by tokenizing it It also prefixes the sequence of tokens with a special character CLS and sufixes each sentence with a SEP character It is followed by an embedding layer used for input representation with the final input embedding being a sum of token embedddings segmentation 
embeddings and position embeddings The first one token embedding layer uses a vocabulary dictionary to convert each token into a more representative embedding The segmentation embedding layer indicates which tokens constitute a sentence by signaling either 1 or 0 In our case since our data are formed of single sentences the segment is 1 until the first SEP character appears indicating segment A and then it becomes 0 segment B The position embedding layer as the name indicates adds information related to the tokens position in the sentence This prepares the data to be considered by the layers of vanilla bidirectional transformers which outputs a hidden embedding that can be used by our novel layers of denoising transformers Although BERT has shown to perform better than other baseline models when handling incomplete data it is still not enough to completely and efficiently handle such data Because of that there is a need for further improvement of the hidden feature vectors obtained from sentences with missing words With this purpose in mind we implement a novel encoding scheme consisting of denoising transformers which is composed of stacks of multilayer perceptrons for the reconstruction of missing words embeddings by extracting more abstract and meaningful hidden feature vectors and bidirectional transformers for improved embedding representation The embedding reconstruction step is trained on sentence embeddings extracted from incomplete data hinc as input and embeddings corresponding to its complete version hcomp as target Both input and target are obtained after applying the embedding layers and the vanilla transformers as indicated in Fig FIGREF4 and have shape Nbs 768 128 where Nbs is the batch size 768 is the original BERT embedding size for a single token and 128 is the maximum sequence length in a sentence The stacks of multilayer perceptrons are structured as two sets of three layers with two hidden layers each The first set is responsible for compressing the hinc into a latentspace representation extracting more abstract features into lower dimension vectors z1 z2 and mathbf z with shape Nbs 128 128 Nbs 32 128 and Nbs 12 128 respectively This process is shown in Eq DISPLAYFORM5 where fcdot is the parameterized function mapping hinc to the hidden state mathbf z The second set then respectively reconstructs z1 z2 and mathbf z into hrec1 hrec2 and hrec This process is shown in Eq DISPLAYFORM6 where gcdot is the parameterized function that reconstructs mathbf z as hrec The reconstructed hidden sentence embedding hrec is compared with the complete hidden sentence embedding hcomp through a mean square error loss function as shown in Eq DISPLAYFORM7 After reconstructing the correct hidden embeddings from the incomplete sentences the correct hidden embeddings are given to bidirectional transformers to generate input representations The model is then finetuned in an endtoend manner on the incomplete text classification corpus Classification is done with a feedforward network and softmax activation function Softmax sigma is a discrete probability distribution function for NC classes with the sum of the classes probability being 1 and the maximum value being the predicted class The predicted class can be mathematically calculated as in Eq DISPLAYFORM8 where o W t b the output of the feedforward layer used for classification In order to evaluate the performance of our model we need access to a naturally noisy dataset with real human errors Poor quality texts obtained from Twitter called 
tweets are then ideal for our task For this reason we choose Kaggles twoclass Sentiment140 dataset BIBREF18 which consists of spoken text being used in writing and without strong consideration for grammar or sentence correctness Thus it has many mistakes as specified in Table TABREF11 Even though this corpus has incorrect sentences and their emotional labels they lack their respective corrected sentences necessary for the training of our model In order to obtain this missing information we outsource native English speakers from an unbiased and anonymous platform called Amazon Mechanical Turk MTurk BIBREF19 which is a paid marketplace for Human Intelligence Tasks HITs We use this platform to create tasks for native English speakers to format the original incorrect tweets into correct sentences Some examples are shown in Table TABREF12 After obtaining the correct sentences our twoclass dataset has class distribution as shown in Table TABREF14 There are 200 sentences used in the training stage with 100 belonging to the positive sentiment class and 100 to the negative class and 50 samples being used in the evaluation stage with 25 negative and 25 positive This totals in 300 samples with incorrect and correct sentences combined Since our goal is to evaluate the models performance and robustness in the presence of noise we only consider incorrect data in the testing phase Note that BERT is a pretrained model meaning that small amounts of data are enough for appropriate finetuning In the intent classification task we are presented with a corpus that suffers from the opposite problem of the Twitter sentiment classification corpus In the intent classification corpus we have the complete sentences and intent labels but lack their corresponding incomplete sentences and since our task revolves around text classification in incomplete or incorrect data it is essential that we obtain this information To remedy this issue we apply a TexttoSpeech TTS module followed by a SpeechtoText STT module to the complete sentences in order to obtain incomplete sentences with STT error Due to TTS and STT modules available being imperfect the resulting sentences have a reasonable level of noise in the form of missing or incorrectly transcribed words Analysis on this dataset adds value to our work by enabling evaluation of our models robustness to different rates of data incompleteness The dataset used to evaluate the models performance is the Chatbot Natural Language Unerstanding NLU Evaluation Corpus introduced by Braun et al BIBREF20 to test NLU services It is a publicly available benchmark and is composed of sentences obtained from a German Telegram chatbot used to answer questions about public transport connections The dataset has two intents namely Departure Time and Find Connection with 100 train and 106 test samples shown in Table TABREF18 Even though English is the main language of the benchmark this dataset contains a few German station and street names The incomplete dataset used for training is composed of lowercased incomplete data obtained by manipulating the original corpora The incomplete sentences with STT error are obtained in a 2step process shown in Fig FIGREF22 The first step is to apply a TTS module to the available complete sentence Here we apply gtts a Google TexttoSpeech python library and macsay a terminal command available in Mac OS as say The second step consists of applying an STT module to the obtained audio files in order to obtain text containing STT errors The STT module used here was 
witai freely available and maintained by Witai The mentioned TTS and STT modules were chosen according to code availability and whether its freely available or has high daily usage limitations Table TABREF24 exemplifies a complete and its respective incomplete sentences with different TTSSTT combinations thus varying rates of missing and incorrect words The level of noise in the STT imbued sentences is denoted by a inverted BLEU iBLEU score ranging from 0 to 1 The inverted BLEU score is denoted in Eq DISPLAYFORM23 where BLEU is a common metric usually used in machine translation tasks BIBREF21 We decide to showcase that instead of regular BLEU because it is more indicative to the amount of noise in the incomplete text where the higher the iBLEU the higher the noise Besides the already mentioned BERT the following baseline models are also used for comparison We focus on the three following services where the first two are commercial services and last one is open source with two separate backends Google Dialogflow formerly Apiai SAP Conversational AI formerly Recastai and Rasa spacy and tensorflow backend Shridhar et al BIBREF12 proposed a word embedding method that doesnt suffer from outofvocabulary issues The authors achieve this by using hash tokens in the alphabet instead of a single word making it vocabulary independent For classification classifiers such as Multilayer Perceptron MLP Support Vector Machine SVM and Random Forest are used A complete list of classifiers and training specifications are given in Section SECREF31 The baseline and proposed models are each trained 3 separate times for the incomplete intent classification task complete data and one for each of the TTSSTT combinations gttswitai and macsaywitai Regarding the sentiment classification from incorrect sentences task the baseline and proposed models are each trained 3 times original text corrected text and incorrect with correct texts The reported F1 scores are the best accuracies obtained from 10 runs No settable training configurations available in the online platforms Trained on 3gram feature vector size of 768 as to match the BERT embedding size and 13 classifiers with parameters set as specified in the authors paper so as to allow comparison MLP with 3 hidden layers of sizes 300 100 50 respectively Random Forest with 50 estimators or trees 5fold Grid Search with Random Forest classifier and estimator 50 60 70 Linear Support Vector Classifier with L1 and L2 penalty and tolerance of 103 Regularized linear classifier with Stochastic Gradient Descent SGD learning with regularization term alpha104 and L1 L2 and ElasticNet penalty Nearest Centroid with Euclidian metric where classification is done by representing each class with a centroid Bernoulli Naive Bayes with smoothing parameter alpha102 Kmeans clustering with 2 clusters and L2 penalty and Logistic Regression classifier with L2 penalty tolerance of 104 and regularization term of 10 Most often the best performing classifier was MLP Conventional BERT is a BERTbaseuncased model meaning that it has 12 transformer blocks L hidden size H of 768 and 12 selfattention heads A The model is finetuned with our dataset on 2 Titan X GPUs for 3 epochs with Adam Optimizer learning rate of 2105 maximum sequence length of 128 and warm up proportion of 01 The train batch size is 4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus Our proposed model is trained in endtoend manner on 2 Titan X GPUs with training time depending on the size of the 
dataset and train batch size The stack of multilayer perceptrons are trained for 100 and 1000 epochs with Adam Optimizer learning rate of 103 weight decay of 105 MSE loss criterion and batch size the same as BERT 4 for the Twitter Sentiment Corpus and 8 for the Chatbot Intent Classification Corpus Experimental results for the Twitter Sentiment Classification task on Kaggles Sentiment140 Corpus dataset displayed in Table TABREF37 show that our model has better F1micros scores outperforming the baseline models by 6 to 8 We evaluate our model and baseline models on three versions of the dataset The first one Inc only considers the original data containing naturally incorrect tweets and achieves accuracy of 80 against BERTs 72 The second version Corr considers the corrected tweets and shows higher accuracy given that it is less noisy In that version Stacked DeBERT achieves 82 accuracy against BERTs 76 an improvement of 6 In the last case IncCorr we consider both incorrect and correct tweets as input to the models in hopes of improving performance However the accuracy was similar to the first aforementioned version 80 for our model and 74 for the second highest performing model Since the first and last corpus gave similar performances with our model we conclude that the Twitter dataset does not require complete sentences to be given as training input in addition to the original naturally incorrect tweets in order to better model the noisy sentences In addition to the overall F1score we also present a confusion matrix in Fig FIGREF38 with the perclass F1scores for BERT and Stacked DeBERT The normalized confusion matrix plots the predicted labels versus the targettarget labels Similarly to Table TABREF37 we evaluate our model with the original Twitter dataset the corrected version and both original and corrected tweets It can be seen that our model is able to improve the overall performance by improving the accuracy of the lower performing classes In the Inc dataset the true class 1 in BERT performs with approximately 50 However Stacked DeBERT is able to improve that to 72 although to a cost of a small decrease in performance of class 0 A similar situation happens in the remaining two datasets with improved accuracy in class 0 from 64 to 84 and 60 to 76 respectively Experimental results for the Intent Classification task on the Chatbot NLU Corpus with STT error can be seen in Table TABREF40 When presented with data containing STT error our model outperforms all baseline models in both combinations of TTSSTT gttswitai outperforms the second placing baseline model by 094 with F1score of 9717 and macsaywitai outperforms the next highest achieving model by 189 with F1score of 9623 The table also indicates the level of noise in each dataset with the already mentioned iBLEU score where 0 means no noise and higher values mean higher quantity of noise As expected the models accuracy degrade with the increase in noise thus F1scores of gttswitai are higher than macsaywitai However while the other models decay rapidly in the presence of noise our model does not only outperform them but does so with a wider margin This is shown with the increasing robustness curve in Fig FIGREF41 and can be demonstrated by macsaywitai outperforming the baseline models by twice the gap achieved by gttswitai Further analysis of the results in Table TABREF40 show that BERT decay is almost constant with the addition of noise with the difference between the complete data and gttswitai being 188 and gttswitai and macsaywitai being 
189 Whereas in Stacked DeBERT that difference is 189 and 094 respectively This is stronger indication of our models robustness in the presence of noise Additionally we also present Fig FIGREF42 with the normalized confusion matrices for BERT and Stacked DeBERT for sentences containing STT error Analogously to the Twitter Sentiment Classification task the perclass F1scores show that our model is able to improve the overall performance by improving the accuracy of one class while maintaining the highachieving accuracy of the second one In this work we proposed a novel deep neural network robust to noisy text in the form of sentences with missing andor incorrect words called Stacked DeBERT The idea was to improve the accuracy performance by improving the representation ability of the model with the implementation of novel denoising transformers More specifically our model was able to reconstruct hidden embeddings from their respective incomplete hidden embeddings Stacked DeBERT was compared against three NLU service platforms and two other machine learning methods namely BERT and Semantic Hashing with neural classifier Our model showed better performance when evaluated on F1 scores in both Twitter sentiment and intent text with STT error classification tasks The perclass F1 score was also evaluated in the form of normalized confusion matrices showing that our model was able to improve the overall performance by better balancing the accuracy of each class tradingoff small decreases in high achieving class for significant improvements in lower performing ones In the Chatbot dataset accuracy improvement was achieved even without tradeoff with the highest achieving classes maintaining their accuracy while the lower achieving class saw improvement Further evaluation on the F1scores decay in the presence of noise demonstrated that our model is more robust than the baseline models when considering noisy data be that in the form of incorrect sentences or sentences with STT error Not only that experiments on the Twitter dataset also showed improved accuracy in clean data with complete sentences We infer that this is due to our model being able to extract richer data representations from the input data regardless of the completeness of the sentence For future works we plan on evaluating the robustness of our model against other types of noise such as word reordering word insertion and spelling mistakes in sentences In order to improve the performance of our model further experiments will be done in search for more appropriate hyperparameters and more complex neural classifiers to substitute the last feedforward network layer This work was partly supported by Institute of Information Communications Technology Planning Evaluation IITP grant funded by the Korea government MSIT 2016000564 Development of Intelligent Interaction Technology Based on Context Awareness and Human Intention Understanding and Korea Evaluation Institute of Industrial Technology KEIT grant funded by the Korea government MOTIE 50 and the Technology Innovation Program Industrial Strategic Technology Development Program No 10073162 funded By the Ministry of Trade Industry Energy MOTIE Korea 50 </s>
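The denoising stack at the core of Stacked DeBERT, described in Section 2 above, can be sketched compactly in PyTorch. The sketch below follows the reported layer sizes (768 to 128 to 32 to 12 and back) and the reported optimizer settings (Adam, learning rate 1e-3, weight decay 1e-5, MSE loss), but the tanh activation between layers and the (batch, sequence, hidden) tensor layout are our own assumptions for illustration; the paper writes the embedding shape as (N_bs, 768, 128).

import torch
import torch.nn as nn

class DenoisingStack(nn.Module):
    # Two stacks of multilayer perceptrons applied at each token position:
    # the first compresses the 768-dimensional hidden vectors of the incomplete
    # sentence into a 12-dimensional latent code (z1 -> z2 -> z), the second
    # reconstructs them (h_rec1 -> h_rec2 -> h_rec).
    def __init__(self, hidden_size=768):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Linear(hidden_size, 128), nn.Tanh(),
            nn.Linear(128, 32), nn.Tanh(),
            nn.Linear(32, 12), nn.Tanh(),
        )
        self.decode = nn.Sequential(
            nn.Linear(12, 32), nn.Tanh(),
            nn.Linear(32, 128), nn.Tanh(),
            nn.Linear(128, hidden_size),
        )

    def forward(self, h_inc):
        # h_inc: (batch, seq_len, hidden) embeddings of the incomplete sentence.
        return self.decode(self.encode(h_inc))

# One training step against the embeddings of the complete sentence.
model = DenoisingStack()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
criterion = nn.MSELoss()

h_inc = torch.randn(8, 128, 768)    # stand-in for BERT output on incomplete text
h_comp = torch.randn(8, 128, 768)   # stand-in for BERT output on complete text

optimizer.zero_grad()
loss = criterion(model(h_inc), h_comp)
loss.backward()
optimizer.step()

At fine-tuning and inference time the reconstructed embeddings would be passed on to the bidirectional transformers and the feed-forward softmax classifier rather than compared against the complete-sentence embeddings.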
<s>[INST] Gunrock: A Social Bot for Complex and Engaging Long Conversations [/INST] Amazon Alexa Prize BIBREF0 provides a platform to collect real humanmachine conversation data and evaluate performance on speechbased social conversational systems Our system Gunrock BIBREF1 addresses several limitations of prior chatbots BIBREF2 BIBREF3 BIBREF4 including inconsistency and difficulty in complex sentence understanding eg long utterances and provides several contributions First Gunrocks multistep language understanding modules enable the system to provide more useful information to the dialog manager including a novel dialog act scheme Additionally the natural language understanding NLU module can handle more complex sentences including those with coreference Second Gunrock interleaves actions to elicit users opinions and provide responses to create an indepth engaging conversation while a related strategy to interleave task and nontask functions in chatbots has been proposed BIBREF5 no chatbots to our knowledge have employed a factopinion interleaving strategy Finally we use an extensive persona database to provide coherent profile information a critical challenge in building social chatbots BIBREF3 Compared to previous systems BIBREF4 Gunrock generates more balanced conversations between human and machine by encouraging and understanding more human inputs see Table TABREF2 for an example Figure FIGREF3 provides an overview of Gunrocks architecture We extend the Amazon Conversational Bot Toolkit CoBot BIBREF6 which is a flexible eventdriven framework CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit ASK BIBREF7 Gunrock corrects ASR according to the context asr and creates a natural language understanding NLU nlu module where multiple components analyze the user utterances A dialog manager DM dm uses features from NLU to select topic dialog modules and defines an individual dialog flow Each dialog module leverages several knowledge bases knowledge Then a natural language generation NLG nlg module generates a corresponding response Finally we markup the synthesized responses and return to the users through text to speech TTS tts While we provide an overview of the system in the following sections for detailed system implementation details please see the technical report BIBREF1 Gunrock receives ASR results with the raw text and timestep information for each word in the sequence without case information and punctuation Keywords especially named entities such as movie names are prone to generate ASR errors without contextual information but are essential for NLU and NLG Therefore Gunrock uses domain knowledge to correct these errors by comparing noun phrases to a knowledge base eg a list of the most popular movies names based on their phonetic information We extract the primary and secondary code using The Double Metaphone Search Algorithm BIBREF8 for noun phrases extracted by noun trunks and the selected knowledge base and suggest a potential fix by code matching An example can be seen in User3 and Gunrock3 in Table TABREF2 Gunrock is designed to engage users in deeper conversation accordingly a user utterance can consist of multiple units with complete semantic meanings We first split the corrected raw ASR text into sentences by inserting break tokens An example is shown in User3 in Table TABREF2 Meanwhile we mask named entities before segmentation so that a named entity will not be segmented into multiple parts and an utterance with a complete 
meaning is maintained egi like the movie a star is born We also leverage timestep information to filter out false positive corrections After segmentation our coreference implementation leverages entity knowledge such as person versus event and replaces nouns with their actual reference by entity ranking We implement coreference resolution on entities both within segments in a single turn as well as across multiple turns For instance him in the last segment in User5 is replaced with bradley cooper in Table TABREF2 Next we use a constituency parser to generate noun phrases from each modified segment Within the sequence pipeline to generate complete segments Gunrock detects 1 topic 2 named entities and 3 sentiment using ASK in parallel The NLU module uses knowledge graphs including Google Knowledge Graph to call for a detailed description of each noun phrase for understanding In order to extract the intent for each segment we designed MIDAS a humanmachine dialog act scheme with 23 tags and implemented a multilabel dialog act classification model using contextual information BIBREF9 Next the NLU components analyzed on each segment in a user utterance are sent to the DM and NLG module for state tracking and generation respectively We implemented a hierarchical dialog manager consisting of a high level and low level DMs The former leverages NLU outputs for each segment and selects the most important segment for the system as the central element using heuristics For example i just finished reading harry potter triggers SubDM Books Utilizing the central element and features extracted from NLU input utterances are mapped onto 11 possible topic dialog modules eg movies books animals etc including a backup module retrieval Low level dialog management is handled by the separate topic dialog modules which use modular finite state transducers to execute various dialog segments processed by the NLU Using topicspecific modules enables deeper conversations that maintain the context We design dialog flows in each of the finite state machines as well Dialog flow is determined by rulebased transitions between a specified fixed set of dialog states To ensure that our states and transitions are effective we leverage large scale user data to find high probability responses and high priority responses to handle in different contexts Meanwhile dialog flow is customized to each user by tracking user attributes as dialog context In addition each dialog flow is adaptive to user responses to show acknowledgement and understanding eg talking about pet ownership in the animal module Based on the user responses many dialog flow variations exist to provide a fresh experience each time This reduces the feeling of dialogs being scripted and repetitive Our dialog flows additionally interleave facts opinions experiences and questions to make the conversation flexible and interesting In the meantime we consider feedback signals such as continue and stop from the current topic dialog module indicating whether it is able to respond to the following request in the dialog flow in order to select the best response module Additionally in all modules we allow mixedinitiative interactions users can trigger a new dialog module when they want to switch topics while in any state For example users can start a new conversation about movies from any other topic module All topic dialog modules query knowledge bases to provide information to the user To respond to general factual questions Gunrock queries the EVI factual database as well as 
other uptodate scraped information appropriate for the submodule such as news and current showing movies in a specific location from databases including IMDB One contribution of Gunrock is the extensive Gunrock Persona Backstory database consisting of over 1000 responses to possible questions for Gunrock as well as reasoning for her responses for roughly 250 questions see Table 2 We designed the system responses to elicit a consistent personality within and across modules modeled as a female individual who is positive outgoing and is interested in science and technology In order to avoid repetitive and nonspecific responses commonly seen in dialog systems BIBREF10 Gunrock uses a template manager to select from a handcrafted response templates based on the dialog state One dialog state can map to multiple response templates with similar semantic or functional content but differing surface forms Among these response templates for the same dialog state one is randomly selected without repetition to provide variety unless all have been exhausted When a response template is selected any slots are substituted with actual contents including queried information for news and specific data for weather For example to ground a movie name due to ASR errors or multiple versions one template is Are you talking about movietitle released in releaseyear starring actorname as actorrole Modulespecific templates were generated for each topic eg animals but some of the templates are generalizable across different modules eg Whats your favorite movie book place to visit In many cases response templates corresponding to different dialog acts are dynamically composed to give the final response For example an appropriate acknowledgement for the users response can be combined with a predetermined followup question After NLG we adjust the TTS of the system to improve the expressiveness of the voice to convey that the system is an engaged and active participant in the conversation We use a rulebased system to systematically add interjections specifically Alexa Speechcons and fillers to approximate humanlike cognitiveemotional expression BIBREF11 For more on the framework and analysis of the TTS modifications see BIBREF12 From January 5 2019 to March 5 2019 we collected conversational data for Gunrock During this time no other code updates occurred We analyzed conversations for Gunrock with at least 3 user turns to avoid conversations triggered by accident Overall this resulted in a total of 34432 user conversations Together these users gave Gunrock an average rating of 365 median 40 which was elicited at the end of the conversation On a scale from 1 to 5 stars how do you feel about talking to this socialbot again Users engaged with Gunrock for an average of 2092 overall turns median 130 with an average of 698 words per utterance and had an average conversation time of 733 minutes median 287 min We conducted three principal analyses users response depth wordcount backstory queries backstorypersona and interleaving of personal and factual responses pets Two unique features of Gunrock are its ability to dissect longer complex sentences and its methods to encourage users to be active conversationalists elaborating on their responses In prior work even if users are able to drive the conversation often bots use simple yesno questions to control the conversational flow to improve understanding as a result users are more passive interlocutors in the conversation We aimed to improve user engagement by designing the 
conversation to have more openended opinionpersonal questions and show that the system can understand the users complex utterances See nlu for details on NLU Accordingly we ask if users speech behavior will reflect Gunrocks technical capability and conversational strategy producing longer sentences We assessed the degree of conversational depth by measuring users mean word count Prior work has found that an increase in word count has been linked to improved user engagement eg in a social dialog system BIBREF13 For each user conversation we extracted the overall rating the number of turns of the interaction and the users perutterance word count averaged across all utterances We modeled the relationship between word count and the two metrics of user engagement overall rating mean number of turns in separate linear regressions Results showed that users who on average produced utterances with more words gave significantly higher ratings beta 001 SE0002 t479 p0001see Figure 2 and engaged with Gunrock for significantly greater number of turns beta 185 SE005 t3558 p0001 see Figure 2 These results can be interpreted as evidence for Gunrocks ability to handle complex sentences where users are not constrained to simple responses to be understood and feel engaged in the conversation and evidence that individuals are more satisfied with the conversation when they take a more active role rather than the system dominating the dialog On the other hand another interpretation is that users who are more talkative may enjoy talking to the bot in general and thus give higher ratings in tandem with higher average word counts We assessed the users interest in Gunrock by tagging instances where the user triggered Gunrocks backstory eg Whats your favorite color For users with at least one backstory question we modeled overall log Rating with a linear regression by the log Number of Backstory Questions Asked log transformed due to the variables nonlinear relationship We hypothesized that users who show greater curiosity about Gunrock will display higher overall ratings for the conversation based on her responses Overall the number of times users queried Gunrocks backstory was strongly related to the rating they gave at the end of the interaction logbeta 010 SE0002 t584 p0001see Figure 3 This suggests that maintaining a consistent personality and having enough responses to questions the users are interested in may improve user satisfaction Gunrock includes a specific topic module on animals which includes a factual component where the system provides animal facts as well as a more personalized component about pets Our system is designed to engage users about animals in a more casual conversational style BIBREF14 eliciting followup questions if the user indicates they have a pet if we are able to extract the pets name we refer to it in the conversation eg Oliver is a great name for a cat How long have you had Oliver In cases where the user does not indicate that they have a pet the system solely provides animal facts Therefore the animal module can serve as a test of our interleaving strategy we hypothesized that combining facts and personal questions in this case about the users pet would lead to greater user satisfaction overall We extracted conversations where Gunrock asked the user if they had ever had a pet and categorized responses as Yes No or NA if users did not respond with an affirmative or negative response We modeled user rating with a linear regression model with predictor of Has Pet 2 levels Yes No We 
found that users who talked to Gunrock about their pet showed significantly higher overall ratings of the conversation beta 015 SE006 t253 p0016 see Figure 4 One interpretation is that interleaving factual information with more indepth questions about their pet result in improved user experience Yet another interpretation is that pet owners may be more friendly and amenable to a socialbot for example prior research has linked differences in personality to pet ownership BIBREF15 Gunrock is a social chatbot that focuses on having long and engaging speechbased conversations with thousands of real users Accordingly our architecture employs specific modules to handle longer and complex utterances and encourages users to be more active in a conversation Analysis shows that users speech behavior reflects these capabilities Longer sentences and more questions about Gunrockss backstory positively correlate with user experience Additionally we find evidence for interleaved dialog flow where combining factual information with personal opinions and stories improve user satisfaction Overall this work has practical applications in applying these design principles to other social chatbots as well as theoretical implications in terms of the nature of humancomputer interaction cf Computers are Social Actors BIBREF16 Our results suggest that users are engaging with Gunrock in similar ways to other humans in chitchat about general topics eg animals movies etc taking interest in Gunrocks backstory and persona and even producing more information about themselves in return We would like to acknowledge the help from Amazon in terms of financial and technical support </s>
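As an illustration of the ASR correction strategy described in the NLU section (matching phonetic codes of recognized noun phrases against a domain knowledge base such as popular movie titles), the sketch below substitutes a crude consonant-skeleton code for the Double Metaphone algorithm actually used by Gunrock, so that the example stays dependency free; the helper names and the movie list are hypothetical.

```python
import re

def rough_phonetic_code(phrase):
    # Crude stand-in for Double Metaphone: lowercase, keep the first letter of
    # each word, drop remaining vowels, and collapse repeated characters.
    cleaned = re.sub(r"[^a-z ]", "", phrase.lower())
    codes = []
    for word in cleaned.split():
        skeleton = word[0] + re.sub(r"[aeiou]", "", word[1:])
        codes.append(re.sub(r"(.)\1+", r"\1", skeleton))
    return " ".join(codes)

def suggest_fix(noun_phrase, knowledge_base):
    # Return a knowledge-base entry whose phonetic code matches the noun
    # phrase extracted from the (possibly misrecognized) ASR output.
    code = rough_phonetic_code(noun_phrase)
    for entry in knowledge_base:
        if rough_phonetic_code(entry) == code:
            return entry
    return None

# Hypothetical knowledge base of popular movie titles.
popular_movies = ["a star is born", "black panther", "bohemian rhapsody"]
print(suggest_fix("a star is borne", popular_movies))  # -> a star is born
```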
<s>[INST] Towards Detection of Subjective Bias using Contextualized Word Embeddings [/INST] In natural language subjectivity refers to the aspects of communication used to express opinions evaluations and speculationsBIBREF0 often influenced by ones emotional state and viewpoints Writers and editors of texts like news and textbooks try to avoid the use of biased language yet subjective bias is pervasive in these texts More than 56 of Americans believe that news sources do not report the news objectively thus implying the prevalence of the bias Therefore when presenting factual information it becomes necessary to differentiate subjective language from objective language There has been considerable work on capturing subjectivity using textclassification models ranging from linguisticfeaturebased modelsBIBREF1 to finetuned pretrained word embeddings like BERTBIBREF2 The detection of biasinducing words in a Wikipedia statement was explored in BIBREF1 The authors propose the Neutral Point of View NPOV corpus made using Wikipedia revision history containing Wikipedia edits that are specifically designed to remove subjective bias They use logistic regression with linguistic features including factive verbs hedges and subjective intensifiers to detect biasinducing words In BIBREF2 the authors extend this work by mitigating subjective bias after detecting biasinducing words using a BERTbased model However they primarily focused on detecting and mitigating subjective bias for singleword edits We extend their work by incorporating multiword edits by detecting bias at the sentence level We further use their version of the NPOV corpus called Wiki Neutrality CorpusWNC for this work The task of detecting sentences containing subjective bias rather than individual words inducing the bias has been explored in BIBREF3 However they conduct majority of their experiments in controlled settings limiting the type of articles from which the revisions were extracted Their attempt to test their models in a general setting is dwarfed by the fact that they used revisions from a single Wikipedia article resulting in just 100 instances to evaluate their proposed models robustly Consequently we perform our experiments in the complete WNC corpus which consists of 423823 revisions in Wikipedia marked by its editors over a period of 15 years to simulate a more general setting for the bias In this work we investigate the application of BERTbased models for the task of subjective language detection We explore various BERTbased models including BERT RoBERTa ALBERT with their base and large specifications along with their native classifiers We propose an ensemble model exploiting predictions from these models using multiple ensembling techniques We show that our model outperforms the baselines by a margin of 56 of F1 score and 595 of Accuracy In this section we outline baseline models like BERTlarge We further propose three approaches optimized BERTbased models distilled pretrained models and the use of ensemble methods for the task of subjectivity detection FastTextBIBREF4 It uses bag of words and bag of ngrams as features for text classification capturing partial information about the local word order efficiently BiLSTM Unlike feedforward neural networks recurrent neural networks like BiLSTMs use memory based on history information to learn longdistance features and then predict the output We use a twolayer BiLSTM architecture with GloVe word embeddings as a strong RNN baseline BERT BIBREF5 It is a contextualized word 
representation model that uses bidirectional transformers pretrained on a large 33B word corpus We use the BERTlarge model finetuned on the training dataset Optimized BERTbased models We use BERTbased models optimized as in BIBREF6 and BIBREF7 pretrained on a dataset as large as twelve times as compared to BERTlarge with bigger batches and longer sequences ALBERT introduced in BIBREF7 uses factorized embedding parameterization and crosslayer parameter sharing for parameter reduction These optimizations have led both the models to outperform BERTlarge in various benchmarking tests like GLUE for text classification and SQuAD for Question Answering Distilled BERTbased models Secondly we propose to use distilled BERTbased models as introduced in BIBREF8 They are smaller generalpurpose language representation model pretrained by leveraging distillation knowledge This results in significantly smaller and faster models with performance comparable to their undistilled versions We finetune these pretrained distilled models on the training corpus to efficiently detect subjectivity BERTbased ensemble models Lastly we use the weightedaverage ensembling technique to exploit the predictions made by different variations of the above models Ensembling methodology entails engendering a predictive model by utilizing predictions from multiple models in order to improve Accuracy and F1 decrease variance and bias We experiment with variations of RoBERTalarge ALBERTxxlargev2 DistilRoBERTa and BERT and outline selected combinations in tabexperimentalresults We perform our experiments on the WNC dataset opensourced by the authors of BIBREF2 It consists of aligned pre and post neutralized sentences made by Wikipedia editors under the neutral point of view It contains 180k biased sentences and their neutral counterparts crawled from 423823 Wikipedia revisions between 2004 and 2019 We randomly shuffled these sentences and split this dataset into two parts in a 9010 TrainTest split and perform the evaluation on the heldout test dataset For all BERTbased models we use a learning rate of 2105 a maximum sequence length of 50 and a weight decay of 001 while finetuning the model We use FastTexts recently opensourced automatic hyperparameter optimization functionality while training the model For the BiLSTM baseline we use a dropout of 005 along with a recurrent dropout of 02 in two 64 unit sized stacked BiLSTMs using softmax activation layer as the final dense layer tabexperimentalresults shows the performance of different models on the WNC corpus evaluated on the following four metrics Precision Recall F1 and Accuracy Our proposed methodology the use of finetuned optimized BERT based models and BERTbased ensemble models outperform the baselines for all the metrics Among the optimized BERT based models RoBERTalarge outperforms all other nonensemble models and the baselines for all metrics It further achieves a maximum recall of 0681 for all the proposed models We note that DistillRoBERTa a distilled model performs competitively achieving 6969 accuracy and 0672 F1 score This observation shows that distilled pretrained models can replace their undistilled counterparts in a lowcomputing environment We further observe that ensemble models perform better than optimized BERTbased models and distilled pretrained models Our proposed ensemble comprising of RoBERTalarge ALBERTxxlargev2 DistilRoBERTa and BERT outperforms all the proposed models obtaining 0704 F1 score 0733 precision and 7161 Accuracy In this paper we investigated 
BERTbased architectures for sentence level subjective bias detection We perform our experiments on a general Wikipedia corpus consisting of more than 360k pre and post subjective bias neutralized sentences We found our proposed architectures to outperform the existing baselines significantly BERTbased ensemble consisting of RoBERTa ALBERT DistillRoBERTa and BERT led to the highest F1 and Accuracy In the future we would like to explore documentlevel detection of subjective bias multiword mitigation of the bias applications of detecting the bias in recommendation systems </s>
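A minimal sketch of the weighted-average ensembling step described above: class probabilities from several fine-tuned models are combined with scalar weights and the argmax is taken as the ensemble label. The weights and the toy probabilities are placeholders, since the paper does not report its exact weighting.

```python
import numpy as np

def ensemble_predict(prob_by_model, weights):
    # Weighted average of per-model class probabilities, then argmax per sentence.
    total = sum(weights[m] for m in prob_by_model)
    combined = sum(weights[m] * prob_by_model[m] for m in prob_by_model)
    return np.argmax(combined / total, axis=-1)

# Toy probabilities for 3 sentences x 2 classes (subjective / neutral), per model.
probs = {
    "roberta-large":  np.array([[0.9, 0.1], [0.40, 0.60], [0.2, 0.8]]),
    "albert-xxlarge": np.array([[0.8, 0.2], [0.55, 0.45], [0.3, 0.7]]),
    "distilroberta":  np.array([[0.7, 0.3], [0.45, 0.55], [0.4, 0.6]]),
}
weights = {"roberta-large": 0.5, "albert-xxlarge": 0.3, "distilroberta": 0.2}
print(ensemble_predict(probs, weights))  # -> [0 1 1]
```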
<s>[INST] Sentence-Level Fluency Evaluation: References Help, But Can Be Spared! [/INST] Producing sentences which are perceived as natural by a human addresseea property which we will denote as fluency throughout this paper is a crucial goal of all natural language generation NLG systems it makes interactions more natural avoids misunderstandings and overall leads to higher user satisfaction and user trust BIBREF0 Thus fluency evaluation is important eg during system development or for filtering unacceptable generations at application time However fluency evaluation of NLG systems constitutes a hard challenge systems are often not limited to reusing words from the input but can generate in an abstractive way Hence it is not guaranteed that a correct output will match any of a finite number of given references This results in difficulties for current referencebased evaluation especially of fluency causing wordoverlap metrics like ROUGE BIBREF1 to correlate only weakly with human judgments BIBREF2 As a result fluency evaluation of NLG is often done manually which is costly and timeconsuming Evaluating sentences on their fluency on the other hand is a linguistic ability of humans which has been the subject of a decadelong debate in cognitive science In particular the question has been raised whether the grammatical knowledge that underlies this ability is probabilistic or categorical in nature BIBREF3 BIBREF4 BIBREF5 Within this context lau2017grammaticality have recently shown that neural language models LMs can be used for modeling human ratings of acceptability Namely they found SLOR BIBREF6 sentence logprobability which is normalized by unigram logprobability and sentence lengthto correlate well with acceptability judgments at the sentence level However to the best of our knowledge these insights have so far gone disregarded by the natural language processing NLP community In this paper we investigate the practical implications of lau2017grammaticalitys findings for fluency evaluation of NLG using the task of automatic compression BIBREF7 BIBREF8 as an example cf Table 1 Specifically we test our hypothesis that SLOR should be a suitable metric for evaluation of compression fluency which i does not rely on references ii can naturally be applied at the sentence level in contrast to the system level and iii does not need human fluency annotations of any kind In particular the first aspect ie SLOR not needing references makes it a promising candidate for automatic evaluation Getting rid of human references has practical importance in a variety of settings eg if references are unavailable due to a lack of resources for annotation or if obtaining references is impracticable The latter would be the case for instance when filtering system outputs at application time We further introduce WPSLOR a novel WordPiece BIBREF9 based version of SLOR which drastically reduces model size and training time Our experiments show that both approaches correlate better with human judgments than traditional wordoverlap metrics even though the latter do rely on reference compressions Finally investigating the case of available references and how to incorporate them we combine WPSLOR and ROUGE to ROUGELM a novel referencebased metric and increase the correlation with human fluency ratings even further Acceptability judgments ie speakers judgments of the wellformedness of sentences have been the basis of much linguistics research BIBREF10 BIBREF11 a speakers intuition about a sentence is used to draw conclusions 
about a languages rules Commonly acceptability is used synonymously with grammaticality and speakers are in practice asked for grammaticality judgments or acceptability judgments interchangeably Strictly speaking however a sentence can be unacceptable even though it is grammatical a popular example is Chomskys phrase Colorless green ideas sleep furiously BIBREF3 In turn acceptable sentences can be ungrammatical eg in an informal context or in poems BIBREF12 Scientists linguists cognitive scientists psychologists and NLP researchers alike disagree about how to represent human linguistic abilities One subject of debates are acceptability judgments while for many acceptability is a binary condition on membership in a set of wellformed sentences BIBREF3 others assume that it is gradient in nature BIBREF13 BIBREF2 Tackling this research question lau2017grammaticality aimed at modeling human acceptability judgments automatically with the goal to gain insight into the nature of human perception of acceptability In particular they tried to answer the question Do humans judge acceptability on a gradient scale Their experiments showed a strong correlation between human judgments and normalized sentence logprobabilities under a variety of LMs for artificial data they had created by translating and backtranslating sentences with neural models While they tried different types of LMs best results were obtained for neural models namely recurrent neural networks RNNs In this work we investigate if approaches which have proven successful for modeling acceptability can be applied to the NLP problem of automatic fluency evaluation In this section we first describe SLOR and the intuition behind this score Then we introduce WordPieces before explaining how we combine the two SLOR assigns to a sentence S a score which consists of its logprobability under a given LM normalized by unigram logprobability and length

$$\text{SLOR}(S) = \frac{1}{|S|}\left(\ln p_M(S) - \ln p_u(S)\right) \quad \text{(Eq 8)}$$

where $p_M(S)$ is the probability assigned to the sentence under the LM The unigram probability $p_u(S)$ of the sentence is calculated as

$$p_u(S) = \prod_{t \in S} p(t) \quad \text{(Eq 9)}$$

with $p(t)$ being the unconditional probability of a token $t$ ie given no context The intuition behind subtracting unigram logprobabilities is that a token which is rare on its own in contrast to being rare at a given position in the sentence should not bring down the sentences rating The normalization by sentence length is necessary in order to not prefer shorter sentences over equally fluent longer ones Consider for instance the following pair of sentences

(i) He is a citizen of France
(ii) He is a citizen of Tuvalu (Eq 11)

Given that both sentences are of equal length and assuming that France appears more often in a given LM training set than Tuvalu the lengthnormalized logprobability of sentence i under the LM would most likely be higher than that of sentence ii However since both sentences are equally fluent we expect taking each tokens unigram probability into account to lead to a more suitable score for our purposes We calculate the probability of a sentence with a long shortterm memory LSTM hochreiter1997long LM ie a special type of RNN LM which has been trained on a large corpus More details on LSTM LMs can be found eg in sundermeyer2012lstm The unigram probabilities for SLOR are estimated using the same corpus Subword units like WordPieces BIBREF9 are getting increasingly important in NLP They constitute a compromise between characters and words On the one hand they yield a smaller vocabulary which reduces model size and training time and improve handling of rare words since those are partitioned into more frequent segments On the other hand they contain more information than characters WordPiece models are estimated using a datadriven approach which maximizes the LM likelihood of the training corpus as described in wu2016google and 6289079 We propose a novel version of SLOR by incorporating a LM which is trained on a corpus which has been split by a WordPiece model This leads to a smaller vocabulary resulting in a LM with less parameters which is faster to train around 12h compared to roughly 5 days for the wordbased version in our experiments We will refer to the wordbased SLOR as WordSLOR and to our newly proposed WordPiecebased version as WPSLOR Now we present our main experiment in which we assess the performances of WordSLOR and WPSLOR as fluency evaluation metrics We experiment on the compression dataset by toutanova2016dataset It contains single sentences and twosentence paragraphs from the Open American National Corpus OANC which belong to 4 genres newswire letters journal and nonfiction Gold references are manually created and the outputs of 4 compression systems ILP extractive NAMAS abstractive SEQ2SEQ extractive and T3 abstractive cf toutanova2016dataset for details for the test data are provided Each example has 3 to 5 independent human ratings for content and fluency We are interested in the latter which is rated on an ordinal scale from 1 disfluent through 3 fluent We experiment on the 2955 system outputs for the test split Average fluency scores per system are shown in Table 2 As can be seen ILP produces the best output In contrast NAMAS is the worst system for fluency In order to be able to judge the reliability of the human annotations we follow the procedure suggested by TACL732 and used by toutanova2016dataset and compute the quadratic weighted kappa BIBREF14 for the human fluency scores of the systemgenerated compressions as 0337 We train our LSTM LMs on the English Gigaword corpus BIBREF15 which consists of news data The hyperparameters of all LMs are tuned using perplexity on a heldout part of Gigaword since we expect LM perplexity and final evaluation performance of WordSLOR and respectively WPSLOR to correlate Our best networks consist of two layers with 512 hidden units each and are trained for 2000000 steps with a minibatch size of 128 For optimization we employ ADAM BIBREF16 Our first baseline is ROUGEL BIBREF1 since it is the most commonly used metric for compression tasks ROUGEL measures the similarity of two sentences based on their longest common subsequence Generated and reference compressions are tokenized and lowercased For multiple references we only make use of the one with the highest score for each example We compare to the best ngramoverlap metrics from toutanova2016dataset combinations of linguistic units bigrams LR2 and trigrams LR3 and scoring measures recall R and Fscore F With multiple references we consider the union of the sets of ngrams Again generated and reference compressions are tokenized and lowercased We further compare to the negative LM crossentropy ie the logprobability which is only normalized by sentence length The score of a sentence S is calculated as

$$\text{NCE}(S) = \tfrac{1}{|S|} \ln p_M(S) \quad \text{(Eq 22)}$$

with $p_M(S)$ being the probability assigned to the sentence by a LM We employ the same LMs as for SLOR ie LMs trained on words WordNCE and WordPieces WPNCE Our next baseline is perplexity which corresponds to the exponentiated crossentropy

$$\text{PPL}(S) = \exp(-\text{NCE}(S)) \quad \text{(Eq 24)}$$

Due to its popularity we also performed initial experiments with BLEU BIBREF17 Its correlation with human scores was so low that we do not consider it in our final experiments Following earlier work BIBREF2 we evaluate our metrics using Pearson correlation with human judgments It is defined as the covariance divided by the product of the standard deviations

$$\rho_{X,Y} = \frac{\text{cov}(X,Y)}{\sigma_X \sigma_Y} \quad \text{(Eq 28)}$$

Pearson cannot accurately judge a metrics performance for sentences of very similar quality ie in the extreme case of rating outputs of identical quality the correlation is either not defined or 0 caused by noise of the evaluation model Thus we additionally evaluate using mean squared error MSE which is defined as the squares of residuals after a linear transformation divided by the sample size

$$\text{MSE}(X,Y) = \min_{f} \frac{1}{|X|}\sum_{i=1}^{|X|} \left(f(x_i) - y_i\right)^2 \quad \text{(Eq 30)}$$

with $f$ being a linear function Note that since MSE is invariant to linear transformations of X but not of Y it is a nonsymmetric quasimetric We apply it with Y being the human ratings An additional advantage as compared to Pearson is that it has an interpretable meaning the expected error made by a given metric as compared to the human rating As shown in Table 3 WordSLOR and WPSLOR correlate best with human judgments WordSLOR respectively WPSLOR has a 0025 respectively 0008 higher Pearson correlation than the best wordoverlap metric ROUGELmult even though the latter requires multiple reference compressions Furthermore if we consider with ROUGELsingle a setting with a single given reference the distance to WordSLOR increases to 0048 for Pearson correlation Note that since having a single reference is very common this result is highly relevant for practical applications Considering MSE the top two metrics are still WordSLOR and WPSLOR with a 0008 and respectively 0002 lower error than the third best metric ROUGELmult Comparing WordSLOR and WPSLOR we find no significant differences 0017 for Pearson and 0006 for MSE However WPSLOR uses a more compact LM and hence has a shorter training time since the vocabulary is smaller 16000 vs 128000 tokens Next we find that WordNCE and WPNCE perform roughly on par with wordoverlap metrics This is interesting since they in contrast to traditional metrics do not require reference compressions However their correlation with human fluency judgments is strictly lower than that of their respective SLOR counterparts The difference between WordSLOR and WordNCE is bigger than that between WPSLOR and WPNCE This might be due to accounting for differences in
frequencies being more important for words than for WordPieces Both WordPPL and WPPPL clearly underperform as compared to all other metrics in our experiments The traditional wordoverlap metrics all perform similarly ROUGELmult and LR2Fmult are best and worst respectively Results are shown in Table 7 First we can see that using SVR line 1 to combine ROUGELmult and WPSLOR outperforms both individual scores lines 34 by a large margin This serves as a proof of concept the information contained in the two approaches is indeed complementary Next we consider the setting where only references and no annotated examples are available In contrast to SVR line 1 ROUGELM line 2 has only the same requirements as conventional wordoverlap metrics besides a large corpus for training the LM which is easy to obtain for most languages Thus it can be used in the same settings as other wordoverlap metrics Since ROUGELMan uninformed combinationperforms significantly better than both ROUGELmult and WPSLOR on their own it should be the metric of choice for evaluating fluency with given references The results per compression system cf Table 4 look different from the correlations in Table 3 Pearson and MSE are both lower This is due to the outputs of each given system being of comparable quality Therefore the datapoints are similar and thus easier to fit for the linear function used for MSE Pearson in contrast is lower due to its invariance to linear transformations of both variables Note that this effect is smallest for ILP which has uniformly distributed targets textVarY 035 vs textVarY 017 for SEQ2SEQ Comparing the metrics the two SLOR approaches perform best for SEQ2SEQ and T3 In particular they outperform the best wordoverlap metric baseline by 0244 and 0097 Pearson correlation as well as 0012 and 0012 MSE respectively Since T3 is an abstractive system we can conclude that WordSLOR and WPSLOR are applicable even for systems that are not limited to make use of a fixed repertoire of words For ILP and NAMAS wordoverlap metrics obtain best results The differences in performance however are with a maximum difference of 0072 for Pearson and ILP much smaller than for SEQ2SEQ Thus while the differences are significant wordoverlap metrics do not outperform our SLOR approaches by a wide margin Recall additionally that wordoverlap metrics rely on references being available while our proposed approaches do not require this Looking next at the correlations for all models but different domains cf Table 5 we first observe that the results across domains are similar ie we do not observe the same effect as in Subsection Analysis I Fluency Evaluation per Compression System This is due to the distributions of scores being uniform textVarY in 028 036 Next we focus on an important question How much does the performance of our SLORbased metrics depend on the domain given that the respective LMs are trained on Gigaword which consists of news data Comparing the evaluation performance for individual metrics we observe that except for letters WordSLOR and WPSLOR perform best across all domains they outperform the best wordoverlap metric by at least 0019 and at most 0051 Pearson correlation and at least 0004 and at most 0014 MSE The biggest difference in correlation is achieved for the journal domain Thus clearly even LMs which have been trained on outofdomain data obtain competitive performance for fluency evaluation However a domainspecific LM might additionally improve the metrics correlation with human judgments We leave a more 
detailed analysis of the importance of the training datas domain for future work ROUGE was shown to correlate well with ratings of a generated texts content or meaning at the sentence level BIBREF2 We further expect content and fluency ratings to be correlated In fact sometimes it is difficult to distinguish which one is problematic to illustrate this we show some extreme examplescompressions which got the highest fluency rating and the lowest possible content rating by at least one rater but the lowest fluency score and the highest content score by anotherin Table 6 We thus hypothesize that ROUGE should contain information about fluency which is complementary to SLOR and want to make use of references for fluency evaluation if available In this section we experiment with two referencebased metrics one trainable one and one that can be used without fluency annotations ie in the same settings as pure wordoverlap metrics First we assume a setting in which we have the following available i system outputs whose fluency is to be evaluated ii reference generations for evaluating system outputs iii a small set of system outputs with references which has been annotated for fluency by human raters and iv a large unlabeled corpus for training a LM Note that available fluency annotations are often uncommon in realworld scenarios the reason we use them is that they allow for a proof of concept In this setting we train scikits BIBREF18 support vector regression model SVR with the default parameters on predicting fluency given WPSLOR and ROUGELmult We use 500 of our total 2955 examples for each of training and development and the remaining 1955 for testing Second we simulate a setting in which we have only access to i system outputs which should be evaluated on fluency ii reference compressions and iii large amounts of unlabeled text In particular we assume to not have fluency ratings for system outputs which makes training a regression model impossible Note that this is the standard setting in which wordoverlap metrics are applied Under these conditions we propose to normalize both given scores by mean and variance and to simply add them up We call this new referencebased metric ROUGELM In order to make this second experiment comparable to the SVRbased one we use the same 1955 test examples Fluency evaluation is related to grammatical error detection BIBREF19 BIBREF20 BIBREF21 BIBREF22 and grammatical error correction BIBREF23 BIBREF24 BIBREF25 BIBREF26 BIBREF27 However it differs from those in several aspects most importantly it is concerned with the degree to which errors matter to humans Work on automatic fluency evaluation in NLP has been rare heilman2014predicting predicted the fluency which they called grammaticality of sentences written by English language learners In contrast to ours their approach is supervised stent2005evaluating and cahill2009correlating found only low correlation between automatic metrics and fluency ratings for systemgenerated English paraphrases and the output of a German surface realiser respectively Explicit fluency evaluation of NLG including compression and the related task of summarization has mostly been performed manually vadlapudikatragadda2010SRW used LMs for the evaluation of summarization fluency but their models were based on partofspeech tags which we do not require and they were nonneural Further they evaluated longer texts not single sentences like we do toutanova2016dataset compared 80 wordoverlap metrics for evaluating the content and fluency of 
compressions finding only low correlation with the latter However they did not propose an alternative evaluation We aim at closing this gap Automatic compression evaluation has mostly had a strong focus on content Hence wordoverlap metrics like ROUGE BIBREF1 have been widely used for compression evaluation However they have certain shortcomings eg they correlate best for extractive compression while we in contrast are interested in an approach which generalizes to abstractive systems Alternatives include success rate BIBREF28 simple accuracy BIBREF29 which is based on the edit distance between the generation and the reference or word accuracy BIBREF30 the equivalent for multiple references In the sense that we promote an explicit evaluation of fluency our work is in line with previous criticism of evaluating NLG tasks with a single score produced by wordoverlap metrics The need for better evaluation for machine translation MT was expressed eg by callison2006re who doubted the meaningfulness of BLEU and claimed that a higher BLEU score was neither a necessary precondition nor a proof of improved translation quality Similarly song2013bleu discussed BLEU being unreliable at the sentence or subsentence level in contrast to the systemlevel or for only one single reference This was supported by isabellecherryfoster2017EMNLP2017 who proposed a socalled challenge set approach as an alternative grahamEtAl2016COLING performed a largescale evaluation of humantargeted metrics for machine translation which can be seen as a compromise between human evaluation and fully automatic metrics They also found fully automatic metrics to correlate only weakly or moderately with human judgments bojar2016ten further confirmed that automatic MT evaluation methods do not perform well with a single reference The need of better metrics for MT has been addressed since 2008 in the WMT metrics shared task BIBREF31 BIBREF32 For unsupervised dialogue generation liuEtAl2016EMNLP20163 obtained close to no correlation with human judgements for BLEU ROUGE and METEOR They contributed this in a large part to the unrestrictedness of dialogue answers which makes it hard to match given references They emphasized that the community should move away from these metrics for dialogue generation tasks and develop metrics that correlate more strongly with human judgments elliottkeller2014P142 reported the same for BLEU and image caption generation duvsek2017referenceless suggested an RNN to evaluate NLG at the utterance level given only the input meaning representation We empirically confirmed the effectiveness of SLOR a LM score which accounts for the effects of sentence length and individual unigram probabilities as a metric for fluency evaluation of the NLG task of automatic compression at the sentence level We further introduced WPSLOR an adaptation of SLOR to WordPieces which reduced both model size and training time at a similar evaluation performance Our experiments showed that our proposed referenceless metrics correlate significantly better with fluency ratings for the outputs of compression systems than traditional wordoverlap metrics on a benchmark dataset Additionally they can be applied even in settings where no references are available or would be costly to obtain Finally for given references we proposed the referencebased metric ROUGELM which consists of a combination of WPSLOR and ROUGE Thus we were able to obtain an even more accurate fluency evaluation We would like to thank Sebastian Ebert and Samuel Bowman for their 
detailed and helpful feedback </s>
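A minimal sketch of how the SLOR score at the core of this paper (Eq 8) can be computed from any language model and unigram estimates. The log-probabilities below are invented toy numbers chosen to mirror the France/Tuvalu example, not outputs of the Gigaword-trained LSTM LMs used in the experiments.

```python
def slor(tokens, lm_logprob, unigram_logprob):
    # SLOR(S) = ( ln p_M(S) - sum_t ln p_u(t) ) / |S|
    # lm_logprob(tokens)     -> log-probability of the whole sentence under the LM
    # unigram_logprob(token) -> unconditional log-probability of a single token
    lp_sentence = lm_logprob(tokens)
    lp_unigrams = sum(unigram_logprob(t) for t in tokens)
    return (lp_sentence - lp_unigrams) / len(tokens)

# Toy numbers: the LM penalizes the rarer country name by roughly its extra
# unigram cost, so SLOR rates both sentences alike.
toy_unigram_logprobs = {"he": -3.0, "is": -3.2, "a": -2.8, "citizen": -9.0,
                        "of": -2.9, "france": -10.0, "tuvalu": -14.0}

def toy_lm_logprob(tokens):
    return -20.0 if tokens[-1] == "france" else -24.0

for sentence in (["he", "is", "a", "citizen", "of", "france"],
                 ["he", "is", "a", "citizen", "of", "tuvalu"]):
    score = slor(sentence, toy_lm_logprob, toy_unigram_logprobs.get)
    print(" ".join(sentence), "->", round(score, 3))
```

In this toy setting the LM's extra penalty for the rarer word is exactly offset by its unigram term, so both sentences receive the same fluency score, which is the behavior SLOR is designed to achieve.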
<s>[INST] An empirical study on the effectiveness of images in Multimodal Neural Machine Translation [/INST] In machine translation neural networks have attracted a lot of research attention Recently the attentionbased encoderdecoder framework BIBREF0 BIBREF1 has been largely adopted In this approach Recurrent Neural Networks RNNs map source sequences of words to target sequences The attention mechanism is learned to focus on different parts of the input sentence while decoding Attention mechanisms have shown to work with other modalities too like images where their are able to learn to attend the salient parts of an image for instance when generating text captions BIBREF2 For such applications Convolutional Neural Networks CNNs such as Deep Residual BIBREF3 have shown to work best to represent images Multimodal models of texts and images empower new applications such as visual question answering or multimodal caption translation Also the grounding of multiple modalities against each other may enable the model to have a better understanding of each modality individually such as in natural language understanding applications In the field of Machine Translation MT the efficient integration of multimodal information still remains a challenging task It requires combining diverse modality vector representations with each other These vector representations also called context vectors are computed in order the capture the most relevant information in a modality to output the best translation of a sentence To investigate the effectiveness of information obtained from images a multimodal machine translation shared task BIBREF4 has been addressed to the MT community The best results of NMT model were those of BIBREF5 huang2016attention who used LSTM fed with global visual features or multiple regional visual features followed by rescoring Recently BIBREF6 CalixtoLC17b proposed a doublyattentive decoder that outperformed this baseline with less data and without rescoring Our paper is structured as follows In section SECREF2 we briefly describe our NMT model as well as the conditional GRU activation used in the decoder We also explain how multimodalities can be implemented within this framework In the following sections SECREF3 and SECREF4 we detail three attention mechanisms and explain how we tweak them to work as well as possible with images Finally we report and analyze our results in section SECREF5 then conclude in section SECREF6 In this section we detail the neural machine translation architecture by BIBREF1 BahdanauCB14 implemented as an attentionbased encoderdecoder framework with recurrent neural networks SECREF2 We follow by explaining the conditional GRU layer SECREF8 the gating mechanism we chose for our RNN and how the model can be ported to a multimodal version SECREF13 Given a source sentence INLINEFORM0 the neural network directly models the conditional probability INLINEFORM1 of its translation INLINEFORM2 The network consists of one encoder and one decoder with one attention mechanism The encoder computes a representation INLINEFORM3 for each source sentence and a decoder generates one target word at a time and by decomposing the following conditional probability DISPLAYFORM0 Each source word INLINEFORM0 and target word INLINEFORM1 are a column index of the embedding matrix INLINEFORM2 and INLINEFORM3 The encoder is a bidirectional RNN with Gated Recurrent Unit GRU layers BIBREF7 BIBREF8 where a forward RNN INLINEFORM4 reads the input sequence as it is ordered from INLINEFORM5 to 
INLINEFORM6 and calculates a sequence of forward hidden states INLINEFORM7 A backward RNN INLINEFORM8 reads the sequence in the reverse order from INLINEFORM9 to INLINEFORM10 resulting in a sequence of backward hidden states INLINEFORM11 We obtain an annotation for each word INLINEFORM12 by concatenating the forward and backward hidden state INLINEFORM13 Each annotation INLINEFORM14 contains the summaries of both the preceding words and the following words The representation INLINEFORM15 for each source sentence is the sequence of annotations INLINEFORM16 The decoder is an RNN that uses a conditional GRU cGRU more details in SECREF8 with an attention mechanism to generate a word INLINEFORM0 at each timestep INLINEFORM1 The cGRU uses its previous hidden state INLINEFORM2 the whole sequence of source annotations INLINEFORM3 and the previously decoded symbol INLINEFORM4 in order to update its hidden state INLINEFORM5 DISPLAYFORM0 In the process the cGRU also computes a timedependent context vector INLINEFORM0 Both INLINEFORM1 and INLINEFORM2 are further used to decode the next symbol We use a deep output layer BIBREF9 to compute a vocabularysized vector DISPLAYFORM0 where INLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM3 are model parameters We can parameterize the probability of decoding each word INLINEFORM4 as DISPLAYFORM0 The initial state of the decoder INLINEFORM0 at timestep INLINEFORM1 is initialized by the following equation DISPLAYFORM0 where INLINEFORM0 is a feedforward network with one hidden layer The conditional GRU consists of two stacked GRU activations called INLINEFORM0 and INLINEFORM1 and an attention mechanism INLINEFORM2 in between called ATT in the footnote paper At each timestep INLINEFORM3 REC1 firstly computes a hidden state proposal INLINEFORM4 based on the previous hidden state INLINEFORM5 and the previously emitted word INLINEFORM6 DISPLAYFORM0 Then the attention mechanism computes INLINEFORM0 over the source sentence using the annotations sequence INLINEFORM1 and the intermediate hidden state proposal INLINEFORM2 DISPLAYFORM0 Finally the second recurrent cell INLINEFORM0 computes the hidden state INLINEFORM1 of the INLINEFORM2 by looking at the intermediate representation INLINEFORM3 and context vector INLINEFORM4 DISPLAYFORM0 Recently BIBREF6 CalixtoLC17b proposed a doubly attentive decoder referred as the MNMT model in the authors paper which can be seen as an expansion of the attentionbased NMT model proposed in the previous section Given a sequence of second a modality annotations INLINEFORM0 we also compute a new context vector based on the same intermediate hidden state proposal INLINEFORM1 DISPLAYFORM0 This new timedependent context vector is an additional input to a modified version of REC2 which now computes the final hidden state INLINEFORM0 using the intermediate hidden state proposal INLINEFORM1 and both timedependent context vectors INLINEFORM2 and INLINEFORM3 DISPLAYFORM0 The probabilities for the next target word from equation EQREF5 also takes into account the new context vector INLINEFORM0 DISPLAYFORM0 where INLINEFORM0 is a new trainable parameter In the field of multimodal NMT the second modality is usually an image computed into feature maps with the help of a CNN The annotations INLINEFORM0 are spatial features ie each annotation represents features for a specific region in the image We follow the same protocol for our experiments and describe it in section SECREF5 We evaluate three models of the image attention mechanism INLINEFORM0 of equation 
EQREF11 They have in common the fact that at each time step INLINEFORM1 of the decoding phase all approaches first take as input the annotation sequence INLINEFORM2 to derive a timedependent context vector that contain relevant information in the image to help predict the current target word INLINEFORM3 Even though these models differ in how the timedependent context vector is derived they share the same subsequent steps For each mechanism we propose two handpicked illustrations showing where the attention is placed in an image Soft attention has firstly been used for syntactic constituency parsing by BIBREF10 NIPS2015Vinyals but has been widely used for translation tasks ever since One should note that it slightly differs from BIBREF1 BahdanauCB14 where their attention takes as input the previous decoder hidden state instead of the current intermediate one as shown in equation EQREF11 This mechanism has also been successfully investigated for the task of image description generation BIBREF2 where a model generates an images description in natural language It has been used in multimodal translation as well BIBREF6 for which it constitutes a stateoftheart The idea of the soft attentional model is to consider all the annotations when deriving the context vector INLINEFORM0 It consists of a single feedforward network used to compute an expected alignment INLINEFORM1 between modality annotation INLINEFORM2 and the target word to be emitted at the current time step INLINEFORM3 The inputs are the modality annotations and the intermediate representation of REC1 INLINEFORM4 DISPLAYFORM0 The vector INLINEFORM0 has length INLINEFORM1 and its INLINEFORM2 th item contains a score of how much attention should be put on the INLINEFORM3 th annotation in order to output the best word at time INLINEFORM4 We compute normalized scores to create an attention mask INLINEFORM5 over annotations DISPLAYFORM0 Finally the modality timedependent context vector INLINEFORM0 is computed as a weighted sum over the annotation vectors equation In the above expressions INLINEFORM1 INLINEFORM2 and INLINEFORM3 are trained parameters This model is a stochastic and samplingbased process where at every timestep INLINEFORM0 we are making a hard choice to attend only one annotation This corresponds to one spatial location in the image Hard attention has previously been used in the context of object recognition BIBREF11 BIBREF12 and later extended to image description generation BIBREF2 In the context of multimodal NMT we can follow BIBREF2 icml2015xuc15 because both our models involve the same process on images The mechanism INLINEFORM0 is now a function that returns a sampled intermediate latent variables INLINEFORM1 based upon a multinouilli distribution parameterized by INLINEFORM2 DISPLAYFORM0 where INLINEFORM0 an indicator onehot variable which is set to 1 if the INLINEFORM1 th annotation out of INLINEFORM2 is the one used to compute the context vector INLINEFORM3 DISPLAYFORM0 Context vector INLINEFORM0 is now seen as the random variable of this distribution We define the variational lower bound INLINEFORM1 on the marginal log evidence INLINEFORM2 of observing the target sentence INLINEFORM3 given modality annotations INLINEFORM4 DISPLAYFORM0 The learning rules can be derived by taking derivatives of the above variational free energy INLINEFORM0 with respect to the model parameter INLINEFORM1 DISPLAYFORM0 In order to propagate a gradient through this process the summation in equation EQREF26 can then be approximated using 
Monte Carlo based sampling defined by equation EQREF24 DISPLAYFORM0 To reduce variance of the estimator in equation EQREF27 we use a moving average baseline estimated as an accumulated sum of the previous log likelihoods with exponential decay upon seeing the INLINEFORM0 th minibatch DISPLAYFORM0 In this section we propose a local attentional mechanism that chooses to focus only on a small subset of the image annotations Local Attention has been used for textbased translation BIBREF13 and is inspired by the selective attention model of BIBREF14 gregor15 for image generation Their approach allows the model to select an image patch of varying location and zoom Local attention uses instead the same zoom for all target positions and still achieved good performance This model can be seen as a tradeoff between the soft and hard attentional models The model picks one patch in the annotation sequence one spatial location and selectively focuses on a small window of context around it Even though an image cant be seen as a temporal sequence we still hope that the model finds points of interest and selects the useful information around it This approach has an advantage of being differentiable whereas the stochastic attention requires more complicated techniques such as variance reduction and reinforcement learning to train as shown in section SECREF22 The soft attention has the drawback to attend the whole image which can be difficult to learn especially because the number of annotations INLINEFORM0 is usually large presumably to keep a significant spatial granularity More formally at every decoding step INLINEFORM0 the model first generates an aligned position INLINEFORM1 Context vector INLINEFORM2 is derived as a weighted sum over the annotations within the window INLINEFORM3 where INLINEFORM4 is a fixed model parameter chosen empirically These selected annotations correspond to a squared region in the attention maps around INLINEFORM7 The attention mask INLINEFORM8 is of size INLINEFORM9 The model predicts INLINEFORM10 as an aligned position in the annotation sequence referred as Predictive alignment localm in the authors paper according to the following equation DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are both trainable model parameters and INLINEFORM2 is the annotation sequence length INLINEFORM3 Because of the sigmoid INLINEFORM4 We use equation EQREF18 and EQREF19 respectively to compute the expected alignment vector INLINEFORM5 and the attention mask INLINEFORM6 In addition a Gaussian distribution centered around INLINEFORM7 is placed on the alphas in order to favor annotations near INLINEFORM8 DISPLAYFORM0 where standard deviation INLINEFORM0 We obtain context vector INLINEFORM1 by following equation Three optimizations can be added to the attention mechanism regarding the image modality All lead to a better use of the image by the model and improved the translation scores overall At every decoding step INLINEFORM0 we compute a gating scalar INLINEFORM1 according to the previous decoder state INLINEFORM2 DISPLAYFORM0 It is then used to compute the timedependent image context vector DISPLAYFORM0 BIBREF2 icml2015xuc15 empirically found it to put more emphasis on the objects in the image descriptions generated with their model We also double the output size of trainable parameters INLINEFORM0 INLINEFORM1 and INLINEFORM2 in equation EQREF18 when it comes to compute the expected annotations over the image annotation sequence More formally given the image annotation sequence INLINEFORM3 the 
tree matrices are of size INLINEFORM4 INLINEFORM5 and INLINEFORM6 respectively We noticed a better coverage of the objects in the image by the alpha weights Lastly we use a grounding attention inspired by BIBREF15 delbrouck2017multimodal The mechanism merge each spatial location INLINEFORM0 in the annotation sequence INLINEFORM1 with the initial decoder state INLINEFORM2 obtained in equation EQREF7 with nonlinearity DISPLAYFORM0 where INLINEFORM0 is INLINEFORM1 function The new annotations go through a L2 normalization layer followed by two INLINEFORM2 convolutional layers of size INLINEFORM3 respectively to obtain INLINEFORM4 weights one for each spatial location We normalize the weights with a softmax to obtain a soft attention map INLINEFORM5 Each annotation INLINEFORM6 is then weighted according to its corresponding INLINEFORM7 DISPLAYFORM0 This method can be seen as the removal of unnecessary information in the image annotations according to the source sentence This attention is used on top of the others before decoding and is referred as grounded image in Table TABREF41 For this experiments on Multimodal Machine Translation we used the Multi30K dataset BIBREF17 which is an extended version of the Flickr30K Entities For each image one of the English descriptions was selected and manually translated into German by a professional translator As training and development data 29000 and 1014 triples are used respectively A test set of size 1000 is used for metrics evaluation All our models are build on top of the nematus framework BIBREF18 The encoder is a bidirectional RNN with GRU one 1024D singlelayer forward and one 1024D singlelayer backward RNN Word embeddings for source and target language are of 620D and trained jointly with the model Word embeddings and other nonrecurrent matrices are initialized by sampling from a Gaussian INLINEFORM0 recurrent matrices are random orthogonal and bias vectors are all initialized to zero To create the image annotations used by our decoder we used a ResNet50 pretrained on ImageNet and extracted the features of size INLINEFORM0 at its res4f layer BIBREF3 In our experiments our decoder operates on the flattened 196 INLINEFORM1 1024 ie INLINEFORM2 We also apply dropout with a probability of 05 on the embeddings on the hidden states in the bidirectional RNN in the encoder as well as in the decoder In the decoder we also apply dropout on the text annotations INLINEFORM3 the image features INLINEFORM4 on both modality context vector and on all components of the deep output layer before the readout operation We apply dropout using one same mask in all time steps BIBREF19 We also normalize and tokenize English and German descriptions using the Moses tokenizer scripts BIBREF20 We use the byte pair encoding algorithm on the train set to convert spaceseparated tokens into subwords BIBREF21 reducing our vocabulary size to 9226 and 14957 words for English and German respectively All variants of our attention model were trained with ADADELTA BIBREF22 with minibatches of size 80 for our monomodal textonly NMT model and 40 for our multimodal NMT We apply early stopping for model selection based on BLEU4 training is halted if no improvement on the development set is observed for more than 20 epochs We use the metrics BLEU4 BIBREF23 METEOR BIBREF24 and TER BIBREF25 to evaluate the quality of our models translations We notice a nice overall progress over BIBREF6 CalixtoLC17b multimodal baseline especially when using the stochastic attention With improvements of 151 
BLEU and 22 TER on both precisionoriented metrics the model shows a strong similarity of the ngrams of our candidate translations with respect to the references The more recalloriented metrics METEOR scores are roughly the same across our models which is expected because all attention mechanisms share the same subsequent step at every timestep INLINEFORM0 ie taking into account the attention weights of previous timestep INLINEFORM1 in order to compute the new intermediate hidden state proposal and therefore the new context vector INLINEFORM2 Again the largest improvement is given by the hard stochastic attention mechanism 04 METEOR because it is modeled as a decision process according to the previous choices this may reinforce the idea of recall We also remark interesting improvements when using the grounded mechanism especially for the soft attention The soft attention may benefit more of the grounded image because of the wide range of spatial locations it looks at especially compared to the stochastic attention This motivates us to dig into more complex grounding techniques in order to give the machine a deeper understanding of the modalities Note that even though our baseline NMT model is basically the same as BIBREF6 CalixtoLC17b our experiments results are slightly better This is probably due to the different use of dropout and subwords We also compared our results to BIBREF16 caglayan2016does because our multimodal models are nearly identical with the major exception of the gating scalar cfr section SECREF4 This motivated some of our qualitative analysis and hesitation towards the current architecture in the next section For spacesaving and ergonomic reasons we only discuss about the hard stochastic and soft attention the latter being a generalization of the local attention As we can see in Figure FIGREF44 the soft attention model is looking roughly at the same region of the image for every decoding step INLINEFORM0 Because the words hunddog waldforest or wegway in left image are objects they benefit from a high gating scalar As a matter of fact the attention mechanism has learned to detect the objects within a scene at every timestep whichever word we are decoding as shown in the right image and the gating scalar has learned to decide whether or not we have to look at the picture or more accurately whether or not we are translating an object Without this scalar the translation scores undergo a massive drop as seen in BIBREF16 caglayan2016does which means that the attention mechanisms dont really understand the more complex relationships between objects what is really happening in the scene Surprisingly the gating scalar happens to be really low in the stochastic attention mechanism a significant amount of sentences dont have a summed gating scalar INLINEFORM1 010 The model totally discards the image in the translation process It is also worth to mention that we use a ResNet trained on 128 million images for a classification tasks The features used by the attention mechanism are strongly objectoriented and the machine could miss important information for a multimodal translation task We believe that the robust architecture of both encoders INLINEFORM0 combined with a GRU layer and wordembeddings took care of the right translation for relationships between objects and timedependencies Yet we noticed a common misbehavior for all our multimodal models if the attention loose track of the objects in the picture and gets lost the model still takes it into account and somehow overrides the 
information brought by the textbased annotations The translation is then totally misled We illustrate with an example The monomodal translation has a sentencelevel BLEU of 82.16 whilst the soft attention and hard stochastic attention scores are 16.82 and 34.45 respectively Figure FIGREF47 shows the attention maps for both mechanisms Nevertheless one has to concede that the use of images indubitably helps the translation as shown in the score table We have tried different attention mechanisms and tweaks for the image modality We showed improvements and encouraging results overall on the Flickr30K Entities dataset Even though we identified some flaws of the current attention mechanisms we can conclude safely that images are a helpful resource for the machine in a translation task We look forward to trying out richer and more suitable features for multimodal translation ie dense captioning features Another interesting approach would be to use visually grounded word embeddings to capture visual notions of semantic relatedness This work was partly supported by the ChistEra project IGLU with contribution from the Belgian Fonds de la Recherche Scientifique FNRS contract no R501115F and by the FSO project VCYCLE with contribution from the Belgian Walloon Region contract no 1510501 </s>
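As an illustrative appendix to the multimodal attention model above, the following is a minimal numpy sketch of one decoding step of the local attention it describes: predictive alignment of the window position, Gaussian weighting of the attention weights around it, and the gating scalar applied to the image context vector. The 196 x 1024 annotation map and decoder size follow the text; the window half-width D, the toy scoring function and all random weights are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

rng = np.random.default_rng(0)

S, dim_a, dim_h = 196, 1024, 512   # annotation sequence length (14x14 ResNet map), annotation dim, decoder dim
D = 10                             # window half-width (hypothetical value, fixed empirically in the paper)
sigma = D / 2.0                    # common choice for the Gaussian standard deviation (assumption)

a = rng.normal(size=(S, dim_a))    # image annotation sequence a_1 .. a_S
h = rng.normal(size=dim_h)         # current decoder hidden state h_t
h_prev = rng.normal(size=dim_h)    # previous decoder state, used for the gating scalar

# Toy trainable parameters (random here; learned in practice)
W_p = rng.normal(size=(dim_h, dim_h)) * 0.01
v_p = rng.normal(size=dim_h) * 0.01
w_score = rng.normal(size=dim_a) * 0.01    # stand-in for the attention energy, which also uses h_t in the paper
w_beta = rng.normal(size=dim_h) * 0.01

# 1) Predictive alignment: an aligned position p_t in [0, S]
p_t = S * sigmoid(v_p @ np.tanh(W_p @ h))

# 2) Attend only to annotations inside the window [p_t - D, p_t + D]
lo = int(max(0, np.floor(p_t - D)))
hi = int(min(S, np.ceil(p_t + D)))
window = np.arange(lo, hi)
alpha = softmax(a[window] @ w_score)

# 3) Favor annotations near p_t with a Gaussian centred on p_t
alpha = alpha * np.exp(-((window - p_t) ** 2) / (2 * sigma ** 2))

# 4) Gating scalar from the previous decoder state, then the image context vector
beta = sigmoid(w_beta @ h_prev)
context = beta * (alpha @ a[window])
print(round(float(p_t), 2), round(float(beta), 3), context.shape)
```

Note how the Gaussian term re-weights the softmax output so that annotations far from the predicted position p_t contribute little; this keeps the whole step differentiable, which is the advantage over the stochastic hard attention discussed above.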
<s>[INST] Unsupervised Machine Commenting with Neural Variational Topic Model [/INST] Making article comments is a fundamental ability for an intelligent machine to understand the article and interact with humans It provides more challenges because commenting requires the abilities of comprehending the article summarizing the main ideas mining the opinions and generating the natural language Therefore machine commenting is an important problem faced in building an intelligent and interactive agent Machine commenting is also useful in improving the activeness of communities including online forums and news websites Article comments can provide extended information and external opinions for the readers to have a more comprehensive understanding of the article Therefore an article with more informative and interesting comments will attract more attention from readers Moreover machine commenting can kick off the discussion about an article or a topic which helps increase user engagement and interaction between the readers and authors Because of the advantage and importance described above more recent studies have focused on building a machine commenting system with neural models BIBREF0 One bottleneck of neural machine commenting models is the requirement of a large parallel dataset However the naturally paired commenting dataset is loosely paired Qin et al QinEA2018 were the first to propose the article commenting task and an articlecomment dataset The dataset is crawled from a news website and they sample 1610 articlecomment pairs to annotate the relevance score between articles and comments The relevance score ranges from 1 to 5 and we find that only 68 of the pairs have an average score greater than 4 It indicates that the naturally paired articlecomment dataset contains a lot of loose pairs which is a potential harm to the supervised models Besides most articles and comments are unpaired on the Internet For example a lot of articles do not have the corresponding comments on the news websites and the comments regarding the news are more likely to appear on social media like Twitter Since comments on social media are more various and recent it is important to exploit these unpaired data Another issue is that there is a semantic gap between articles and comments In machine translation and text summarization the target output mainly shares the same points with the source input However in article commenting the comment does not always tell the same thing as the corresponding article Table TABREF1 shows an example of an article and several corresponding comments The comments do not directly tell what happened in the news but talk about the underlying topics eg NBA Christmas Day games LeBron James However existing methods for machine commenting do not model the topics of articles which is a potential harm to the generated comments To this end we propose an unsupervised neural topic model to address both problems For the first problem we completely remove the need of parallel data and propose a novel unsupervised approach to train a machine commenting system relying on nothing but unpaired articles and comments For the second issue we bridge the articles and comments with their topics Our model is based on a retrievalbased commenting framework which uses the news as the query to retrieve the comments by the similarity of their topics The topic is represented with a variational topic which is trained in an unsupervised manner The contributions of this work are as follows In this section we 
highlight the research challenges of machine commenting and provide some solutions to deal with these challenges Here we first introduce the challenges of building a wellperformed machine commenting system The generative model such as the popular sequencetosequence model is a direct choice for supervised machine commenting One can use the title or the content of the article as the encoder input and the comments as the decoder output However we find that the mode collapse problem is severed with the sequencetosequence model Despite the input articles being various the outputs of the model are very similar The reason mainly comes from the contradiction between the complex pattern of generating comments and the limited parallel data In other natural language generation tasks such as machine translation and text summarization the target output of these tasks is strongly related to the input and most of the required information is involved in the input text However the comments are often weakly related to the input articles and part of the information in the comments is external Therefore it requires much more paired data for the supervised model to alleviate the mode collapse problem One article can have multiple correct comments and these comments can be very semantically different from each other However in the training set there is only a part of the correct comments so the other correct comments will be falsely regarded as the negative samples by the supervised model Therefore many interesting and informative comments will be discouraged or neglected because they are not paired with the articles in the training set There is a semantic gap between articles and comments In machine translation and text summarization the target output mainly shares the same points with the source input However in article commenting the comments often have some external information or even tell an opposite opinion from the articles Therefore it is difficult to automatically mine the relationship between articles and comments Facing the above challenges we provide three solutions to the problems Given a large set of candidate comments the retrieval model can select some comments by matching articles with comments Compared with the generative model the retrieval model can achieve more promising performance First the retrieval model is less likely to suffer from the mode collapse problem Second the generated comments are more predictable and controllable by changing the candidate set Third the retrieval model can be combined with the generative model to produce new comments by adding the outputs of generative models to the candidate set The unsupervised learning method is also important for machine commenting to alleviate the problems descried above Unsupervised learning allows the model to exploit more data which helps the model to learn more complex patterns of commenting and improves the generalization of the model Many comments provide some unique opinions but they do not have paired articles For example many interesting comments on social media eg Twitter are about recent news but require redundant work to match these comments with the corresponding news articles With the help of the unsupervised learning method the model can also learn to generate these interesting comments Additionally the unsupervised learning method does not require negative samples in the training stage so that it can alleviate the negative sampling bias Although there is semantic gap between the articles and the comments we find that 
most articles and comments share the same topics Therefore it is possible to bridge the semantic gap by modeling the topics of both articles and comments It is also similar to how humans generate comments Humans do not need to go through the whole article but are capable of making a comment after capturing the general topics We now introduce our proposed approach as an implementation of the solutions above We first give the definition and the denotation of the problem Then we introduce the retrievalbased commenting framework After that a neural variational topic model is introduced to model the topics of the comments and the articles Finally semisupervised training is used to combine the advantage of both supervised and unsupervised learning Given an article the retrievalbased method aims to retrieve a comment from a large pool of candidate comments The article consists of a title INLINEFORM0 and a body INLINEFORM1 The comment pool is formed from a large scale of candidate comments INLINEFORM2 where INLINEFORM3 is the number of the unique comments in the pool In this work we have 45 million human comments in the candidate set and the comments are various covering different topics from pets to sports The retrievalbased model should score the matching between the upcoming article and each comments and return the comments which is matched with the articles the most Therefore there are two main challenges in retrievalbased commenting One is how to evaluate the matching of the articles and comments The other is how to efficiently compute the matching scores because the number of comments in the pool is large To address both problems we select the dotproduct operation to compute matching scores More specifically the model first computes the representations of the article INLINEFORM0 and the comments INLINEFORM1 Then the score between article INLINEFORM2 and comment INLINEFORM3 is computed with the dotproduct operation DISPLAYFORM0 The dotproduct scoring method has proven a successful in a matching model BIBREF1 The problem of finding datapoints with the largest dotproduct values is called Maximum Inner Product Search MIPS and there are lots of solutions to improve the efficiency of solving this problem Therefore even when the number of candidate comments is very large the model can still find comments with the highest efficiency However the study of the MIPS is out of the discussion in this work We refer the readers to relevant articles for more details about the MIPS BIBREF2 BIBREF3 BIBREF4 BIBREF5 Another advantage of the dotproduct scoring method is that it does not require any extra parameters so it is more suitable as a part of the unsupervised model We obtain the representations of articles INLINEFORM0 and comments INLINEFORM1 with a neural variational topic model The neural variational topic model is based on the variational autoencoder framework so it can be trained in an unsupervised manner The model encodes the source text into a representation from which it reconstructs the text We concatenate the title and the body to represent the article In our model the representations of the article and the comment are obtained in the same way For simplicity we denote both the article and the comment as document Since the articles are often very long more than 200 words we represent the documents into bagofwords for saving both the time and memory cost We denote the bagofwords representation as INLINEFORM0 where INLINEFORM1 is the onehot representation of the word at INLINEFORM2 position and 
INLINEFORM3 is the number of words in the vocabulary The encoder INLINEFORM4 compresses the bagofwords representations INLINEFORM5 into topic representations INLINEFORM6 DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 INLINEFORM1 INLINEFORM2 and INLINEFORM3 are the trainable parameters Then the decoder INLINEFORM4 reconstructs the documents by independently generating each words in the bagofwords DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 is the number of words in the bagofwords and INLINEFORM1 is a trainable matrix to map the topic representation into the word distribution In order to model the topic information we use a Dirichlet prior rather than the standard Gaussian prior However it is difficult to develop an effective reparameterization function for the Dirichlet prior to train VAE Therefore following BIBREF6 we use the Laplace approximation BIBREF7 to Dirichlet prior INLINEFORM0 DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 denotes the logistic normal distribution INLINEFORM1 is the number of topics and INLINEFORM2 is a parameter vector Then the variational lower bound is written as DISPLAYFORM0 where the first term is the KLdivergence loss and the second term is the reconstruction loss The mean INLINEFORM0 and the variance INLINEFORM1 are computed as follows DISPLAYFORM0 DISPLAYFORM1 We use the INLINEFORM0 and INLINEFORM1 to generate the samples INLINEFORM2 by sampling INLINEFORM3 from which we reconstruct the input INLINEFORM4 At the training stage we train the neural variational topic model with the Eq EQREF22 At the testing stage we use INLINEFORM0 to compute the topic representations of the article INLINEFORM1 and the comment INLINEFORM2 In addition to the unsupervised training we explore a semisupervised training framework to combine the proposed unsupervised model and the supervised model In this scenario we have a paired dataset that contains articlecomment parallel contents INLINEFORM0 and an unpaired dataset that contains the documents articles or comments INLINEFORM1 The supervised model is trained on INLINEFORM2 so that we can learn the matching or mapping between articles and comments By sharing the encoder of the supervised model and the unsupervised model we can jointly train both the models with a joint objective function DISPLAYFORM0 where INLINEFORM0 is the loss function of the unsupervised learning Eq refloss INLINEFORM1 is the loss function of the supervised learning eg the crossentropy loss of Seq2Seq model and INLINEFORM2 is a hyperparameter to balance two parts of the loss function Hence the model is trained on both unpaired data INLINEFORM3 and paired data INLINEFORM4 We select a largescale Chinese dataset BIBREF0 with millions of real comments and a humanannotated test set to evaluate our model The dataset is collected from Tencent News which is one of the most popular Chinese websites for news and opinion articles The dataset consists of 198112 news articles Each piece of news contains a title the content of the article and a list of the users comments Following the previous work BIBREF0 we tokenize all text with the popular python package Jieba and filter out short articles with less than 30 words in content and those with less than 20 comments The dataset is split into trainingvalidationtest sets and they contain 19150250001610 pieces of news respectively The whole dataset has a vocabulary size of 1858452 The average lengths of the article titles and content are 15 and 554 Chinese words The average comment length is 17 words The hidden size of the model is 512 
and the batch size is 64 The number of topics INLINEFORM0 is 100 The weight INLINEFORM1 in Eq EQREF26 is 10 under the semisupervised setting We prune the vocabulary and only leave 30000 most frequent words in the vocabulary We train the model for 20 epochs with the Adam optimizing algorithms BIBREF8 In order to alleviate the KL vanishing problem we set the initial learning to INLINEFORM2 and use batch normalization BIBREF9 in each layer We also gradually increase the KL term from 0 to 1 after each epoch We compare our model with several unsupervised models and supervised models Unsupervised baseline models are as follows TFIDF Lexical NonNeural is an important unsupervised baseline We use the concatenation of the title and the body as the query to retrieve the candidate comment set by means of the similarity of the tfidf value The model is trained on unpaired articles and comments which is the same as our proposed model LDA Topic NonNeural is a popular unsupervised topic model which discovers the abstract topics that occur in a collection of documents We train the LDA with the articles and comments in the training set The model retrieves the comments by the similarity of the topic representations NVDM Lexical Neural is a VAEbased approach for document modeling BIBREF10 We compare our model with this baseline to demonstrate the effect of modeling topic The supervised baseline models are S2S Generative BIBREF11 is a supervised generative model based on the sequencetosequence network with the attention mechanism BIBREF12 The model uses the titles and the bodies of the articles as the encoder input and generates the comments with the decoder IR Retrieval BIBREF0 is a supervised retrievalbased model which trains a convolutional neural network CNN to take the articles and a comment as inputs and output the relevance score The positive instances for training are the pairs in the training set and the negative instances are randomly sampled using the negative sampling technique BIBREF13 For text generation automatically evaluate the quality of the generated text is an open problem In particular the comment of a piece of news can be various so it is intractable to find out all the possible references to be compared with the model outputs Inspired by the evaluation methods of dialogue models we formulate the evaluation as a ranking problem Given a piece of news and a set of candidate comments the comment model should return the rank of the candidate comments The candidate comment set consists of the following parts Correct The groundtruth comments of the corresponding news provided by the human Plausible The 50 most similar comments to the news We use the news as the query to retrieve the comments that appear in the training set based on the cosine similarity of their tfidf values We select the top 50 comments that are not the correct comments as the plausible comments Popular The 50 most popular comments from the dataset We count the frequency of each comments in the training set and select the 50 most frequent comments to form the popular comment set The popular comments are the general and meaningless comments such as Yes Great Thats right and Make Sense These comments are dull and do not carry any information so they are regarded as incorrect comments Random After selecting the correct plausible and popular comments we fill the candidate set with randomly selected comments from the training set so that there are 200 unique comments in the candidate set Following previous work we measure the rank 
in terms of the following metrics Recallk The proportion of human comments found in the topk recommendations Mean Rank MR The mean rank of the human comments Mean Reciprocal Rank MRR The mean reciprocal rank of the human comments The evaluation protocol is compatible with both retrieval models and generative models The retrieval model can directly rank the comments by assigning a score for each comment while the generative model can rank the candidates by the models loglikelihood score Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation We first compare our proposed model with other popular unsupervised methods including TFIDF LDA and NVDM TFIDF retrieves the comments by similarity of words rather than the semantic meaning so it achieves low scores on all the retrieval metrics The neural variational document model is based on the neural VAE framework It can capture the semantic information so it has better performance than the TFIDF model LDA models the topic information and captures the deeper relationship between the article and comments so it achieves improvement in all relevance metrics Finally our proposed model outperforms all these unsupervised methods mainly because the proposed model learns both the semantics and the topic information We also evaluate two popular supervised models ie seq2seq and IR Since the articles are very long we find either RNNbased or CNNbased encoders cannot hold all the words in the articles so it requires limiting the length of the input articles Therefore we use an MLPbased encoder which is the same as our model to encode the full length of articles In our preliminary experiments the MLPbased encoder with full length articles achieves better scores than the RNNCNNbased encoder with limited length articles It shows that the seq2seq model gets low scores on all relevant metrics mainly because of the mode collapse problem as described in Section Challenges Unlike seq2seq IR is based on a retrieval framework so it achieves much better performance Following previous work BIBREF0 we evaluate the models under the generative evaluation setting The retrievalbased models generate the comments by selecting a comment from the candidate set The candidate set contains the comments in the training set Unlike the retrieval evaluation the reference comments may not appear in the candidate set which is closer to realworld settings Generativebased models directly generate comments without a candidate set We compare the generated comments of either the retrievalbased models or the generative models with the five reference comments We select four popular metrics in text generation to compare the model outputs with the references BLEU BIBREF14 METEOR BIBREF15 ROUGE BIBREF16 CIDEr BIBREF17 Table TABREF32 shows the performance for our models and the baselines in generative evaluation Similar to the retrieval evaluation our proposed model outperforms the other unsupervised methods which are TFIDF NVDM and LDA in generative evaluation Still the supervised IR achieves better scores than the seq2seq model With the help of our proposed model both IR and S2S achieve an improvement under the semisupervised scenarios We analyze the performance of the proposed method under the semisupervised setting We train the supervised IR model with different numbers of paired data Figure FIGREF39 shows the curve blue of the recall1 score As expected the performance grows as the paired dataset becomes larger We further combine the supervised IR with our 
unsupervised model which is trained with full unpaired data 48M and different number of paired data from 50K to 48M It shows that IRProposed can outperform the supervised IR model given the same paired dataset It concludes that the proposed model can exploit the unpaired data to further improve the performance of the supervised model Although our proposed model can achieve better performance than previous models there are still remaining two questions why our model can outperform them and how to further improve the performance To address these queries we perform error analysis to analyze the error types of our model and the baseline models We select TFIDF S2S and IR as the representative baseline models We provide 200 unique comments as the candidate sets which consists of four types of comments as described in the above retrieval evaluation setting Correct Plausible Popular and Random We rank the candidate comment set with four models TFIDF S2S IR and ProposedIR and record the types of top1 comments Figure FIGREF40 shows the percentage of different types of top1 comments generated by each model It shows that TFIDF prefers to rank the plausible comments as the top1 comments mainly because it matches articles with the comments based on the similarity of the lexicon Therefore the plausible comments which are more similar in the lexicon are more likely to achieve higher scores than the correct comments It also shows that the S2S model is more likely to rank popular comments as the top1 comments The reason is the S2S model suffers from the mode collapse problem and data sparsity so it prefers short and general comments like Great or Thats right which appear frequently in the training set The correct comments often contain new information and different language models from the training set so they do not obtain a high score from S2S IR achieves better performance than TFIDF and S2S However it still suffers from the discrimination between the plausible comments and correct comments This is mainly because IR does not explicitly model the underlying topics Therefore the correct comments which are more relevant in topic with the articles get lower scores than the plausible comments which are more literally relevant with the articles With the help of our proposed model proposedIR achieves the best performance and achieves a better accuracy to discriminate the plausible comments and the correct comments Our proposed model incorporates the topic information so the correct comments which are more similar to the articles in topic obtain higher scores than the other types of comments According to the analysis of the error types of our model we still need to focus on avoiding predicting the plausible comments There are few studies regarding machine commenting Qin et al QinEA2018 is the first to propose the article commenting task and a dataset which is used to evaluate our model in this work More studies about the comments aim to automatically evaluate the quality of the comments Park et al ParkSDE16 propose a system called CommentIQ which assist the comment moderators in identifying high quality comments Napoles et al NapolesTPRP17 propose to discriminating engaging respectful and informative conversations They present a Yahoo news comment threads dataset and annotation scheme for the new task of identifying good online conversations More recently Kolhaatkar and Taboada KolhatkarT17 propose a model to classify the comments into constructive comments and nonconstructive comments In this work we are also 
inspired by the recent related work of natural language generation models BIBREF18 BIBREF19 Topic models BIBREF20 are among the most widely used models for learning unsupervised representations of text One of the most popular approaches for modeling the topics of the documents is the Latent Dirichlet Allocation BIBREF21 which assumes a discrete mixture distribution over topics is sampled from a Dirichlet prior shared by all documents In order to explore the space of different modeling assumptions some blackbox inference methods BIBREF22 BIBREF23 are proposed and applied to the topic models Kingma and Welling vae propose the Variational AutoEncoder VAE where the generative model and the variational posterior are based on neural networks VAE has recently been applied to modeling the representation and the topic of the documents Miao et al NVDM model the representation of the document with a VAEbased approach called the Neural Variational Document Model NVDM However the representation of NVDM is a vector generated from a Gaussian distribution so it is not very interpretable unlike the multinomial mixture in the standard LDA model To address this issue Srivastava and Sutton nvlda propose the NVLDA model that replaces the Gaussian prior with the Logistic Normal distribution to approximate the Dirichlet prior and bring the document vector into the multinomial space More recently Nallapati et al sengen present a variational autoencoder approach which models the posterior over the topic assignments to sentences using an RNN We explore a novel way to train a machine commenting model in an unsupervised manner According to the properties of the task we propose using the topics to bridge the semantic gap between articles and comments We introduce a variation topic model to represent the topics and match the articles and comments by the similarity of their topics Experiments show that our topicbased approach significantly outperforms previous lexiconbased models The model can also profit from paired corpora and achieves stateoftheart performance under semisupervised scenarios </s>
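As an illustrative appendix to the retrieval-based commenting model above, here is a minimal numpy sketch of the inference-time forward pass of the variational topic encoder and the dot-product matching used to rank candidate comments. The vocabulary is shrunk to 2000 words to keep the example light (the paper prunes it to 30000), the weights are random, and the Laplace-approximated Dirichlet prior and the KL training objective are omitted; this is a sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

V, K, H = 2000, 100, 512     # toy vocabulary size; number of topics and hidden size follow the paper

# Toy parameters of the inference network and decoder (random here; trained in practice)
W_h = rng.normal(size=(H, V)) * 0.01
W_mu = rng.normal(size=(K, H)) * 0.01
W_sig = rng.normal(size=(K, H)) * 0.01
W_dec = rng.normal(size=(V, K)) * 0.01

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def encode(bow, sample=False):
    """Map a bag-of-words vector to a topic representation (logistic-normal style)."""
    h = np.tanh(W_h @ bow)                  # assumed nonlinearity
    mu, log_sigma = W_mu @ h, W_sig @ h
    z = (mu + np.exp(log_sigma) * rng.normal(size=K)) if sample else mu
    return softmax(z)                       # topic proportions

def decode(theta):
    """Reconstruct the word distribution from the topic representation."""
    return softmax(W_dec @ theta)

def score(article_bow, comment_bows):
    """Dot-product matching between an article and candidate comments (the MIPS setting)."""
    a = encode(article_bow)
    C = np.stack([encode(c) for c in comment_bows])
    return C @ a                            # one score per candidate comment

# Toy usage: random 0/1 bag-of-words vectors for one article and five candidate comments
article = rng.integers(0, 2, size=V).astype(float)
comments = [rng.integers(0, 2, size=V).astype(float) for _ in range(5)]
scores = score(article, comments)
print("reconstruction shape:", decode(encode(article)).shape)
print("best comment index:", int(np.argmax(scores)))
```

Because scoring reduces to a single matrix-vector product over precomputed comment representations, the candidate pool can be indexed offline and queried with any maximum inner product search method at serving time.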
<s>[INST] Enriching BERT with Knowledge Graph Embeddings for Document Classification [/INST] With everincreasing amounts of data available there is an increase in the need to offer tooling to speed up processing and eventually making sense of this data Because fullyautomated tools to extract meaning from any given input to any desired level of detail have yet to be developed this task is still at least supervised and often partially resolved by humans we refer to these humans as knowledge workers Knowledge workers are professionals that have to go through large amounts of data and consolidate prepare and process it on a daily basis This data can originate from highly diverse portals and resources and depending on type or category the data needs to be channelled through specific downstream processing pipelines We aim to create a platform for curation technologies that can deal with such data from diverse sources and that provides natural language processing NLP pipelines tailored to particular content types and genres rendering this initial classification an important subtask In this paper we work with the dataset of the 2019 GermEval shared task on hierarchical text classification BIBREF0 and use the predefined set of labels to evaluate our approach to this classification task Deep neural language models have recently evolved to a successful method for representing text In particular Bidirectional Encoder Representations from Transformers BERT BIBREF1 outperformed previous stateoftheart methods by a large margin on various NLP tasks We adopt BERT for textbased classification and extend the model with additional metadata provided in the context of the shared task such as author publisher publishing date etc A key contribution of this paper is the inclusion of additional meta data using a stateoftheart approach for text processing Being a transfer learning approach it facilitates the task solution with external knowledge for a setup in which relatively little training data is available More precisely we enrich BERT as our pretrained text representation model with knowledge graph embeddings that are based on Wikidata BIBREF2 add metadata provided by the shared task organisers title authors publishing date etc and collect additional information on authors for this particular document classification task As we do not rely on textbased features alone but also utilize document metadata we consider this as a document classification problem The proposed approach is an attempt to solve this problem exemplary for single dataset provided by the organisers of the shared task A central challenge in work on genre classification is the definition of a both rigid for theoretical purposes and flexible for practical purposes mode of representation that is able to model various dimensions and characteristics of arbitrary text genres The size of the challenge can be illustrated by the observation that there is no clear agreement among researchers regarding actual genre labels or their scope and consistency There is a substantial amount of previous work on the definition of genre taxonomies genre ontologies or sets of labels BIBREF3 BIBREF4 BIBREF5 BIBREF6 BIBREF7 Since we work with the dataset provided by the organisers of the 2019 GermEval shared task we adopt their hierarchy of labels as our genre palette In the following we focus on related work more relevant to our contribution With regard to text and document classification BERT Bidirectional Encoder Representations from Transformers BIBREF1 is a 
pretrained embedding model that yields state of the art results in a wide span of NLP tasks such as question answering textual entailment and natural language inference learning BIBREF8 BIBREF9 are among the first to apply BERT to document classification Acknowledging challenges like incorporating syntactic information or predicting multiple labels they describe how they adapt BERT for the document classification task In general they introduce a fullyconnected layer over the final hidden state that contains one neuron each representing an input token and further optimize the model choosing softmax classifier parameters to weight the hidden state layer They report state of the art results in experiments based on four popular datasets An approach exploiting Hierarchical Attention Networks is presented by BIBREF10 Their model introduces a hierarchical structure to represent the hierarchical nature of a document BIBREF10 derive attention on the word and sentence level which makes the attention mechanisms react flexibly to long and short distant context information during the building of the document representations They test their approach on six large scale text classification problems and outperform previous methods substantially by increasing accuracy by about 3 to 4 percentage points BIBREF11 the organisers of the GermEval 2019 shared task on hierarchical text classification use shallow capsule networks reporting that these work well on structured data for example in the field of visual inference and outperform CNNs LSTMs and SVMs in this area They use the Web of Science WOS dataset and introduce a new realworld scenario dataset called Blurb Genre Collection BGC With regard to external resources to enrich the classification task BIBREF12 experiment with external knowledge graphs to enrich embedding information in order to ultimately improve language understanding They use structural knowledge represented by Wikidata entities and their relation to each other A mix of largescale textual corpora and knowledge graphs is used to further train language representation exploiting ERNIE BIBREF13 considering lexical syntactic and structural information BIBREF14 propose and evaluate an approach to improve text classification with knowledge from Wikipedia Based on a bag of words approach they derive a thesaurus of concepts from Wikipedia and use it for document expansion The resulting document representation improves the performance of an SVM classifier for predicting text categories Our experiments are modelled on the GermEval 2019 shared task and deal with the classification of books The dataset contains 20784 German books Each record has A title A list of authors The average number of authors per book is 113 with most books 14970 having a single author and one outlier with 28 authors A short descriptive text blurb with an average length of 95 words A URL pointing to a page on the publishers website An ISBN number The date of publication The books are labeled according to the hierarchy used by the German publisher Random House This taxonomy includes a mix of genre and topical categories It has eight toplevel genre categories 93 on the second level and 242 on the most detailed third level The eight toplevel labels are Ganzheitliches Bewusstsein holistic awarenessconsciousness Knste arts Sachbuch nonfiction Kinderbuch Jugendbuch children and young adults Ratgeber counseloradvisor Literatur Unterhaltung literature and entertainment Glaube Ethik faith and ethics Architektur Garten architecture and 
garden We refer to the shared task description for details on the lower levels of the ontology Note that we do not have access to any of the full texts Hence we use the blurbs as input for BERT Given the relatively short average length of the blurbs this considerably decreases the amount of data points available for a single book The shared task is divided into two subtask Subtask A is to classify a book using the information provided as explained above according to the toplevel of the taxonomy selecting one or more of the eight labels Subtask B is to classify a book according to the detailed taxonomy specifying labels on the second and third level of the taxonomy as well in total 343 labels This renders both subtasks a multilabel classification task As indicated in Section SECREF1 we base our experiments on BERT in order to explore if it can be successfully adopted to the task of book or document classification We use the pretrained models and enrich them with additional metadata and tune the models for both classification subtasks In addition to the metadata provided by the organisers of the shared task see Section SECREF3 we add the following features Number of authors Academic title Dr or Prof if found in author names 0 or 1 Number of words in title Number of words in blurb Length of longest word in blurb Mean word length in blurb Median word length in blurb Age in years after publication date Probability of first author being male or female based on the GenderbyName dataset Available for 87 of books in training set see Table TABREF21 The statistics length average etc regarding blurbs and titles are added in an attempt to make certain characteristics explicit to the classifier For example books labeled Kinderbuch Jugendbuch children and young adults have a title that is on average 547 words long whereas books labeled Knste arts on average have shorter titles of 346 words The binary feature for academic title is based on the assumption that academics are more likely to write nonfiction The gender feature is included to explore and potentially exploit whether or not there is a genderbias for particular genres Whereas one should not judge a book by its cover we argue that additional information on the author can support the classification task Authors often adhere to their specific style of writing and are likely to specialize in a specific genre To be precise we want to include author identity information which can be retrieved by selecting particular properties from for example the Wikidata knowledge graph such as date of birth nationality or other biographical features A drawback of this approach however is that one has to manually select and filter those properties that improve classification performance This is why instead we follow a more generic approach and utilize automatically generated graph embeddings as author representations Graph embedding methods create dense vector representations for each node such that distances between these vectors predict the occurrence of edges in the graph The node distance can be interpreted as topical similarity between the corresponding authors We rely on pretrained embeddings based on PyTorch BigGraph BIBREF15 The graph model is trained on the full Wikidata graph using a translation operator to represent relations Figure FIGREF23 visualizes the locality of the author embeddings To derive the author embeddings we look up Wikipedia articles that match with the author names and map the articles to the corresponding Wikidata items If a book has 
multiple authors the embedding of the first author for which an embedding is available is used Following this method we are able to retrieve embeddings for 72 of the books in the training and test set see Table TABREF21 Although the pretrained BERT language models are multilingual and therefore support German we rely on a BERT model that was exclusively pretrained on German text as published by the German company Deepset AI This model was trained from scratch on the German Wikipedia news articles and court decisions Deepset AI reports better performance for the German BERT models compared to the multilingual models on previous German shared tasks GermEval2018Fine and GermEval 2014 Our neural network architecture shown in Figure FIGREF31 resembles the original BERT model BIBREF1 and combines text and nontext features with a multilayer perceptron MLP The BERT architecture uses 12 hidden layers each layer consists of 768 units To derive contextualized representations from textual features the book title and blurb are concatenated and then fed through BERT To minimize the GPU memory consumption we limit the input length to 300 tokens which is shorter than BERTs hardcoded limit of 512 tokens Only 025 of blurbs in the training set consist of more than 300 words so this cutoff can be expected to have minor impact The nontext features are generated in a separate preprocessing step The metadata features are represented as a tendimensional vector two dimensions for gender see Section SECREF10 Author embedding vectors have a length of 200 see Section SECREF22 In the next step all three representations are concatenated and passed into a MLP with two layers 1024 units each and ReLu activation function During training the MLP is supposed to learn a nonlinear combination of its input representations Finally the output layer does the actual classification In the SoftMax output layer each unit corresponds to a class label For subtask A the output dimension is eight We treat subtask B as a standard multilabel classification problem i e we neglect any hierarchical information Accordingly the output layer for subtask B has 343 units When the value of an output unit is above a given threshold the corresponding label is predicted whereby thresholds are defined separately for each class The optimum was found by varying the threshold in steps of 01 in the interval from 0 to 1 Training is performed with batch size b16 dropout probability d01 learning rate eta 25 Adam optimizer and 5 training epochs These hyperparameters are the ones proposed by BIBREF1 for BERT finetuning We did not experiment with hyperparameter tuning ourselves except for optimizing the classification threshold for each class separately All experiments are run on a GeForce GTX 1080 Ti 11 GB whereby a single training epoch takes up to 10min If there is no single label for which prediction probability is above the classification threshold the most popular label Literatur Unterhaltung is used as prediction To compare against a relatively simple baseline we implemented a Logistic Regression classifier chain from scikitlearn BIBREF16 This baseline uses the text only and converts it to TFIDF vectors As with the BERT model it performs 8class multilabel classification for subtask A and 343class multilabel classification for subtask B ignoring the hierarchical aspect in the labels Table TABREF34 shows the results of our experiments As prescribed by the shared task the essential evaluation metric is the microaveraged F1score All scores reported in this 
paper are obtained using models that are trained on the training set and evaluated on the validation set For the final submission to the shared task competition the bestscoring setup is used and trained on the training and validation sets combined We are able to demonstrate that incorporating metadata features and author embeddings leads to better results for both subtasks With an F1score of 8720 for task A and 6470 for task B the setup using BERTGerman with metadata features and author embeddings 1 outperforms all other setups Looking at the precision score only BERTGerman with metadata features 2 but without author embeddings performs best In comparison to the baseline 7 our evaluation shows that deep transformer models like BERT considerably outperform the classical TFIDF approach also when the input is the same using the title and blurb only BERTGerman 4 and BERTMultilingual 5 are only using textbased features title and blurb whereby the text representations of the BERTlayers are directly fed into the classification layer To establish the information gain of author embeddings we train a linear classifier on author embeddings using this as the only feature The authoronly model 6 is exclusively evaluated on books for which author embeddings are available so the numbers are based on a slightly smaller validation set With an F1score of 6199 and 3213 for subtasks A and B respectively the author model yields the worst result However the information contained in the author embeddings help improve performance as the results of the bestperforming setup show When evaluating the best model 1 only on books for that author embeddings are available we find a further improvement with respect to F1 score task A from 8720 to 8781 task B 6470 to 6574 The best performing setup uses BERTGerman with metadata features and author embeddings In this setup the most data is made available to the model indicating that perhaps not surprisingly more data leads to better classification performance We expect that having access to the actual text of the book will further increase performance The average number of words per blurb is 95 and only 025 of books exceed our cutoff point of 300 words per blurb In addition the distribution of labeled books is imbalanced ie for many classes only a single digit number of training instances exist Fig FIGREF38 Thus this task can be considered a low resource scenario where including related data such as author embeddings and author identity features such as gender and academic title or making certain characteristics more explicit title and blurb length statistics helps Furthermore it should be noted that the blurbs do not provide summarylike abstracts of the book but instead act as teasers intended to persuade the reader to buy the book As reflected by the recent popularity of deep transformer models they considerably outperform the Logistic Regression baseline using TFIDF representation of the blurbs However for the simpler subtask A the performance difference between the baseline model and the multilingual BERT model is only six points while consuming only a fraction of BERTs computing resources The BERT model trained for German from scratch outperforms the multilingual BERT model by under three points for subtask A and over six points for subtask B confirming the findings reported by the creators of the BERTGerman models for earlier GermEval shared tasks While generally on par for subtask A for subtask B there is a relatively large discrepancy between precision and recall 
scores In all setups precision is considerably higher than recall We expect this to be down to the fact that for some of the 343 labels in subtask B there are very few instances This means that if the classifier predicts a certain label it is likely to be correct i e high precision but for many instances having lowfrequency labels this lowfrequency label is never predicted i e low recall As mentioned in Section SECREF30 we neglect the hierarchical nature of the labels and flatten the hierarchy with a depth of three levels to a single set of 343 labels for subtask B We expect this to have negative impact on performance because it allows a scenario in which for a particular book we predict a label from the first level and also a nonmatching label from the second level of the hierarchy The example Coenzym Q10 Table TABREF36 demonstrates this issue While the model correctly predicts the second level label Gesundheit Ernhrung health diet it misses the corresponding first level label Ratgeber advisor Given the models tendency to higher precision rather than recall in subtask B as a postprocessing step we may want to take the most detailed label on the third level of the hierarchy to be correct and manually fix the higher level labels accordingly We leave this for future work and note that we expect this to improve performance but it is hard to say by how much We hypothesize that an MLP with more and bigger layers could improve the classification performance However this would increase the number of parameters to be trained and thus requires more training data such as the books text itself or a summary of it In this paper we presented a way of enriching BERT with knowledge graph embeddings and additional metadata Exploiting the linked knowledge that underlies Wikidata improves performance for our task of document classification With this approach we improve the standard BERT models by up to four percentage points in accuracy Furthermore our results reveal that with taskspecific information such as author names and publication metadata improves the classification task essentially compared a textonly approach Especially when metadata feature engineering is less trivial adding additional taskspecific information from an external knowledge source such as Wikidata can help significantly The source code of our experiments and the trained models are publicly available Future work comprises the use of hierarchical information in a postprocessing step to refine the classification Another promising approach to tackle the low resource problem for task B would be to use label embeddings Many labels are similar and semantically related The relationships between labels can be utilized to model in a joint embedding space BIBREF17 However a severe challenge with regard to setting up label embeddings is the quite heterogeneous category system that can often be found in use online The Random House taxonomy see above includes category names i e labels that relate to several different dimensions including among others genre topic and function This work is done in the context of a larger project that develops a platform for curation technologies Under the umbrella of this project the classification of pieces of incoming text content according to an ontology is an important step that allows the routing of this content to particular specialized processing workflows including parameterising the included pipelines Depending on content type and genre it may make sense to apply OCR postprocessing for digitized books from 
centuries ago machine translation for content in languages unknown to the user information extraction or other particular and specialized procedures Constructing such a generic ontology for digital content is a challenging task and classification performance is heavily dependent on input data both in shape and amount and on the nature of the ontology to be used in the case of this paper the one predefined by the shared task organisers In the context of our project we continue to work towards a maximally generic content ontology and at the same time towards applied classification architectures such as the one presented in this paper This research is funded by the German Federal Ministry of Education and Research BMBF through the Unternehmen Region instrument Wachstumskern QURATOR grant no 03WKDA1A We would like to thank the anonymous reviewers for comments on an earlier version of this manuscript </s>
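As an illustrative appendix to the classification model above, the following is a minimal numpy sketch of its decision head: concatenating a text representation with the ten metadata features and the 200-dimensional author embedding, passing the result through two 1024-unit ReLU layers, and applying per-class thresholds to the softmax output with a fall-back to the most popular label. The random weights, the placeholder vector standing in for the BERT output, and the uniform threshold of 0.3 are assumptions for illustration; only the dimensions, label set and fall-back rule come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

TEXT_DIM, META_DIM, AUTHOR_DIM, HIDDEN = 768, 10, 200, 1024
LABELS = ["Ganzheitliches Bewusstsein", "Künste", "Sachbuch", "Kinderbuch & Jugendbuch",
          "Ratgeber", "Literatur & Unterhaltung", "Glaube & Ethik", "Architektur & Garten"]
FALLBACK = LABELS.index("Literatur & Unterhaltung")   # most popular label in the data
N_LABELS = len(LABELS)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy weights standing in for the trained MLP head
IN_DIM = TEXT_DIM + META_DIM + AUTHOR_DIM
W1 = rng.normal(size=(HIDDEN, IN_DIM)) * 0.01
W2 = rng.normal(size=(HIDDEN, HIDDEN)) * 0.01
W_out = rng.normal(size=(N_LABELS, HIDDEN)) * 0.01

def predict(text_vec, meta_vec, author_vec, thresholds):
    """Concatenate the three representations, run the 2-layer MLP and apply per-class thresholds."""
    x = np.concatenate([text_vec, meta_vec, author_vec])
    h = relu(W2 @ relu(W1 @ x))
    probs = softmax(W_out @ h)
    labels = [LABELS[i] for i, p in enumerate(probs) if p > thresholds[i]]
    return labels or [LABELS[FALLBACK]]     # fall back to the most popular label if nothing fires

# Toy usage with random stand-in vectors
text_vec = rng.normal(size=TEXT_DIM)        # e.g. pooled BERT representation of title + blurb
meta_vec = rng.normal(size=META_DIM)        # author count, title/blurb statistics, gender, ...
author_vec = rng.normal(size=AUTHOR_DIM)    # Wikidata graph embedding of the first author
thresholds = np.full(N_LABELS, 0.3)         # tuned separately per class in steps of 0.1 in the paper
print(predict(text_vec, meta_vec, author_vec, thresholds))
```

For subtask B the same head would simply have 343 output units, with the hierarchy ignored during prediction as described above.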
<s>[INST] Diachronic Topics in New High German Poetry [/INST] The Digital Library in the TextGrid Repository represents an extensive collection of German texts in digital form BIBREF3 It was mined from httpzenoorg and covers a time period from the mid 16th century up to the first decades of the 20th century It contains many important texts that can be considered as part of the literary canon even though it is far from complete eg it contains only half of Rilkes work We find that around 51k texts are annotated with the label verse TGRIDV not distinguishing between lyric verse and epic verse However the average length of these texts is around 150 tokens dismissing most epic verse tales Also the poems are distributed over 229 authors where the average author contributed 240 poems median 131 poems A drawback of TGRIDV is that it contains a noticeable amount of French Dutch and Latin over 400 texts To constrain our dataset to German we filter foreign language material with a stopword list as training a dedicated language identification classifier is far beyond the scope of this work We approach diachronic variation of poetry from two perspectives First as a distant reading task to visualize the development of clearly interpretable topics over time Second as a downstream task ie a supervised machine learning task to determine the year or timeslot of publication for a given poem We infer topic distributions over documents as features and pit them against a simple style baseline We use the implementation of LDA as it is provided in gensim BIBREF4 LDA assumes that a particular document contains a mixture of a few salient topics where words are semantically related We transform our documents of wordforms to a bag of words representation filter stopwords ie function words set the desired number of topics to 100 and train for 50 epochs to attain a reasonable distinctness of topics We choose 100 topics rather than a lower number that might be more straightforward to interpret as we want to later use these topics as features for downstream tasks We find that wordforms instead of lemmas are more useful for poetry topic models as these capture style features rhyme orthographic variations hertz instead of herz and generally offer more interpretable results We retrieve the most important likely words for all 100 topics and interpret these sorted word lists as aggregated topics eg topic 27 figure 2 contains Tugend virtue Kunst art Ruhm fame Geist spirit Verstand mind and Lob praise This topic as a whole describes the concept of artistic virtue In certain clusters topics we find poetic residuals such that rhyme words often cluster together as they stand in proximity eg topic 52 with Mund mouth Grund cause ground rund round To discover trends of topics over time we bin our documents into time slots of 25 years width each See figure 1 for a plot of the number of documents per bin The chosen binning slots offer enough documents per slot for our experiments To visualize trends of singular topics over time we aggregate all documents d in slot s add the probabilities of topic t given d and divide by the number of all d in s This gives us the average probability of a topic per timeslot We then plot the trajectories for each single topic See figures 2 to 6 for a selection of interpretable topic trends Please note that the scaling on the yaxis differs for each topic as some topics are more pronounced in the whole dataset overall Some topic plots are already very revealing The topic artistic virtue figure 2 left
shows a sharp peak around 1700-1750 outlining the period of Enlightenment Several topics indicate Romanticism such as flowers figure 2 right song figure 3 left or dust ghosts depths not shown The period of Vormärz or Young Germany is quite clear with the topic German Nation figure 3 right It is however hardly distinguishable from romantic topics We find that the topics Beautiful Girls figure 4 left and Life Death figure 4 right are always quite present over time while Girls is more pronounced in Romanticism and Death in Barock We find that the topic Fire figure 5 left is a fairly modern concept that steadily rises into modernity possibly because of the trope love is fire Next to it the topic Family figure 5 right shows wild fluctuation over time Finally figure 6 shows topics that are most informative for the downstream classification task Topic 11 World Power Time left is very clearly a Barock topic ending at 1750 while topic 19 Heaven Depth Silence is a topic that rises from Romanticism into Modernity To test whether topic models can be used for dating poetry or attributing authorship we perform supervised classification experiments with Random Forest Ensemble classifiers We find that we obtain better results by training and testing on stanzas instead of full poems as we have more data available Also we use 50 year slots instead of 25 to ease the task For each document we determine a class label for a time slot The slot 1575-1624 receives the label 0 the slot 1625-1674 the label 1 etc In total we have 7 classes or time slots As a baseline we implement rather straightforward style features such as line length poem length in tokens syllables and lines cadence the number of syllables of the last word in a line soundscape the ratio of closed to open syllables see BIBREF5 and a proxy for metre the number of syllables of the first word in the line We split the data randomly 70/30 for training/testing where a 50/50 split shows 5 points worse performance We then train Random Forest Ensemble classifiers and perform a grid search over their parameters to determine the best classifier Please note that our class sizes are quite imbalanced The Style baseline achieves an Accuracy of 83 LDA features 89 and a combination of the two gets 90 However training on full poems reduces this to 42-52 The most informative features by information gain are Topic11 0.67 Topic 37 0.55 Syllables Per Line 0.46 Length of poem in syllables 0.31 Topic19 0.29 Topic98 0.25 Topic27 virtue 0.23 and Soundscape 0.23 For authorship attribution we also use a 70/30 random traintest split and use the author name as class label We only choose the most frequent 180 authors We find that training on stanzas gives us 71 Accuracy but when trained on full poems we only get 13 Accuracy It should be further investigated whether this is only because of a surplus of data We have shown the viability of Latent Dirichlet Allocation for a visualization of topic trends the evolution of what people talk about in poetry While most topics are easily interpretable and show a clear trend others are quite noisy For an exploratory experiment the classification into time slots and the author attribution are very promising however far from perfect It should be investigated whether using stanzas instead of whole poems only improves results because of more available data Also it needs to be determined if better topic models can deliver a better baseline for diachronic change in poetry and if better style features will outperform semantics Finally only selecting clear trending and peaking topics through
covariance might further improve the results </s>
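To make the pipeline described above concrete, the following small gensim and scikit-learn sketch trains an LDA model on bag-of-words stanzas, uses the per-document topic distribution as features, and fits a Random Forest for the time-slot classification. The toy corpus, the reduced topic count, and the classifier settings are placeholders rather than the authors' setup; only the 50 training passes mirror the epochs mentioned above.

```python
from gensim import corpora, models
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# toy stanzas; the real corpus is the TextGrid verse collection
stanzas = [["herz", "schmerz", "liebe"], ["krieg", "macht", "welt"],
           ["blume", "duft", "wiese"], ["nacht", "stern", "himmel"]] * 10
labels = [0, 1, 2, 3] * 10   # stand-ins for 50-year time-slot classes

dictionary = corpora.Dictionary(stanzas)
bow = [dictionary.doc2bow(s) for s in stanzas]

# the paper uses num_topics=100; a handful of topics suffices for this toy corpus
lda = models.LdaModel(bow, id2word=dictionary, num_topics=5, passes=50)

def topic_features(doc_bow, k=5):
    """Dense topic-probability vector for one document."""
    dense = [0.0] * k
    for topic_id, prob in lda.get_document_topics(doc_bow, minimum_probability=0.0):
        dense[topic_id] = float(prob)
    return dense

X = [topic_features(d) for d in bow]
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3)
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```

In the paper the topic features are further combined with the style baseline (line length, cadence, soundscape, and so on) by simply appending those values to each feature vector before fitting the classifier.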
<s>[INST] Important Attribute Identification in Knowledge Graph [/INST] Knowledge graphKG has been proposed for several years and its most prominent application is in web search for example Google search triggers a certain entity card when a users query matches or mentions an entity based on some statistical model The core potential of a knowledge graph is about its capability of reasoning and inferring and we have not seen revolutionary breakthrough in such areas yet One main obstacle is obviously the lack of sufficient knowledge graph data including entities entities descriptions entities attributes and relationship between entities A full functional knowledge graph supporting general purposed reasoning and inference might still require long years of the communitys innovation and hardworking On the other hand many less demanding applications have great potential benefiting from the availability of information from the knowledge graph such as query understanding and document understanding in information retrievalsearch engines simple inference in question answering systems and easy reasoning in domainlimited decision support tools Not only academy but also industry companies have been heavily investing in knowledge graphs such as Googles knowledge graph Amazons product graph Facebooks Graph API IBMs Watson and Microsofts Satori etc In the existing knowledge graph such as Wikidata and DBpedia usually attributes do not have order or priorities and we dont know which attributes are more important and of more interest to users Such importance score of attributes is a vital piece of information in many applications of knowledge graph The most important application is the triggered entity card in search engine when a customers query gets hit for an entity An entity usually has a large amount of attributes but an entity card has limited space and can only show the most significant information attribute importances presence can make the displaying of an entity card easy to implement Attribute importance also has great potential of playing a significant role in search engine how to decide the matching score between the query and attribute values If the query matches a very important attribute and the relevance contribution from such a match should be higher than matching an ignorable attribute Another application is in ecommerce communications and one buyer initiates a communication cycle with a seller by sending a product enquiry Writing the enquiry on a mobile phone is inconvenient and automatic composing assistance has great potential of improving customer experience by alleviating the writing burden In the product enquiry customers need to specify their requirements and ask questions about products and their requirements and questions are usually about the most important attributes of the products If we can identify out important attributes of products we can help customers to draft the enquiry automatically to reduce their input time Many proposed approaches formulate the entity attribute ranking problem as a post processing step of automated attributevalue extraction In BIBREF0 BIBREF1 BIBREF2 Pasca et al firstly extract potential classattribute pairs using linguistically motivated patterns from unstructured text including query logs and query sessions and then score the attributes using the Bayes model In BIBREF3 Rahul Rai proposed to identify product attributes from customer online reviews using partofspeechPOS tagging patterns and to evaluate their importance with several different 
frequency metrics In BIBREF4 Lee et al developed a system to extract conceptattribute pairs from multiple data sources such as Probase general web documents query logs and external knowledge base and aggregate the weights from different sources into one consistent typicality score using a Ranking SVM model Those approaches typically suffer from the poor quality of the pattern rules and the ranking process is used to identify relatively more precise attributes from all attribute candidates As for an already existing knowledge graph there is plenty of work in literature dealing with ranking entities by relevance without or with a query In BIBREF5 Li et al introduced the OntoRank algorithm for ranking the importance of semantic web objects at three levels of granularity document terms and RDF graphs The algorithm is based on the rational surfer model successfully used in the Swoogle semantic web search engine In BIBREF6 Hogan et al presented an approach that adapted the wellknown PageRankHITS algorithms to semantic web data which took advantage of property values to rank entities In BIBREF7 BIBREF8 authors also focused on ranking entities sorting the semantic web resources based on importance relevance and query length and aggregating the features together with an overall ranking model Just a few works were designated to specifically address the problem of computing attribute rankings in a given Knowledge Graph Ibminer BIBREF9 introduced a tool for infoboxalias of an entity card template suggestion which collected attributes from different sources and then sorted them by popularity based on their cooccurrences in the dataset In BIBREF10 using the structured knowledge base intermediate features were computed including the importance or popularity of each entity type IDF computation for each attribute on a global basis IDF computation for entity types etc and then the features were aggregated to train a classifier Also a similar approach in BIBREF11 was designed with more features extracted from GoogleSuggestChars data In BIBREF12 Ali et al introduced a new set of features that utilizes semantic information about entities as well as information from topranked documents from a general search engine In order to experiment their approach they collected a dataset by exploiting Wikipedia infoboxes whose ordering of attributes reflect the collaborative effort of a large community of users which might not be accurate There have been broad researches on entity detection relationship extraction and also missing relationship prediction For example BIBREF13 BIBREF14 and BIBREF15 explained how to construct a knowledge graph and how to perform representation learning on knowledge graphs Some research has been performed on attribute extraction such as BIBREF16 and BIBREF4 the latter one is quite special that it also simultaneously computes the attribute importance As for modeling attribute importance for an existing knowledge graph which has completed attribute extractions we found only a few existing research all of which used simple cooccurrences to rank entity attributes In reality many knowledge graphs do not contain attribute importance information for example in the most famous Wikidata a large amount of entities have many attributes and it is difficult to know which attributes are significant and deserve more attention In this research we focus on identifying important attributes in existing knowledge graphs Specifically we propose a new method of using extra user generated data source for evaluating 
the attribute importance and we use the recently proposed stateoftheart wordsubword embedding techniques to match the external data with the attribute definitions and values from entities in knowledge graphs And then we use the statistics obtained from the matching to compare the attribute importance Our method has general extensibility to any knowledge graph without attribute importance When there is a possibility of finding an external textual data source our proposed method will work even if the external data does not exactly match the attribute textual data since the vector embedding performs semantic matching and does not require exact string matching The remainder of the paper is organized as follows Section SECREF2 explains our proposed method in detail including what kind of external data is required how to process the external data how to perform the semantic matching and how to rank the attributes by statistics Section SECREF3 introduces our experimentation including the experimental setup the data and the experimental results compared to other methods Section SECREF3 also briefly introduces our real world application scenario in ecommerce communication Section SECREF4 draws the conclusion from our experimentation and analysis and also points out promising future research directions In this section we will introduce our proposed method in detail We use our application scenario to explain the logic behind the method but the scope is not limited to our use case and it is possible to extend to any existing knowledge graph without attribute importance information Alibabacom is currently the worlds largest crossborder business to business B2B ecommerce platform and it supports 17 languages for customers from all over the world On the website English is the dominant language and accounts for around 50 percent of the traffic The website has already accumulated a very large knowledge graph of products the entity here is the product or the product category and every entity has lots of information such as the entity name images and many attributes without ordering information The entities are also connected by a taxonomy structure and similar products usually belong to the same categorysubcategory Since B2B procurement usually involves a large amount of money the business will be a long process beginning with a product enquiry Generally speaking when customers are interested in some product they will start a communication cycle with a seller by sending a product enquiry to the seller In the product enquiry customers will specify their requirements and ask questions about the product Their requirements and questions usually refer to the most important attributes of the product Fig FIGREF5 shows an enquiry example Alibabacom has accumulated tens of millions of product enquiries and we would like to leverage this information in combination with the product knowledge graph we have to figure out the most important attributes for each category of products In our application scenario the product knowledge graph is the existing knowledge graph and the enquiry data is the external textual data source From now on we will use our application scenario to explain the details of our proposed algorithm We propose an unsupervised learning framework for extracting important product attributes from product enquiries By calculating the semantic similarity between each enquiry sentence and each attribute of the product to which the enquiry corresponds we identify the
product attributes that the customer cares about most The attributes described in the enquiry may contain attribute names or attribute values or other expressions for example either the word color or a color instance word purple is mentioned Therefore when calculating the semantic similarity between enquiry sentences and product attributes we need both attribute names and attribute values The same as any other knowledge graph the product attributes in our knowledge graph we use contain noises and mistakes We need to clean and normalize the attribute data before consuming it We will introduce the detail of our data cleaning process in Section SECREF14 FastText is a library created by the Facebook Research for efficient learning of word representations and sentence classification Here we just use the word representation functionality of it FastText models morphology by considering subword units and representing words by a sum of its character ngrams BIBREF17 In the original model the authors choose to use the binary logistic loss and the loss for a single instance is written as below INLINEFORM0 By denoting the logistic loss function INLINEFORM0 the loss over a sentence is INLINEFORM1 The scoring function between a word INLINEFORM0 and a context word INLINEFORM1 is INLINEFORM2 In the above functions INLINEFORM0 is a set of negative examples sampled from the vocabulary INLINEFORM1 is the set of indices of words surrounding word INLINEFORM2 INLINEFORM3 is the set of ngrams appearing in word INLINEFORM4 INLINEFORM5 is the size of the dictionary we have for ngrams INLINEFORM6 is a vector representation to each ngram INLINEFORM7 Compared with word2vec or glove FastText has following advantages It is able to cover rare words and outofvocabularyOOV words Since the basic modeling units in FastText are ngrams and both rare words and OOV ones can obtain efficient word representations from their composing ngrams Word2vec and glove both fail to provide accurate vector representations for these words In our application the training data is written by end customers and there are many misspellings which easily become OOV words Character ngrams embeddings tend to perform superior to word2vec and glove on smaller datasets FastText is more efficient and its training is relatively fast In this section how to compute the matching between an enquiry sentence and a product attribute is explained in detail Our explanation here is for a certain product category and other categories are the same As you can see in Fig FIGREF12 each sentence is compared with each attribute of a product category that the product belongs to We now get a score between a sentence INLINEFORM0 and an attribute INLINEFORM1 INLINEFORM2 INLINEFORM3 where INLINEFORM0 is all the possible values for this INLINEFORM1 INLINEFORM2 is the word vector for INLINEFORM3 According to this formula we can get top two attributes whose scores are above the threshold INLINEFORM4 for each sentence We choose two attributes instead of one because there may be more than one attribute for each sentence In addition some sentences are greetings or selfintroduction and do not contain the attribute information of the product so we require that the score to be higher than a certain threshold For our knowledge graph data entityproduct attributes can be roughly divided into clusters of transaction order specific ones and product specific ones in this paper we choose the product specific ones for further study We also need to point out that we only focus on the recommended 
communication language on the Alibabacom platform which is English To construct the evaluation dataset top 14 categories are first chosen based on their business promotion features and 3 millions typical products under each category were then chosen to form the attribute candidates After preprocessing and basic filtering top product specific attributes from the 14 different categories are chosen to be manually labeled by our annotators For each category annotators each are asked to choose at most 10 important attributes from buyers perspective After all annotators complete their annotations attributes are then sorted according to the summed votes In the end 111 important attributes from the 14 categories are kept for final evaluation Outside of the evaluation explained in this paper we actually have performed the matching on more than 4000 catetories covering more than 100 million products and more than 20 million enquires Due to limited annotation resources we can only sample a small numbered categories14 here to evaluate the proposed algorithm here The product enquiries and attributes data preprocessing is shown in Algorithm 1 algorithmAlgorithm Data Preprocess Algorithm 1 INLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM3 INLINEFORM4 INLINEFORM5 Invalid INLINEFORM6 filter INLINEFORM7 Split INLINEFORM8 to sentences sentence INLINEFORM9 in INLINEFORM10 INLINEFORM11 INLINEFORM12 return INLINEFORM13 Firstly for every product enquiry we convert the original html textual data into the plain text Secondly we filter out the useless enquires such as nonEnglish enquires and spams The regular expressions and spam detection are used to detect nonEnglish enquiries and spams respectively Thirdly we get sentence list INLINEFORM0 with spliting every enquiry into sentences as described in section 22 Then for every sentence INLINEFORM1 in INLINEFORM2 we need to do extra three processes aSpelling Correction bRegular Measures and Numbers cStop Words Dropping Spelling Correction Since quite a lot of the product enquires and selffilled attributes were misspelled we have replaced the exact words by fuzzyfied search using Levenshtein distance The method uses fuzzyfied search only if the exact match is not found Some attributes are actually the same such as type and product type we merge these same attributes by judging whether the attributes are contained Regular Measures and Numbers Attributes of number type have their values composed of numbers and units such as INLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM3 etc We replace all numbers in any notation eg floating point scientific arithmetical expression etc with a unique token INLINEFORM4 For the same reason each unit of measure is replaced with a corresponding token eg INLINEFORM5 is replaced with centimeter area Stop Words Dropping Stop words appear to be of little value in the proposed matching algorithm By removing the stop words we can focus on the important words instead In our business scenario we built a stop words list for foreign trade ecommerce Finally we get the valid sentences INLINEFORM0 The existing cooccurrence methods do not suit our application scenario at all since exact string matching is too strong a requirement and initial trial has shown its incompetency In stead we implemented an improved version of their method based on TextRank as our baseline In addition we also tested multiple semantic matching algorithms for comparison with our chosen method TextRank TextRank is a graphbased ranking model for text processing BIBREF18 It is an 
unsupervised algorithm for keyword extraction Since product attributes are usually the keywords in enquiries we can compare these keywords with the category attributes and find the most important attributes This method consists of three steps The first step is to merge all enquiries under one category as an article The second step is to extract the top 50 keywords for each category The third step is to find the most important attributes by comparing the top keywords with the category attributes Word2vec BIBREF19 We use the word vectors trained by BIBREF19 as the distributed representation of words Then we get the enquiry sentence representation and the category attribute representation Finally we collect the statistics about the matched attributes of each category and select the most frequent attributes under the same category GloVe BIBREF20 GloVe is a global logbilinear regression model for the unsupervised learning of word representations which utilizes the ratios of wordword cooccurrence probabilities We use the GloVe method to train the distributed representation of words and the attribute selection procedure is the same as for word2vec Proposed method the details of our proposed algorithm have been carefully explained in Section SECREF2 There are several thresholds we need to pick in the experimental setup Based on trial and error analysis we choose 0.75 as the sentence and attribute similarity threshold which balances precision and recall relatively well In our application due to the product enquiry length limitation customers usually dont refer to more than five attributes in their initial approach to the seller so we choose to keep the 5 most important attributes for each category Evaluation is conducted by comparing the output of the systems with the manually annotated answers and we calculate the precision and recall rate INLINEFORM0 INLINEFORM1 where INLINEFORM0 is the manually labeled attributes INLINEFORM1 is the detected important attributes Table 1 depicts the algorithm performance of each category and the overall average metrics among all categories for our approach and the other methods It can be observed that our proposed method achieves the best performance The average F1measure of our approach is 0.47 while the average F1measure values of GloVe word2vec and TextRank are 0.46 0.42 and 0.20 respectively In all our experiments we find that the FastText method outperforms the other methods By analyzing all results we observe that semantic similarity based methods are more effective than the previous method which we implemented based on TextRank This conclusion is understandable because many enquiries do not simply mention attribute words exactly but use semantically related words instead Evaluating FastText GloVe and word2vec we show that compared to other word representation learning algorithms FastText performs best We sample and analyze the category attributes and find that many selffilled attributes contain misspellings The FastText algorithm represents words by a sum of their character ngrams and it is much more robust against problems like misspellings In summary FastText has greater advantages in dealing with natural language corpora that often contain spelling mistakes We also applied the detected attributes in the automatic enquiry generation task and we obtained significantly better generated enquiries compared to previous rigid templates Due to space limitations we skip the explanation and leave it for future publications In this paper we proposed a new general method of identifying important attributes for
entities from a knowledge graph This is a relatively new task and our proposed method of using external textual data and performing semantic matching via wordsubword embeddings obtained better result compared to other work of using naive string matching and counting In addition we also successfully applied the detected important attributes in our real world application of smart composing In summary the method is extensible to any knowledge graph without attribute importance information and outperforms previous method In future work there are two major areas with potential of improving the detection accuracy The first one is about sentence splitting What we are trying to get is semantic cohesive unit which can be used to match an attribute and there might be more comprehensive method than the simple splitting by sentence ending punctuations The second one is about improving the word embedding quality We have implemented an inhouse improved version of Fasttext which is adapted to our data source It is highly possible to use the improved word embedding on purpose of obtaining higher semantic matching precision As for the application we will try to use more statistical models in the natural language generation part of the smart composing framework of consuming the detected important attributes </s>
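The enquiry preprocessing described in the paper above (spelling correction by fuzzy search, normalising numbers and measurement units, and stop-word dropping) can be sketched as follows. Here difflib's similarity matching stands in for the Levenshtein-distance search, and the stop-word list and unit table are tiny illustrative stand-ins for the in-house resources mentioned above.

```python
import re
from difflib import get_close_matches

# tiny stand-ins for the domain stop-word list and unit-normalisation table
STOP_WORDS = {"the", "a", "an", "is", "are", "and", "in", "of", "what", "please"}
UNIT_TOKENS = {"cm": "centimeter", "kg": "kilogram", "pcs": "piece"}

def clean_sentence(sentence, attribute_vocab):
    tokens = re.findall(r"[a-z]+|\d+(?:\.\d+)?", sentence.lower())
    out = []
    for tok in tokens:
        if tok in STOP_WORDS:
            continue                        # stop-word dropping
        if re.fullmatch(r"\d+(?:\.\d+)?", tok):
            out.append("<NUM>")             # replace any number with a unique token
        elif tok in UNIT_TOKENS:
            out.append(UNIT_TOKENS[tok])    # normalise measurement units
        else:
            # fuzzy spelling correction against the attribute vocabulary,
            # applied only when an exact match is not found
            match = get_close_matches(tok, attribute_vocab, n=1, cutoff=0.8)
            out.append(match[0] if match else tok)
    return out

print(clean_sentence("What is the colour and size in cm, 25 pcs?",
                     ["color", "size", "material"]))
```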
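The sentence-to-attribute matching itself, whose exact scoring formula is lost in the placeholders above, can plausibly be read as an embedding-similarity comparison between an enquiry sentence and an attribute name together with its known values. The sketch below is one such reading, using averaged word vectors with cosine similarity, random vectors in place of trained FastText embeddings, and keeping at most two attributes per sentence above a threshold, as in the experiments.

```python
import numpy as np

def embed(tokens, vectors, dim=100):
    """Average word vectors; real FastText subword embeddings would also
    cover out-of-vocabulary tokens, which this toy lookup does not."""
    vecs = [vectors[t] for t in tokens if t in vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def top_attributes(sentence, attributes, vectors, threshold=0.75, k=2):
    """Score a sentence against each attribute (name plus known values) and
    keep at most k attributes whose score clears the threshold."""
    s_vec = embed(sentence, vectors)
    scored = []
    for name, values in attributes.items():
        a_vec = embed([name] + values, vectors)
        score = cosine(s_vec, a_vec)
        if score >= threshold:
            scored.append((score, name))
    return [name for _, name in sorted(scored, reverse=True)[:k]]

# random toy vectors standing in for trained FastText embeddings,
# so the threshold is disabled in this demo call
rng = np.random.default_rng(0)
vectors = {w: rng.normal(size=100) for w in ["color", "purple", "red", "size", "large", "want"]}
print(top_attributes(["want", "purple"],
                     {"color": ["purple", "red"], "size": ["large"]},
                     vectors, threshold=-1.0))
```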
<s>[INST] Diversity, Density, and Homogeneity: Quantitative Characteristic Metrics for Text Collections [/INST] Characteristic metrics are a set of unsupervised measures that quantitatively describe or summarize the properties of a data collection These metrics generally do not use groundtruth labels and only measure the intrinsic characteristics of data The most prominent example is descriptive statistics that summarizes a data collection by a group of unsupervised measures such as mean or median for central tendency variance or minimummaximum for dispersion skewness for symmetry and kurtosis for heavytailed analysis In recent years text classification a category of Natural Language Processing NLP tasks has drawn much attention BIBREF0 BIBREF1 BIBREF2 for its wideranging realworld applications such as fake news detection BIBREF3 document classification BIBREF4 and spoken language understanding SLU BIBREF5 BIBREF6 BIBREF7 a core task of conversational assistants like Amazon Alexa or Google Assistant However there are still insufficient characteristic metrics to describe a collection of texts Unlike numeric or categorical data simple descriptive statistics alone such as word counts and vocabulary size are difficult to capture the syntactic and semantic properties of a text collection In this work we propose a set of characteristic metrics diversity density and homogeneity to quantitatively summarize a collection of texts where the unit of texts could be a phrase sentence or paragraph A text collection is first mapped into a highdimensional embedding space Our characteristic metrics are then computed to measure the dispersion sparsity and uniformity of the distribution Based on the choice of embedding methods these characteristic metrics can help understand the properties of a text collection from different linguistic perspectives for example lexical diversity syntactic variation and semantic homogeneity Our proposed diversity density and homogeneity metrics extract hardtovisualize quantitative insight for a better understanding and comparison between text collections To verify the effectiveness of proposed characteristic metrics we first conduct a series of simulation experiments that cover various scenarios in twodimensional as well as highdimensional vector spaces The results show that our proposed quantitative characteristic metrics exhibit several desirable and intuitive properties such as robustness and linear sensitivity of the diversity metric with respect to random downsampling Besides we investigate the relationship between the characteristic metrics and the performance of a renowned model BERT BIBREF8 on the text classification task using two public benchmark datasets Our results demonstrate that there are high correlations between text classification model performance and the characteristic metrics which shows the efficacy of our proposed metrics A building block of characteristic metrics for text collections is the language representation method A classic way to represent a sentence or a paragraph is ngram with dimension equals to the size of vocabulary More advanced methods learn a relatively low dimensional latent space that represents each word or token as a continuous semantic vector such as word2vec BIBREF9 GloVe BIBREF10 and fastText BIBREF11 These methods have been widely adopted with consistent performance improvements on many NLP tasks Also there has been extensive research on representing a whole sentence as a vector such as a plain or weighted average of word vectors 
BIBREF12 skipthought vectors BIBREF13 and selfattentive sentence encoders BIBREF14 More recently there is a paradigm shift from noncontextualized word embeddings to selfsupervised language model LM pretraining Language encoders are pretrained on a large text corpus using a LMbased objective and then reused for other NLP tasks in a transfer learning manner These methods can produce contextualized word representations which have proven to be effective for significantly improving many NLP tasks Among the most popular approaches are ULMFiT BIBREF2 ELMo BIBREF15 OpenAI GPT BIBREF16 and BERT BIBREF8 In this work we adopt BERT a transformerbased technique for NLP pretraining as the backbone to embed a sentence or a paragraph into a representation vector Another stream of related works is the evaluation metrics for cluster analysis As measuring property or quality of outputs from a clustering algorithm is difficult human judgment with cluster visualization tools BIBREF17 BIBREF18 are often used There are unsupervised metrics to measure the quality of a clustering result such as the CalinskiHarabasz score BIBREF19 the DaviesBouldin index BIBREF20 and the Silhouette coefficients BIBREF21 Complementary to these works that model crosscluster similarities or relationships our proposed diversity density and homogeneity metrics focus on the characteristics of each single cluster ie intra cluster rather than inter cluster relationships We introduce our proposed diversity density and homogeneity metrics with their detailed formulations and key intuitions Our first assumption is for classification highquality training data entail that examples of one class are as differentiable and distinct as possible from another class From a finegrained and intraclass perspective a robust text cluster should be diverse in syntax which is captured by diversity And each example should reflect a sufficient signature of the class to which it belongs that is each example is representative and contains certain salient features of the class We define a density metric to account for this aspect On top of that examples should also be semantically similar and coherent among each other within a cluster where homogeneity comes in play The more subtle intuition emerges from the interclass viewpoint When there are two or more class labels in a text collection in an ideal scenario we would expect the homogeneity to be monotonically decreasing Potentially the diversity is increasing with respect to the number of classes since text clusters should be as distinct and separate as possible from one another If there is a significant ambiguity between classes the behavior of the proposed metrics and a possible new metric as a interclass confusability measurement remain for future work In practice the input is a collection of texts lbrace x1 x2 xmrbrace where xi is a sequence of tokens xi1 xi2 xil denoting a phrase a sentence or a paragraph An embedding method mathcal E then transforms xi into a vector mathcal Exiei and the characteristic metrics are computed with the embedding vectors For example Note that these embedding vectors often lie in a highdimensional space eg commonly over 300 dimensions This motivates our design of characteristic metrics to be sensitive to text collections of different properties while being robust to the curse of dimensionality We then assume a set of clusters created over the generated embedding vectors In classification tasks the embeddings pertaining to members of a class form a cluster ie in a supervised 
setting In an unsupervised setting we may apply a clustering algorithm to the embeddings It is worth noting that in general the metrics are independent of the assumed underlying grouping method Embedding vectors of a given group of texts lbrace e1 emrbrace can be treated as a cluster in the highdimensional embedding space We propose a diversity metric to estimate the clusters dispersion or spreadness via a generalized sense of the radius Specifically if a cluster is distributed as a multivariate Gaussian with a diagonal covariance matrix Sigma the shape of an isocontour will be an axisaligned ellipsoid in mathbb RH Such isocontours can be described as where x are all possible points in mathbb RH on an isocontour c is a constant mu is a given mean vector with mu j being the value along jth axis and sigma 2j is the variance of the jth axis We leverage the geometric interpretation of this formulation and treat the square root of variance ie standard deviation sqrtsigma 2j as the radius rj of the ellipsoid along the jth axis The diversity metric is then defined as the geometric mean of radii across all axes where sigma i is the standard deviation or square root of the variance along the ith axis In practice to compute a diversity metric we first calculate the standard deviation of embedding vectors along each dimension and take the geometric mean of all calculated values Note that as the geometric mean acts as a dimensionality normalization it makes the diversity metric work well in highdimensional embedding spaces such as BERT Another interesting characteristic is the sparsity of the text embedding cluster The density metric is proposed to estimate the number of samples that falls within a unit of volume in an embedding space Following the assumption mentioned above a straightforward definition of the volume can be written as up to a constant factor However when the dimension goes higher this formulation easily produces exploding or vanishing density values ie goes to infinity or zero To accommodate the impact of highdimensionality we impose a dimension normalization Specifically we introduce a notion of effective axes which assumes most variance can be explained or captured in a subspace of a dimension sqrtH We group all the axes in this subspace together and compute the geometric mean of their radii as the effective radius The dimensionnormalized volume is then formulated as Given a set of embedding vectors lbrace e1 emrbrace we define the density metric as In practice the computed density metric values often follow a heavytailed distribution thus sometimes its log value is reported and denoted as density logscale The homogeneity metric is proposed to summarize the uniformity of a cluster distribution That is how uniformly the embedding vectors of the samples in a group of texts are distributed in the embedding space We propose to quantitatively describe homogeneity by building a fullyconnected edgeweighted network which can be modeled by a Markov chain model A Markov chains entropy rate is calculated and normalized to be in 0 1 range by dividing by the entropys theoretical upper bound This output value is defined as the homogeneity metric detailed as follows To construct a fullyconnected network from the embedding vectors lbrace e1 emrbrace we compute their pairwise distances as edge weights an idea similar to AttriRank BIBREF22 As the Euclidean distance is not a good metric in highdimensions we normalize the distance by adding a power log ndim We then define a Markov chain model with the 
weight of edgei j being and the conditional probability of transition from i to j can be written as All the transition probabilities pi rightarrow j are from the transition matrix of a Markov chain An entropy of this Markov chain can be calculated as where nu i is the stationary distribution of the Markov chain As selftransition probability pi rightarrow i is always zero because of zero distance there are m 1 possible destinations and the entropys theoretical upper bound becomes Our proposed homogeneity metric is then normalized into 0 1 as a uniformity measure The intuition is that if some samples are close to each other but far from all the others the calculated entropy decreases to reflect the unbalanced distribution In contrast if each sample can reach other samples within moreorless the same distances the calculated entropy as well as the homogeneity measure would be high as it implies the samples could be more uniformly distributed To verify that each proposed characteristic metric holds its desirable and intuitive properties we conduct a series of simulation experiments in 2dimensional as well as 768dimensional spaces The latter has the same dimensionality as the output of our chosen embedding methodBERT in the following Experiments section The base simulation setup is a randomly generated isotropic Gaussian blob that contains 10000 data points with the standard deviation along each axis to be 10 and is centered around the origin All Gaussian blobs are created using makeblobs function in the scikitlearn package Four simulation scenarios are used to investigate the behavior of our proposed quantitative characteristic metrics Downsampling Downsample the base cluster to be lbrace 90 80 10rbrace of its original size That is create Gaussian blobs with lbrace 9000 1000rbrace data points Varying Spread Generate Gaussian blobs with standard deviations of each axis to be lbrace 20 30 100rbrace Outliers Add lbrace 50 100 500rbrace outlier data points ie lbrace 05 5rbrace of the original cluster size randomly on the surface with a fixed norm or radius Multiple Subclusters Along the 1thaxis with 10000 data points in total create lbrace 1 2 10rbrace clusters with equal sample sizes but at increasing distance For each scenario we simulate a cluster and compute the characteristic metrics in both 2dimensional and 768dimensional spaces Figure FIGREF17 visualizes each scenario by tdistributed Stochastic Neighbor Embedding tSNE BIBREF23 The 768dimensional simulations are visualized by downprojecting to 50 dimensions via Principal Component Analysis PCA followed by tSNE Figure FIGREF24 summarizes calculated diversity metrics in the first row density metrics in the second row and homogeneity metrics in the third row for all simulation scenarios The diversity metric is robust as its values remain almost the same to the downsampling of an input cluster This implies the diversity metric has a desirable property that it is insensitive to the size of inputs On the other hand it shows a linear relationship to varying spreads It is another intuitive property for a diversity metric that it grows linearly with increasing dispersion or variance of input data With more outliers or more subclusters the diversity metric can also reflect the increasing dispersion of cluster distributions but is less sensitive in highdimensional spaces For the density metrics it exhibits a linear relationship to the size of inputs when downsampling which is desired When increasing spreads the trend of density metrics corresponds well 
with human intuition Note that the density metrics decrease at a much faster rate in higherdimensional space as logscale is used in the figure The density metrics also drop when adding outliers or having multiple distant subclusters This makes sense since both scenarios should increase the dispersion of data and thus increase our notion of volume as well In multiple subcluster scenario the density metric becomes less sensitive in the higherdimensional space The reason could be that the subclusters are distributed only along one axis and thus have a smaller impact on volume in higherdimensional spaces As random downsampling or increasing variance of each axis should not affect the uniformity of a cluster distribution we expect the homogeneity metric remains approximately the same values And the proposed homogeneity metric indeed demonstrates these ideal properties Interestingly for outliers we first saw huge drops of the homogeneity metric but the values go up again slowly when more outliers are added This corresponds well with our intuitions that a small number of outliers break the uniformity but more outliers should mean an increase of uniformity because the distribution of added outliers themselves has a high uniformity For multiple subclusters as more subclusters are presented the homogeneity should and does decrease as the data are less and less uniformly distributed in the space To sum up from all simulations our proposed diversity density and homogeneity metrics indeed capture the essence or intuition of dispersion sparsity and uniformity in a cluster distribution The two realworld text classification tasks we used for experiments are sentiment analysis and Spoken Language Understanding SLU BERT is a selfsupervised language model pretraining approach based on the Transformer BIBREF24 a multiheaded selfattention architecture that can produce different representation vectors for the same token in various sequences ie contextual embeddings When pretraining BERT concatenates two sequences as input with special tokens CLS SEP EOS denoting the start separation and end respectively BERT is then pretrained on a large unlabeled corpus with objectivemasked language model MLM which randomly masks out tokens and the model predicts the masked tokens The other classification task is next sentence prediction NSP NSP is to predict whether two sequences follow each other in the original text or not In this work we use the pretrained textBERTtextBASE which has 12 layers L 12 selfattention heads A and 768 hidden dimension H as the language embedding to compute the proposed data metrics The offtheshelf pretrained BERT is obtained from GluonNLP For each sequence xi xi1 xil with length l BERT takes CLS xi1 xil EOS as input and generates embeddings lbrace eCLS ei1 eil eEOSrbrace at the token level To obtain the sequence representation we use a mean pooling over token embeddings where ei in mathbb RH A text collection lbrace x1 xmrbrace ie a set of token sequences is then transformed into a group of Hdimensional vectors lbrace e1 emrbrace We compute each metric as described previously using three BERT layers L1 L6 and L12 as the embedding space respectively The calculated metric values are averaged over layers for each class and averaged over classes weighted by class size as the final value for a dataset In the first task we use the SST2 Stanford Sentiment Treebank version 2 dataset BIBREF25 to conduct sentiment analysis experiments SST2 is a sentence binary classification dataset with traindevtest splits 
provided and two types of sentence labels ie positive and negative The second task involves two essential problems in SLU which are intent classification IC and slot labeling SL In IC the model needs to detect the intention of a text input ie utterance conveys For example for an input of I want to book a flight to Seattle the intention is to book a flight ticket hence the intent class is bookFlight In SL the model needs to extract the semantic entities that are related to the intent From the same example Seattle is a slot value related to booking the flight ie the destination Here we experiment with the Snips dataset BIBREF26 which is widely used in SLU research This dataset contains test spoken utterances text classified into one of 7 intents In both tasks we used the opensourced GluonNLP BERT model to perform text classification For evaluation sentiment analysis is measured in accuracy whereas IC and SL are measured in accuracy and F1 score respectively BERT is finetuned on traindev sets and evaluated on test sets We downsampled SST2 and Snips training sets from 100 to 10 with intervals being 10 BERTs performance is reported for each downsampled setting in Table TABREF29 and Table TABREF30 We used entire test sets for all model evaluations To compare we compute the proposed data metrics ie diversity density and homogeneity on the original and the downsampled training sets We will discuss the three proposed characteristic metrics ie diversity density and homogeneity and model performance scores from downsampling experiments on the two public benchmark datasets in the following subsections In Table TABREF29 the sentiment classification accuracy is 9266 without downsampling which is consistent with the reported GluonNLP BERT model performance on SST2 It also indicates SST2 training data are differentiable between label classes ie from the positive class to the negative class which satisfies our assumption for the characteristic metrics Decreasing the training set size does not reduce performance until it is randomly downsampled to only 20 of the original size Meanwhile density and homogeneity metrics also decrease significantly highlighted in bold in Table TABREF29 implying a clear relationship between these metrics and model performance In Table TABREF30 the Snips dataset seems to be distinct between ICSL classes since the IC accurcy and SL F1 are as high as 9871 and 9606 without downsampling respectively Similar to SST2 this implies that Snips training data should also support the interclass differentiability assumption for our proposed characteristic metrics IC accuracy on Snips remains higher than 98 until we downsample the training set to 20 of the original size In contrast SL F1 score is more sensitive to the downsampling of the training set as it starts decreasing when downsampling When the training set is only 10 left SL F1 score drops to 8720 The diversity metric does not decrease immediately until the training set equals to or is less than 40 of the original set This implies that random sampling does not impact the diversity if the sampling rate is greater than 40 The training set is very likely to contain redundant information in terms of text diversity This is supported by what we observed as model has consistently high ICSL performances between 40100 downsampling ratios Moreover the biggest drop of density and homogeneity highlighted in bold in Table TABREF30 highly correlates with the biggest ICSL drop at the point the training set size is reduced from 20 to 10 This suggests 
that our proposed metrics can be used as a good indicator of model performance and for characterizing text datasets We calculate and show in Table TABREF35 the Pearsons correlations between the three proposed characteristic metrics ie diversity density and homogeneity and model performance scores from downsampling experiments in Table TABREF29 and Table TABREF30 Correlations higher than 05 are highlighted in bold As mentioned before model performance is highly correlated with density and homogeneity both are computed on the train set Diversity is only correlated with Snips SL F1 score at a moderate level These are consistent with our simulation results which shows that random sampling of a dataset does not necessarily affect the diversity but can reduce the density and marginally homogeneity due to the decreasing of data points in the embedding space However the simultaneous huge drops of model performance density and homogeneity imply that there is only limited redundancy and more informative data points are being thrown away when downsampling Moreover results also suggest that model performance on text classification tasks corresponds not only with data diversity but also with training data density and homogeneity as well In this work we proposed several characteristic metrics to describe the diversity density and homogeneity of text collections without using any labels Pretrained language embeddings are used to efficiently characterize text datasets Simulation and experiments showed that our intrinsic metrics are robust and highly correlated with model performance on different text classification tasks We would like to apply the diversity density and homogeneity metrics for text data augmentation and selection in a semisupervised manner as our future work </s>
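For readers who want to reproduce the first two metrics above, the following numpy sketch computes diversity as the geometric mean of the per-axis standard deviations and density as the sample count divided by a dimension-normalized volume. Where the volume formula is garbled above, the sketch assumes the volume is the geometric-mean radius raised to the power sqrt(H); treat that reading as an assumption rather than the authors' exact definition.

```python
import numpy as np

def diversity(embeddings):
    """Geometric mean of the per-axis standard deviations (the 'radii')."""
    std = embeddings.std(axis=0)
    return float(np.exp(np.mean(np.log(std + 1e-12))))

def density(embeddings, log_scale=True):
    """Samples per dimension-normalized volume; the volume here is the
    geometric-mean radius raised to sqrt(H), one plausible reading of the
    normalization sketched above."""
    m, h = embeddings.shape
    volume = diversity(embeddings) ** np.sqrt(h)
    d = m / volume
    return float(np.log(d)) if log_scale else float(d)

# toy cluster standing in for mean-pooled BERT sentence embeddings (H=768)
rng = np.random.default_rng(0)
cluster = rng.normal(scale=1.0, size=(1000, 768))
print(diversity(cluster), density(cluster))
```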
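The homogeneity metric can be sketched in the same spirit: build a fully connected graph over pairwise distances, turn it into a Markov chain, and normalize its entropy rate by the log(m-1) upper bound. The exponential edge weighting and the mean-distance scaling below are assumptions filling in the garbled normalization described above.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def homogeneity(embeddings):
    """Normalized entropy rate of a Markov chain over the distance graph;
    values near 1 indicate a near-uniform cluster distribution."""
    m = embeddings.shape[0]
    dist = squareform(pdist(embeddings))        # pairwise Euclidean distances
    dist = dist / (dist.mean() + 1e-12)         # scale so weights stay well conditioned
    weights = np.exp(-dist)                     # assumed edge weighting
    np.fill_diagonal(weights, 0.0)              # no self-transitions
    p = weights / weights.sum(axis=1, keepdims=True)

    # stationary distribution: left eigenvector of P with eigenvalue 1
    vals, vecs = np.linalg.eig(p.T)
    nu = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    nu = np.abs(nu) / np.abs(nu).sum()

    # entropy rate, normalized by its upper bound log(m - 1)
    logp = np.log(p, out=np.zeros_like(p), where=p > 0)
    row_entropy = -np.sum(p * logp, axis=1)
    return float(np.dot(nu, row_entropy) / np.log(m - 1))

rng = np.random.default_rng(0)
print(homogeneity(rng.normal(size=(200, 768))))  # close to 1 for a uniform-ish blob
```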
<s>[INST] What Drives the International Development Agenda? An NLP Analysis of the United Nations General Debate 1970-2016 [/INST] Decisions made in international organisations are fundamental to international development efforts and initiatives It is in these global governance arenas that the rules of the global economic system which have a huge impact on development outcomes are agreed on decisions are made about largescale funding for development issues such as health and infrastructure and key development goals and targets are agreed on as can be seen with the Millennium Development Goals MDGs More generally international organisations have a profound influence on the ideas that shape international development efforts BIBREF0 Yet surprisingly little is known about the agendasetting process for international development in global governance institutions This is perhaps best demonstrated by the lack of information on how the different goals and targets of the MDGs were decided which led to much criticism and concern about the global governance of development BIBREF1 More generally we know little about the types of development issues that different countries prioritise or whether countryspecific factors such as wealth or democracy make countries more likely to push for specific development issues to be put on the global political agenda The lack of knowledge about the agenda setting process in the global governance of development is in large part due to the absence of obvious data sources on states preferences about international development issues To address this gap we employ a novel approach based on the application of natural language processing NLP to countries speeches in the UN Every September the heads of state and other highlevel country representatives gather in New York at the start of a new session of the United Nations General Assembly UNGA and address the Assembly in the General Debate The General Debate GD provides the governments of the almost two hundred UN member states with an opportunity to present their views on key issues in international politics including international development As such the statements made during GD are an invaluable and largely untapped source of information on governments policy preferences on international development over time An important feature of these annual country statements is that they are not institutionally connected to decisionmaking in the UN This means that governments face few external constraints when delivering these speeches enabling them to raise the issues that they consider the most important Therefore the General Debate acts as a barometer of international opinion on important issues even those not on the agenda for that particular session BIBREF2 In fact the GD is usually the first item for each new session of the UNGA and as such it provides a forum for governments to identify likeminded members and to put on the record the issues they feel the UNGA should address Therefore the GD can be viewed as a key forum for governments to put different policy issues on international agenda We use a new dataset of GD statements from 1970 to 2016 the UN General Debate Corpus UNGDC to examine the international development agenda in the UN BIBREF3 Our application of NLP to these statements focuses in particular on structural topic models STMs BIBREF4 The paper makes two contributions using this approach 1 It sheds light on the main international development issues that governments prioritise in the UN and 2 It identifies the key 
countryspecific factors associated with governments discussing development issues in their GD statements In the analysis we consider the nature of international development issues raised in the UN General Debates and the effect of structural covariates on the level of developmental rhetoric in the GD statements To do this we first implement a structural topic model BIBREF4 This enables us to identify the key international development topics discussed in the GD We model topic prevalence in the context of the structural covariates In addition we control for region fixed effects and time trend The aim is to allow the observed metadata to affect the frequency with which a topic is discussed in General Debate speeches This allows us to test the degree of association between covariates and regiontime effects and the average proportion of a document discussing a topic We assess the optimal number of topics that need to be specified for the STM analysis We follow the recommendations of the original STM paper and focus on exclusivity and semantic coherence measures BIBREF5 propose semantic coherence measure which is closely related to pointwise mutual information measure posited by BIBREF6 to evaluate topic quality BIBREF5 show that semantic coherence corresponds to expert judgments and more general human judgments in Amazons Mechanical Turk experiments Exclusivity scores for each topic follows BIBREF7 Highly frequent words in a given topic that do not appear very often in other topics are viewed as making that topic exclusive Cohesive and exclusive topics are more semantically useful Following BIBREF8 we generate a set of candidate models ranging between 3 and 50 topics We then plot the exclusivity and semantic coherence numbers closer to 0 indicate higher coherence with a linear regression overlaid Figure FIGREF3 Models above the regression line have a better exclusivitysemantic coherence trade off We select the 16topic model which has the largest positive residual in the regression fit and provides higher exclusivity at the same level of semantic coherence The topic quality is usually evaluated by highest probability words which is presented in Figure FIGREF4 Figure FIGREF4 provides a list of the main topics and the highest probability words associated these topics that emerge from the STM of UN General Debate statements In addition to the highest probability words we use several other measures of key words not presented here to interpret the dimensions This includes the FREX metric which combines exclusivity and word frequency the lift which gives weight to words that appear less frequently in other topics and the score which divides the log frequency of the word in the topic by the log frequency of the word in other topics We provide a brief description of each of the 16 topics here Topic 1 Security and cooperation in Europe The first topic is related to issues of security and cooperation with a focus on Central and Eastern Europe Topic 2 Economic development and the global system This topic is related to economic development particularly around the global economic system The focus on trade growth econom product growth financ and etc suggests that Topic 2 represent a more traditional view of international development in that the emphasis is specifically on economic processes and relations Topic 3 Nuclear disarmament This topic picks up the issue of nuclear weapons which has been a major issue in the UN since its founding Topic 4 Postconflict development This topic relates to postconflict 
development The countries that feature in the key words eg Rwanda Liberia Bosnia have experienced devastating civil wars and the emphasis on words such as develop peace hope and democrac suggest that this topic relates to how these countries recover and move forward Topic 5 African independence decolonisation This topic picks up the issue of African decolonisation and independence It includes the issue of apartheid in South Africa as well as racism and imperialism more broadly Topic 6 Africa While the previous topic focused explicitly on issues of African independence and decolonisation this topic more generally picks up issues linked to Africa including peace governance security and development Topic 7 Sustainable development This topic centres on sustainable development picking up various issues linked to development and climate change In contrast to Topic 2 this topic includes some of the newer issues that have emerged in the international development agenda such as sustainability gender education work and the MDGs Topic 8 Functional topic This topic appears to be comprised of functional or processoriented words eg problem solution effort general etc Topic 9 War This topic directly relates to issues of war The key words appear to be linked to discussions around ongoing wars Topic 10 Conflict in the Middle East This topic clearly picks up issues related to the Middle East particularly around peace and conflict in the Middle East Topic 11 Latin America This is another topic with a regional focus picking up on issues related to Latin America Topic 12 Commonwealth This is another of the less obvious topics to emerge from the STM in that the key words cover a wide range of issues However the places listed eg Australia Sri Lanka Papua New Guinea suggest the topic is related to the Commonwealth or former British colonies Topic 13 International security This topic broadly captures international security issues eg terrorism conflict peace and in particularly the international response to security threats such as the deployment of peacekeepers Topic 14 International law This topic picks up issues related to international law particularly connected to territorial disputes Topic 15 Decolonisation This topic relates more broadly to decolonisation As well as specific mention of decolonisation the key words include a range of issues and places linked to the decolonisation process Topic 16 Cold War This is another of the less tightly defined topics The topics appears to pick up issues that are broadly related to the Cold War There is specific mention of the Soviet Union and detente as well as issues such as nuclear weapons and the Helsinki Accords Based on these topics we examine Topic 2 and Topic 7 as the principal international development topics While a number of other topics for example postconflict development Africa Latin America etc are related to development issues Topic 2 and Topic 7 most directly capture aspects of international development We consider these two topics more closely by contrasting the main words linked to these two topics In Figure FIGREF6 the word clouds show the 50 words most likely to mentioned in relation to each of the topics The word clouds provide further support for Topic 2 representing a more traditional view of international development focusing on economic processes In addition to a strong emphasis on econom other key words such as trade debt market growth industri financi technolog product and argicultur demonstrate the narrower economic focus on international 
development captured by Topic 2 In contrast Topic 7 provides a much broader focus on development with key words including climat sustain environ educ health women work mdgs peac govern and right Therefore Topic 7 captures many of the issues that feature in the recent Sustainable Development Goals SDGs agenda BIBREF9 Figure FIGREF7 calculates the difference in probability of a word for the two topics normalized by the maximum difference in probability of any word between the two topics The figure demonstrates that while there is a much high probability of words such as econom trade and even develop being used to discuss Topic 2 words such as climat govern sustain goal and support being used in association with Topic 7 This provides further support for the Topic 2 representing a more economistic view of international development while Topic 7 relating to a broader sustainable development agenda We also assess the relationship between topics in the STM framework which allows correlations between topics to be examined This is shown in the network of topics in Figure FIGREF8 The figure shows that Topic 2 and Topic 7 are closely related which we would expect as they both deal with international development and share key words on development such as develop povert etc It is also worth noting that while Topic 2 is more closely correlated with the Latin America topic Topic 11 Topic 7 is more directly correlated with the Africa topic Topic 6 We next look at the relationship between topic proportions and structural factors The data for these structural covariates is taken from the World Banks World Development Indicators WDI unless otherwise stated Confidence intervals produced by the method of composition in STM allow us to pick up statistical uncertainty in the linear regression model Figure FIGREF9 demonstrates the effect of wealth GDP per capita on the the extent to which states discuss the two international development topics in their GD statements The figure shows that the relationship between wealth and the topic proportions linked to international development differs across Topic 2 and Topic 7 Discussion of Topic 2 economic development remains far more constant across different levels of wealth than Topic 7 The poorest states tend to discuss both topics more than other developing nations However this effect is larger for Topic 7 There is a decline in the proportion of both topics as countries become wealthier until around 30000 when there is an increase in discussion of Topic 7 There is a further pronounced increase in the extent countries discuss Topic 7 at around 60000 per capita However there is a decline in expected topic proportions for both Topic 2 and Topic 7 for the very wealthiest countries Figure FIGREF10 shows the expected topic proportions for Topic 2 and Topic 7 associated with different population sizes The figure shows a slight surge in the discussion of both development topics for countries with the very smallest populations This reflects the significant amount of discussion of development issues particularly sustainable development Topic 7 by the small island developing states SIDs The discussion of Topic 2 remains relatively constant across different population sizes with a slight increase in the expected topic proportion for the countries with the very largest populations However with Topic 7 there is an increase in expected topic proportion until countries have a population of around 300 million after which there is a decline in discussion of Topic 7 For countries with 
populations larger than 500 million there is no effect of population on discussion of Topic 7 It is only with the very largest populations that we see a positive effect on discussion of Topic 7 We would also expect the extent to which states discuss international development in their GD statements to be impacted by the amount of aid or official development assistance ODA they receive Figure FIGREF11 plots the expected topic proportion according to the amount of ODA countries receive Broadlyspeaking the discussion of development topics remains largely constant across different levels of ODA received There is however a slight increase in the expected topic proportions of Topic 7 according to the amount of ODA received It is also worth noting the spikes in discussion of Topic 2 and Topic 7 for countries that receive negative levels of ODA These are countries that are effectively repaying more in loans to lenders than they are receiving in ODA These countries appear to raise development issues far more in their GD statements which is perhaps not altogether surprising We also consider the effects of democracy on the expected topic proportions of both development topics using the Polity IV measure of democracy BIBREF10 Figure FIGREF12 shows the extent to which states discuss the international development topics according to their level of democracy Discussion of Topic 2 is fairly constant across different levels of democracy although there are some slight fluctuations However the extent to which states discuss Topic 7 sustainable development varies considerably across different levels of democracy Somewhat surprisingly the most autocratic states tend to discuss Topic 7 more than the slightly less autocratic states This may be because highly autocratic governments choose to discuss development and environmental issues to avoid a focus on democracy and human rights There is then an increase in the expected topic proportion for Topic 7 as levels of democracy increase reaching a peak at around 5 on the Polity scale after this there is a gradual decline in discussion of Topic 7 This would suggest that democratizing or semidemocratic countries which are more likely to be developing countries with democratic institutions discuss sustainable development more than established democracies that are more likely to be developed countries We also plot the results of the analysis as the difference in topic proportions for two different values of the effect of conflict Our measure of whether a country is experiencing a civil conflict comes from the UCDPPRIO Armed Conflict Dataset BIBREF11 Point estimates and 95 confidence intervals are plotted in Figure FIGREF13 The figure shows that conflict affects only Topic 7 and not Topic 2 Countries experiencing conflict are less likely to discuss Topic 7 sustainable development than countries not experiencing conflict The most likely explanation is that these countries are more likely to devote a greater proportion of their annual statements to discussing issues around conflict and security than development The fact that there is no effect of conflict on Topic 2 is interesting in this regard Finally we consider regional effects in Figure FIGREF14 We use the World Banks classifications of regions Latin America and the Caribbean LCN South Asia SAS SubSaharan Africa SSA Europe and Central Asia ECS Middle East and North Africa MEA East Asia and the Pacific EAS North America NAC The figure shows that states in South Asia and Latin America and the Caribbean are likely to 
discuss Topic 2 the most States in South Asia and East Asia and the Pacific discuss Topic 7 the most The figure shows that countries in North America are likely to speak about Topic 7 least The analysis of discussion of international development in annual UN General Debate statements therefore uncovers two principle development topics economic development and sustainable development We find that discussion of Topic 2 is not significantly impacted by countryspecific factors such as wealth population democracy levels of ODA and conflict although there are regional effects However we find that the extent to which countries discuss sustainable development Topic 7 in their annual GD statements varies considerably according to these different structural factors The results suggest that broadlyspeaking we do not observe linear trends in the relationship between these countryspecific factors and discussion of Topic 7 Instead we find that there are significant fluctuations in the relationship between factors such as wealth democracy etc and the extent to which these states discuss sustainable development in their GD statements These relationships require further analysis and exploration Despite decisions taken in international organisations having a huge impact on development initiatives and outcomes we know relatively little about the agendasetting process around the global governance of development Using a novel approach that applies NLP methods to a new dataset of speeches in the UN General Debate this paper has uncovered the main development topics discussed by governments in the UN and the structural factors that influence the degree to which governments discuss international development In doing so the paper has shed some light on state preferences regarding the international development agenda in the UN The paper more broadly demonstrates how text analytic approaches can help us to better understand different aspects of global governance </s>
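The model-selection step described in the preceding study — generating candidate models between 3 and 50 topics, overlaying a linear regression of exclusivity on semantic coherence, and keeping the candidate with the largest positive residual — can be sketched as below. This is a minimal illustration only, assuming the per-model coherence and exclusivity scores have already been computed (the original analysis relies on the stm framework); the function and variable names are ours, not the authors'.

```python
import numpy as np

def select_num_topics(num_topics, coherence, exclusivity):
    """Pick the candidate model with the best exclusivity/coherence trade-off.

    num_topics, coherence, exclusivity: parallel lists with one entry per
    candidate model (e.g. K = 3..50). Coherence values are negative, with
    numbers closer to 0 indicating higher semantic coherence.
    """
    x = np.asarray(coherence, dtype=float)
    y = np.asarray(exclusivity, dtype=float)
    # Fit the overlaid linear regression: exclusivity ~ coherence.
    slope, intercept = np.polyfit(x, y, deg=1)
    residuals = y - (slope * x + intercept)
    # Models above the regression line have a better trade-off; keep the
    # candidate with the largest positive residual.
    best = int(np.argmax(residuals))
    return num_topics[best], residuals[best]
```

On the candidate set reported above this criterion selects the 16-topic model; the same residual rule applies to any set of (coherence, exclusivity) pairs.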
<s>[INST] QnAMaker: Data to Bot in 2 Minutes [/INST] QnAMaker aims to simplify the process of bot creation by extracting QuestionAnswer QA pairs from data given by users into a Knowledge Base KB and providing a conversational layer over it KB here refers to one instance of azure search index where the extracted QA are stored Whenever a developer creates a KB using QnAMaker they automatically get all NLP capabilities required to answer users queries There are other systems such as Googles Dialogflow IBMs Watson Discovery which tries to solve this problem QnAMaker provides unique features for the ease of development such as the ability to add a personabased chitchat layer on top of the bot Additionally bot developers get automatic feedback from the system based on enduser traffic and interaction which helps them in enriching the KB we call this feature activelearning Our system also allows user to add MultiTurn structure to KB using hierarchical extraction and contextual ranking QnAMaker today supports over 35 languages and is the only system among its competitors to follow a ServerClient architecture all the KB data rests only in the clients subscription giving users total control over their data QnAMaker is part of Microsoft Cognitive Service and currently runs using the Microsoft Azure Stack As shown in Figure FIGREF4 humans can have two different kinds of roles in the system BotDevelopers who want to create a bot using the data they have and EndUsers who will chat with the bots created by botdevelopers The components involved in the process are QnAMaker Portal This is the Graphical User Interface GUI for using QnAMaker This website is designed to ease the use of management APIs It also provides a test pane QnaMaker Management APIs This is used for the extraction of QuestionAnswer QA pairs from semistructured content It then passes these QA pairs to the web app to create the Knowledge Base Index Azure Search Index Stores the KB with questions and answers as indexable columns thus acting as a retrieval layer QnaMaker WebApp Acts as a layer between the Bot Management APIs and Azure Search Index WebApp does ranking on top of retrieved results WebApp also handles feedback management for active learning Bot Calls the WebApp with the Users query to get results Creating a bot is a 3step process for a bot developer Create a QnaMaker Resource in Azure This creates a WebApp with binaries required to run QnAMaker It also creates an Azure Search Service for populating the index with any given knowledge base extracted from user data Use Management APIs to CreateUpdateDelete your KB The Create API automatically extracts the QA pairs and sends the Content to WebApp which indexes it in Azure Search Index Developers can also add personabased chat content and synonyms while creating and updating their KBs Bot Creation Create a bot using any framework and call the WebApp hosted in Azure to get your queries answered There are BotFramework templates provided for the same The Extraction component is responsible for understanding a given document and extracting potential QA pairs These QA pairs are in turn used to create a KB to be consumed later on by the QnAMaker WebApp to answer user queries First the basic blocks from given documents such as text lines are extracted Then the layout of the document such as columns tables lists paragraphs etc is extracted This is done using Recursive XY cut BIBREF0 Following Layout Understanding each element is tagged as headers footers table of content index watermark table 
image table caption image caption heading heading level and answers Agglomerative clustering BIBREF1 is used to identify heading and hierarchy to form an intent tree Leaf nodes from the hierarchy are considered as QA pairs In the end the intent tree is further augmented with entities using CRFbased sequence labeling Intents that are repeated in and across documents are further augmented with their parent intent adding more context to resolve potential ambiguity QnAMaker uses Azure Search Index as its retrieval layer followed by reranking on top of retrieved results Figure FIGREF21 Azure Search is based on inverted indexing and TFIDF scores Azure Search provides fuzzy matching based on editdistance thus making retrieval robust to spelling mistakes It also incorporates lemmatization and normalization These indexes can scale up to millions of documents lowering the burden on QnAMaker WebApp which gets less than 100 results to rerank Different customers may use QnAMaker for different scenarios such as banking task completion answering FAQs on company policies or fun and engagement The number of QAs length of questions and answers number of alternate questions per QA can vary significantly across different types of content Thus the ranker model needs to use features that are generic enough to be relevant across all use cases The preprocessing layer uses components such as Language Detection Lemmatization Speller and Word Breaker to normalize user queries It also removes junk characters and stopwords from the users query Going into granular features and the exact empirical formulas used is out of the scope of this paper The broad level features used while ranking are WordNet There are various features generated using WordNet BIBREF2 matching with questions and answers This takes care of wordlevel semantics For instance if there is information about price of furniture in a KB and the enduser asks about price of table the user will likely get a relevant answer The scores of these WordNet features are calculated as a function of Distance of 2 words in the WordNet graph Distance of Lowest Common Hypernym from the root KnowledgeBase word importance Local IDFs Global word importance Global IDFs This is the most important feature in our model as it has the highest relative feature gain CDSSM Convolutional Deep Structured Semantic Models BIBREF3 are used for sentencelevel semantic matching This is a dual encoder model that converts text strings sentences queries predicates entity mentions etc into their vector representations These models are trained using millions of Bing Query Title ClickThrough data Using the sourcemodel for vectorizing user query and targetmodel for vectorizing answer we compute the cosine similarity between these two vectors giving the relevance of answer corresponding to the query TFIDF Though sentencetovector models are trained on huge datasets they fail to effectively disambiguate KB specific data This is where a standard TFIDF BIBREF4 featurizer with local and global IDFs helps We extend the features for contextual ranking by modifying the candidate QAs and user query in these ways Querymodified Query Previous Answer For instance if user query is yes and the previous answer is do you want to know about XYZ the current query becomes do you want to know about XYZ yes Candidate QnA pairs are appended with its parent Questions and Answers no contextual information is used from the users query For instance if a candidate QnA has a question benefits and its parent question was know 
about XYZ the candidate QAs question is changed to know about XYZ benefits The features mentioned in Section SECREF20 are calculated for the above combinations also These features carry contextual information We use gradientboosted decision trees as our ranking model to combine all the features Early stopping BIBREF5 based on GeneralitytoProgress ratio is used to decide the number of step trees and Tolerant Pruning BIBREF6 helps prevent overfitting We follow incremental training if there is small changes in features or training data so that the score distribution is not changed drastically We add support for botdevelopers to directly enable handling chitchat queries like hi thank you whats up in their QnAMaker bots In addition to chitchat we also give bot developers the flexibility to ground responses for such queries in a specific personality professional witty friendly caring or enthusiastic For example the Humorous personality can be used for a casual bot whereas a Professional personality is more suited in case of banking FAQs or taskcompletion bots There is a list of 100 predefined intents BIBREF7 There is a curated list of queries for each of these intents along with a separate query understanding layer for ranking these intents The arbitration between chitchat answers and users knowledge base answers is handled by using a chatdomain classifier BIBREF8 The majority of the KBs are created using existing FAQ pages or manuals but to improve the quality it requires effort from the developers Active learning generates suggestions based on enduser feedback as well as rankers implicit signals For instance if for a query CDSSM feature was confident that one QnA should be ranked higher whereas wordnet feature thought other QnA should be ranked higher active learning system will try to disambiguate it by showing this as a suggestion to the bot developer To avoid showing similar suggestions to developers DBScan clustering is done which optimizes the number of suggestions shown QnAMaker is not domainspecific and can be used for any type of data To support this claim we measure our systems performance for datasets across various domains The evaluations are done by managed judges who understands the knowledge base and then judge user queries relevance to the QA pairs binary labels Each queryQA pair is judged by two judges We filter out data for which judges do not agree on the label Chitchat in itself can be considered as a domain Thus we evaluate performance on given KB both with and without chitchat data last two rows in Table TABREF19 as well as performance on just chitchat data 2nd row in Table TABREF19 Hybrid of deep learningCDSSM and machine learning features give our ranking model low computation cost high explainability and significant F1AUC score Based on QnAMaker usage we observed these trends Around 27 of the knowledge bases created use prebuilt personabased chitchat out of which sim 4 of the knowledge bases are created for chitchat alone The highest used personality is Professional which is used in 9 knowledge bases Around sim 25 developers have enabled active learning suggestions The acceptance to reject ratio for active learning suggestions is 031 255 of the knowledge bases use one URL as a source while creation sim 41 of the knowledge bases created use different sources like multiple URLs 1519 of the knowledge bases use both URL and editorial content as sources Rest use just editorial content We demonstrate QnAMaker a service to add a conversational layer over semistructured user 
data In addition to queryanswering we support novel features like personalitygrounded chitchat active learning based on userinteraction feedback Figure FIGREF40 and hierarchical extraction for multiturn conversations Figure FIGREF41 The goal of the demonstration will be to show how easy it is to create an intelligent bot using QnAMaker All the demonstrations will be done on the production website Demo Video can be seen here The system currently doesnt highlight the answer span and does not generate answers taking the KB as grounding We will be soon supporting Answer Span BIBREF9 and KBgrounded response generation BIBREF10 in QnAMaker We are also working on userdefined personas for chitchat automatically learned from userdocuments We aim to enhance our extraction to be able to work for any unstructured document as well as images We are also experimenting on improving our ranking system by using semantic vectorbased search as our retrieval and transformerbased models for reranking </s>
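The contextual ranking step described in the preceding system — appending the previous answer to the user query and prepending the parent question to each candidate QnA before computing relevance features — can be illustrated with the sketch below. This is not the production implementation (the paper states the exact empirical formulas are out of scope, and the real ranker combines WordNet, CDSSM and TF-IDF features in a gradient-boosted decision tree); the function name, the dictionary keys, and the use of scikit-learn TF-IDF as a stand-in similarity are our assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def contextual_tfidf_scores(query, prev_answer, candidates, kb_texts):
    """Score candidate QnA pairs for a follow-up query.

    query:       current user query (e.g. "yes")
    prev_answer: answer returned in the previous turn
    candidates:  list of dicts with "question", "answer" and an optional
                 "parent_question" key (hypothetical schema)
    kb_texts:    all KB text, used to fit local IDF statistics
    """
    # Query-side context: Query -> Previous Answer + Query.
    modified_query = f"{prev_answer} {query}".strip()

    # Candidate-side context: prepend the parent question to the candidate.
    candidate_texts = []
    for c in candidates:
        parent = c.get("parent_question", "")
        candidate_texts.append(f"{parent} {c['question']} {c['answer']}".strip())

    vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
    vectorizer.fit(list(kb_texts) + candidate_texts)
    q_vec = vectorizer.transform([modified_query])
    c_vecs = vectorizer.transform(candidate_texts)
    # One feature value per candidate; the deployed system would combine
    # this with the other features in a gradient-boosted tree ranker.
    return cosine_similarity(q_vec, c_vecs)[0]
```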
<s>[INST] A simple discriminative training method for machine translation with large-scale features [/INST] Since Och BIBREF0 proposed minimum error rate training MERT to exactly optimize objective evaluation measures MERT has become a standard model tuning technique in statistical machine translation SMT Though MERT performs better by improving its searching algorithm BIBREF1 BIBREF2 BIBREF3 BIBREF4 it does not work reasonably when there are lots of features As a result margin infused relaxed algorithms MIRA dominate in this case BIBREF6 BIBREF7 BIBREF8 BIBREF9 BIBREF10 In SMT MIRAs consider margin losses related to sentencelevel BLEUs However since the BLEU is not decomposable into each sentence these MIRA algorithms use some heuristics to compute the exact losses eg pseudodocument BIBREF8 and documentlevel loss BIBREF9 Recently another successful work in largescale feature tuning include force decoding basedBIBREF11 classification based BIBREF12 We aim to provide a simpler tuning method for largescale features than MIRAs Out motivation derives from an observation on MERT As MERT considers the quality of only top1 hypothesis set there might have morethanone set of parameters which have similar top1 performances in tuning but have very different topN hypotheses Empirically we expect an ideal model to benefit the total Nbest list That is better hypotheses should be assigned with higher ranks and this might decrease the error risk of top1 result on unseen data PlackettBIBREF13 offered an easytounderstand theory of modeling a permutation An Nbest list is assumedly generated by sampling without replacement The ith hypothesis to sample relies on those ranked after it instead of on the whole list This model also supports a partial permutation which accounts for top k positions in a list regardless of the remaining When taking k as 1 this model reduces to a standard conditional probabilistic training whose dual problem is actual the maximum entropy based BIBREF14 Although Och BIBREF0 substituted direct error optimization for a maximum entropy based training probabilistic models correlate with BLEU well when features are rich enough The similar claim also appears in BIBREF15 This also make the new method be applicable in largescale features PlackettLuce was firstly proposed to predict ranks of horses in gambling BIBREF13 Let mathbf rr1r2ldots rN be N horses with a probability distribution mathcal P on their abilities to win a game and a rank mathbf pi pi 1pi 2ldots pi mathbf pi of horses can be understood as a generative procedure where pi j denotes the index of the horse in the jth position In the 1st position there are N horses as candidates each of which rj has a probability prj to be selected Regarding the rank pi the probability of generating the champion is prpi 1 Then the horse rpi 1 is removed from the candidate pool In the 2nd position there are only N1 horses and their probabilities to be selected become prjZ2 where Z21prpi 1 is the normalization Then the runnerup in the rank pi the pi 2th horse is chosen at the probability prpi 2Z2 We use a consistent terminology Z1 in selecting the champion though Z1 equals 1 trivially This procedure iterates to the last rank in pi The key idea for the PlackettLuce model is the choice in the ith position in a rank mathbf pi only depends on the candidates not chosen at previous stages The probability of generating a rank pi is given as follows where Zj1sum t1j1prpi t We offer a toy example Table TABREF3 to demonstrate this procedure Theorem 1 The 
permutation probabilities pmathbf pi form a probability distribution over a set of permutations Omega pi For example for each mathbf pi in Omega pi we have pmathbf pi 0 and sum pi in Omega pi pmathbf pi 1 We have to note that Omega pi is not necessarily required to be completely ranked permutations in theory and in practice since gamblers might be interested in only the champion and runnerup and thus mathbf pi le N In experiments we would examine the effects on different length of permutations systems being termed PLpi Theorem 2 Given any two permutations mathbf pi and mathbf pi prime and they are different only in two positions p and q pq with pi pmathbf pi prime q and pi qmathbf pi prime p If ppi pppi q then ppi ppi prime In other words exchanging two positions in a permutation where the horse more likely to win is not ranked before the other would lead to an increase of the permutation probability This suggests the groundtruth permutation ranked decreasingly by their probabilities owns the maximum permutation probability on a given distribution In SMT we are motivated to optimize parameters to maximize the likelihood of groundtruth permutation of an Nbest hypotheses Due to the limitation of space see BIBREF13 BIBREF16 for the proofs of the theorems In SMT let mathbf ff1f2ldots denote source sentences and mathbf elbrace e11ldots rbrace lbrace e21ldots rbrace ldots denote target hypotheses A set of features are defined on both source and target side We refer to hei as a feature vector of a hypothesis from the ith source sentence and its score from a ranking function is defined as the inner product heiTw of the weight vector w and the feature vector We first follow the popular exponential style to define a parameterized probability distribution over a list of hypotheses The groundtruth permutation of an nbest list is simply obtained after ranking by their sentencelevel BLEUs Here we only concentrate on their relative ranks which are straightforward to compute in practice eg add 1 smoothing Let pi i be the groundtruth permutation of hypotheses from the ith source sentences and our optimization objective is maximizing the loglikelihood of the groundtruth permutations and penalized using a zeromean and unitvariance Gaussian prior This results in the following objective and gradient where Zij is defined as the Zj in Formula 1 of the ith source sentence The loglikelihood function is smooth differentiable and concave with the weight vector w and its local maximal solution is also a global maximum Iteratively selecting one parameter in alpha for tuning in a line search style or MERT style could also converge into the global global maximum BIBREF17 In practice we use more fast limitedmemory BFGS LBFGS algorithm BIBREF18 The loglikelihood of a PlackettLuce model is not a strict upper bound of the BLEU score however it correlates with BLEU well in the case of rich features The concept of rich is actually qualitative and obscure to define in different applications We empirically provide a formula to measure the richness in the scenario of machine translation The greater the richer In practice we find a rough threshold of r is 5 In engineering the size of an Nbest list with unique hypotheses is usually less than several thousands This suggests that if features are up to thousands or more the PlackettLuce model is quite suitable here Otherwise we could reduce the size of Nbest lists by sampling to make r beyond the threshold Their may be other efficient sampling methods and here we adopt a simple one 
If we want to m samples from a list of hypotheses mathbf e first the fracm3 best hypotheses and the fracm3 worst hypotheses are taken by their sentencelevel BLEUs Second we sample the remaining hypotheses on distribution peipropto exp heiTw where mathbf w is an initial weight from last iteration We compare our method with MERT and MIRA in two tasks iterative training and Nbest list rerank We do not list PRO BIBREF12 as our baseline as Cherry et alBIBREF10 have compared PRO with MIRA and MERT massively In the first task we align the FBIS data about 230K sentence pairs with GIZA and train a 4gram language model on the Xinhua portion of Gigaword corpus A hierarchical phrasebased HPB model Chiang 2007 is tuned on NIST MT 2002 and tested on MT 2004 and 2005 All features are eight basic ones BIBREF20 and extra 220 group features We design such feature templates to group grammars by the length of source side and target side feattypeale srcsidele bcle tgtsidele d where the feattype denotes any of the relative frequency reversed relative frequency lexical probability and reversed lexical probability and a b c d enumerate all possible subranges of 1 10 as the maximum length on both sides of a hierarchical grammar is limited to 10 There are 4 times 55 extra group features In the second task we rerank an Nbest list from a HPB system with 7491 features from a third party The system uses six million parallel sentence pairs available to the DARPA BOLT ChineseEnglish task This system includes 51 dense features translation probabilities provenance features etc and up to 7440 sparse features mostly lexical and fertilitybased The language model is a 6gram model trained on a 10 billion words including the English side of our parallel corpora plus other corpora such as Gigaword LDC2011T07 and Google News For the tuning and test sets we use 1275 and 1239 sentences respectively from the LDC2010E30 corpus We conduct a full training of machine translation models By default a decoder is invoked for at most 40 times and each time it outputs 200 hypotheses to be combined with those from previous iterations and sent into tuning algorithms In getting the groundtruth permutations there are many ties with the same sentencelevel BLEU and we just take one randomly In this section all systems have only around two hundred features hence in PlackettLuce based training we sample 30 hypotheses in an accumulative nbest list in each round of training All results are shown in Table TABREF10 we can see that all PLk systems does not perform well as MERT or MIRA in the development data this maybe due to that PLk systems do not optimize BLEU and the features here are relatively not enough compared to the size of Nbest lists empirical Formula DISPLAYFORM9 However PLk systems are better than MERT in testing PLk systems consider the quality of hypotheses from the 2th to the kth which is guessed to act the role of the margin like SVM in classification Interestingly MIRA wins first in training and still performs quite well in testing The PL1 system is equivalent to a maxentropy based algorithm BIBREF14 whose dual problem is actually maximizing the conditional probability of one oracle hypothesis When we increase the k the performances improve at first After reaching a maximum around k5 they decrease slowly We explain this phenomenon as this when features are rich enough higher BLEU scores could be easily fitted then longer groundtruth permutations include more useful information After being deduplicated the Nbest list has an average size 
of around 300 and with 7491 features Refer to Formula DISPLAYFORM9 this is ideal to use the PlackettLuce model Results are shown in Figure FIGREF12 We observe some interesting phenomena First the PlackettLuce models boost the training BLEU very greatly even up to 25 points higher than MIRA This verifies our assumption richer features benefit BLEU though they are optimized towards a different objective Second the overfitting problem of the PlackettLuce models PLk is alleviated with moderately large k In PL1 the overfitting is quite obvious the portion in which the curve overpasses MIRA is the smallest compared to other k and its convergent performance is below the baseline When k is not smaller than 5 the curves are almost above the MIRA line After 500 LBFGS iterations their performances are no less than the baseline though only by a small margin This experiment displays in largescale features the PlackettLuce model correlates with BLEU score very well and alleviates overfitting in some degree </s>
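The training objective used in the preceding approach — the log-likelihood of the ground-truth permutation of an N-best list under the Plackett-Luce model, with hypothesis probabilities proportional to exp(h(e_i)^T w) and a zero-mean, unit-variance Gaussian prior on the weights — can be written out as the short numpy sketch below. In the paper the sum of this quantity over all source sentences is maximized with L-BFGS using analytic gradients; here we only show the per-sentence objective, and the function and variable names are ours.

```python
import numpy as np

def pl_log_likelihood(H, w, pi, sigma2=1.0):
    """Log-likelihood of one ground-truth permutation under Plackett-Luce.

    H:      (N, d) feature vectors h(e_i) of the N-best hypotheses
    w:      (d,)   weight vector
    pi:     ground-truth (possibly partial, length k <= N) permutation,
            given as hypothesis indices ranked by sentence-level BLEU
    sigma2: variance of the zero-mean Gaussian prior on w
    """
    scores = H @ w
    # p(e_i) proportional to exp(h(e_i)^T w); normalise over the list,
    # subtracting the max score for numerical stability.
    p = np.exp(scores - scores.max())
    p = p / p.sum()

    log_lik = 0.0
    z = 1.0                          # Z_1 = 1
    for idx in pi:
        log_lik += np.log(p[idx]) - np.log(z)
        z -= p[idx]                  # Z_{j+1} = 1 - sum of already-chosen probabilities
    # Gaussian prior acts as an L2 penalty on the weights.
    log_lik -= np.dot(w, w) / (2.0 * sigma2)
    return log_lik
```

Because each selection only renormalizes over the hypotheses not yet chosen, exchanging two mis-ordered positions always lowers this quantity, which is why ranking the list by sentence-level BLEU gives the maximum-probability permutation to fit.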
<s>[INST] Improving Spoken Language Understanding By Exploiting ASR N-best Hypotheses [/INST] Currently voicecontrolled smart devices are widely used in multiple areas to fulfill various tasks eg playing music acquiring weather information and booking tickets The SLU system employs several modules to enable the understanding of the semantics of the input speeches When there is an incoming speech the ASR module picks it up and attempts to transcribe the speech An ASR model could generate multiple interpretations for most speeches which can be ranked by their associated confidence scores Among the nbest hypotheses the top1 hypothesis is usually transformed to the NLU module for downstream tasks such as domain classification intent classification and named entity recognition slot tagging Multidomain NLU modules are usually designed hierarchically BIBREF0 For one incoming utterance NLU modules will firstly classify the utterance as one of many possible domains and the further analysis on intent classification and slot tagging will be domainspecific In spite of impressive development on the current SLU pipeline the interpretation of speech could still contain errors Sometimes the top1 recognition hypothesis of ASR module is ungrammatical or implausible and far from the groundtruth transcription BIBREF1 BIBREF2 Among those cases we find one interpretation exact matching with or more similar to transcription can be included in the remaining hypotheses 2nd nth To illustrate the value of the 2nd nth hypotheses we count the frequency of exact matching and more similar smaller edit distance compared to the 1st hypothesis to transcription for different positions of the nbest hypotheses list Table TABREF1 exhibits the results For the explored dataset we only collect the top 5 interpretations for each utterance n 5 Notably when the correct recognition exists among the 5 best hypotheses 50 of the time sum of the first rows percentages it occurs among the 2nd5th positions Moreover as shown by the second row in Table TABREF1 compared to the top recognition hypothesis the other hypotheses can sometimes be more similar to the transcription Over the past few years we have observed the success of reranking the nbest hypotheses BIBREF1 BIBREF3 BIBREF4 BIBREF5 BIBREF6 BIBREF7 BIBREF8 BIBREF9 BIBREF10 before feeding the best interpretation to the NLU module These approaches propose the reranking framework by involving morphological lexical or syntactic features BIBREF8 BIBREF9 BIBREF10 speech recognition features like confidence score BIBREF1 BIBREF4 and other features like number of tokens rank position BIBREF1 They are effective to select the best from the hypotheses list and reduce the word error rate WER BIBREF11 of speech recognition Those reranking models could benefit the first two cases in Table TABREF2 when there is an utterance matching with transcription However in other cases like the third row it is hard to integrate the fragmented information in multiple hypotheses This paper proposes various methods integrating nbest hypotheses to tackle the problem To the best of our knowledge this is the first study that attempts to collectively exploit the nbest speech interpretations in the SLU system This paper serves as the basis of our nbesthypothesesbased SLU system focusing on the methods of integration for the hypotheses Since further improvements of the integration framework require considerable setup and descriptions where jointly optimized tasks eg transcription reconstruction trained with multiple 
ways multitask BIBREF12 multistage learning BIBREF13 and more features confidence score rank position etc are involved we leave those to a subsequent article This paper is organized as follows Section SECREF2 introduces the Baseline Oracle and Direct models Section SECREF3 describes proposed ways to integrate nbest hypotheses during training The experimental setup and results are described in Section SECREF4 Section SECREF5 contains conclusions and future work The preliminary architecture is shown in Fig FIGREF4 For a given transcribed utterance it is firstly encoded with Byte Pair Encoding BPE BIBREF14 a compression algorithm splitting words to fundamental subword units pairs of bytes or BPs and reducing the embedded vocabulary size Then we use a BiLSTM BIBREF15 encoder and the output state of the BiLSTM is regarded as a vector representation for this utterance Finally a fully connected Feedforward Neural Network FNN followed by a softmax layer labeled as a multilayer perceptron MLP module is used to perform the domainintent classification task based on the vector For convenience we simplify the whole process in FigFIGREF4 as a mapping BM Baseline Mapping from the input utterance S to an estimated tags probability ptildet where ptildet leftarrow BMS The Baseline is trained on transcription and evaluated on ASR 1st best hypothesis StextASR 1st textbest The Oracle is trained on transcription and evaluated on transcription S textTranscription We name it Oracle simply because we assume that hypotheses are noisy versions of transcription Besides the Baseline and Oracle where only ASR 1best hypothesis is considered we also perform experiments to utilize ASR nbest hypotheses during evaluation The models evaluating with nbests and a BM pretrained on transcription are called Direct Models in Fig FIGREF7 Majority Vote We apply the BM model on each hypothesis independently and combine the predictions by picking the majority predicted label ie Music Sort by Score After parallel evaluation on all hypotheses sort the prediction by the corresponding confidence score and choose the one with the highest score ie Video Rerank Oracle Since the current rerank models eg BIBREF1 BIBREF3 BIBREF4 attempt to select the hypothesis most similar to transcription we propose the Rerank Oracle which picks the hypothesis with the smallest edit distance to transcription assume it is the ath best during evaluation and uses its corresponding prediction All the above mentioned models apply the BM trained on one interpretation transcription Their abilities to take advantage of multiple interpretations are actually not trained As a further step we propose multiple ways to integrate the nbest hypotheses during training The explored methods can be divided into two groups as shown in Fig FIGREF11 Let H1 H2 Hn denote all the hypotheses from ASR and bpHk i in BPs denotes the ith pair of bytes BP in the kth best hypothesis The model parameters associated with the two possible ways both contain embedding ebp for pairs of bytes BiLSTM parameters theta and MLP parameters W b The basic integration method Combined Sentence concatenates the nbest hypothesized text We separate hypotheses with a special delimiter SEP We assume BPE totally produces m BPs delimiters are not split during encoding Suppose the nth hypothesis has j pairs The entire model can be formulated as In Eqn DISPLAYFORM13 the connected hypotheses and separators are encoded via BiLSTM to a sequence of hidden state vectors Each hidden state vector eg h1 is the concatenation 
of forward h1f and backward h1b states The concatenation of the last state of the forward and backward LSTM forms the output vector of BiLSTM concatenation denoted as Then in Eqn DISPLAYFORM14 the MLP module defines the probability of a specific tag domain or intent tildet as the normalized activation sigma output after linear transformation of the output vector The concatenation of hypothesized text leverages the nbest list by transferring information among hypotheses in an embedding framework BiLSTM However since all the layers have access to both the preceding and subsequent information the embedding among nbests will influence each other which confuses the embedding and makes the whole framework sensitive to the noise in hypotheses As the second group of integration approaches we develop models PoolingAvgMax on the concatenation of hypothesis embedding which isolate the embedding process among hypotheses and summarize the features by a pooling layer For each hypothesis eg ith best in Eqn DISPLAYFORM16 with j pairs of bytes we could get a sequence of hidden states from BiLSTM and obtain its final output state by concatenating the first and last hidden state houtputi in Eqn DISPLAYFORM17 Then we stack all the output states vertically as shown in Eqn SECREF15 Note that in the real data we will not always have a fixed size of hypotheses list For a list with r n interpretations we get the embedding for each of them and pad with the embedding of the first best hypothesis until a fixed size n When rge n we only stack the top n embeddings We employ houtput1 for padding to enhance the influence of the top 1 hypothesis which is more reliable Finally one unified representation could be achieved via Pooling MaxAvg pooling with n by 1 sliding window and stride 1 on the concatenation and one score could be produced per possible tag for the given task We conduct our experiments on sim 87M annotated anonymised user utterances They are annotated and derived from requests across 23 domains Table TABREF24 shows the relative error reduction RErr of Baseline Oracle and our proposed models on the entire test set sim 300K utterances for multiclass domain classification We can see among all the direct methods predicting based on the hypothesis most similar to the transcription Rerank Oracle is the best As for the other models attempting to integrate the nbests during training PoolingAvg gets the highest relative improvement 1429 It as well turns out that all the integration methods outperform direct models drastically This shows that having access to nbest hypotheses during training is crucial for the quality of the predicted semantics To further detect the reason for improvements we split the test set into two parts based on whether ASR first best agrees with transcription and evaluate separately Comparing Table TABREF26 and Table TABREF27 obviously the benefits of using multiple hypotheses are mainly gained when ASR 1st best disagrees with the transcription When ASR 1st best agrees with transcription the proposed integration models can also keep the performance Under that condition we can still improve a little 356 because by introducing multiple ASR hypotheses we could have more information and when the transcriptionASR 1st best does not appear in the training sets transcriptions its nbests list may have similar hypotheses included in the training sets nbests Then our integration model trained on nbest hypotheses as well has clue to predict The series of comparisons reveal that our approaches integrating 
the hypotheses are robust to the ASR errors and whenever the ASR model makes mistakes we can outperform more significantly Among all the 23 domains we choose 8 popular domains for further comparisons between the Baseline and the best model of Table TABREF24 PoolingAvg Fig FIGREF29 exhibits the results We could find the PoolingAvg consistently improves the accuracy for all 8 domains In the previous experiments the number of utilized hypotheses for each utterance during evaluation is five which means we use the top 5 interpretations when the size of ASR recognition list is not smaller than 5 and use all the interpretations otherwise Changing the number of hypotheses while evaluation Fig FIGREF30 shows a monotonic increase with the access to more hypotheses for the PoolingAvg and PoolingMax Sort by Score is shown because it is the best achievable direct model while the Rerank Oracle is not realistic The growth becomes gentle after four hypotheses are leveraged Since another downstream task intent classification is similar to domain classification we just show the best model in domain classification PoolingAvg on domainspecific intent classification for three popular domains due to space limit As Table TABREF32 shows the margins of using multiple hypotheses with PoolingAvg are significant as well This paper improves the SLU system robustness to ASR errors by integrating nbest hypotheses in different ways eg the aggregation of predictions from hypotheses or the concatenation of hypothesis text or embedding We can achieve significant classification accuracy improvements over productionquality baselines on domain and intent classifications 14 to 25 relative gains The improvement is more significant for a subset of testing data where ASR first best is different from transcription We also observe that with more hypotheses utilized the performance can be further improved In the future we aim to employ additional features eg confidence scores for hypotheses or tokens to integrate nbests more efficiently where we can train a function f to obtain a weight for each hypothesis embedding before pooling Another direction is using deep learning framework to embed the word lattice BIBREF16 or confusion network BIBREF17 BIBREF18 which can provide a compact representation of multiple hypotheses and more information like times in the SLU system We would like to thank Junghoo John Cho for proofreading </s>
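The Pooling-Avg integration model described in the preceding work — encoding each ASR hypothesis independently with a shared BiLSTM, stacking the n hypothesis embeddings (padding with the 1-best embedding when fewer than n are available), pooling over them, and classifying with an MLP — can be sketched in PyTorch as below. Exact dimensions, activations, and padding details are not fully specified in the paper, and for simplicity we take the final forward and backward LSTM states as each hypothesis vector rather than the first-and-last-output concatenation described above; all names are illustrative.

```python
import torch
import torch.nn as nn

class PoolingAvgClassifier(nn.Module):
    """Sketch of the Pooling-Avg model for domain/intent classification
    over n-best ASR hypotheses (assumed hyper-parameters throughout)."""

    def __init__(self, vocab_size, emb_dim, hidden_dim, num_tags, n_hyps=5):
        super().__init__()
        self.n_hyps = n_hyps
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.LSTM(emb_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_tags),
        )

    def encode(self, bpe_ids):
        # bpe_ids: (batch, seq_len) BPE token ids for one hypothesis.
        _, (h_n, _) = self.encoder(self.embed(bpe_ids))
        # Concatenate the final forward and backward states as the
        # hypothesis embedding h_output_i.
        return torch.cat([h_n[0], h_n[1]], dim=-1)        # (batch, 2*hidden)

    def forward(self, hypotheses):
        # hypotheses: list of r tensors, each (batch, seq_len), 1-best first.
        vecs = [self.encode(h) for h in hypotheses[: self.n_hyps]]
        while len(vecs) < self.n_hyps:                     # pad with the 1-best
            vecs.append(vecs[0])
        stacked = torch.stack(vecs, dim=1)                 # (batch, n, 2*hidden)
        pooled = stacked.mean(dim=1)                       # Pooling-Avg
        # For Pooling-Max use: pooled = stacked.max(dim=1).values
        return self.mlp(pooled)                            # logits over tags
```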
<s>[INST] DisSim: A Discourse-Aware Syntactic Text Simplification Frameworkfor English and German [/INST] We developed a syntactic text simplification TS approach that can be used as a preprocessing step to facilitate and improve the performance of a wide range of artificial intelligence AI tasks such as Machine Translation Information Extraction IE or Text Summarization Since shorter sentences are generally better processed by natural language processing NLP systems BIBREF0 the goal of our approach is to break down a complex source sentence into a set of minimal propositions ie a sequence of sound selfcontained utterances with each of them presenting a minimal semantic unit that cannot be further decomposed into meaningful propositions BIBREF1 However any sound and coherent text is not simply a loose arrangement of selfcontained units but rather a logical structure of utterances that are semantically connected BIBREF2 Consequently when carrying out syntactic simplification operations without considering discourse implications the rewriting may easily result in a disconnected sequence of simplified sentences that lack important contextual information making the text harder to interpret Thus in order to preserve the coherence structure and hence the interpretability of the input we developed a discourseaware TS approach based on Rhetorical Structure Theory RST BIBREF3 It establishes a contextual hierarchy between the split components and identifies and classifies the semantic relationship that holds between them In that way a complex source sentence is turned into a socalled discourse tree consisting of a set of hierarchically ordered and semantically interconnected sentences that present a simplified syntax which is easier to process for downstream semantic applications and may support a faster generalization in machine learning tasks We present DisSim a discourseaware sentence splitting approach for English and German that creates a semantic hierarchy of simplified sentences It takes a sentence as input and performs a recursive transformation process that is based upon a small set of 35 handcrafted grammar rules for the English version and 29 rules for the German approach These patterns were heuristically determined in a comprehensive linguistic analysis and encode syntactic and lexical features that can be derived from a sentences parse tree Each rule specifies 1 how to split up and rephrase the input into structurally simplified sentences and 2 how to set up a semantic hierarchy between them They are recursively applied on a given source sentence in a topdown fashion When no more rule matches the algorithm stops and returns the generated discourse tree In a first step source sentences that present a complex linguistic form are turned into clean compact structures by decomposing clausal and phrasal components For this purpose the transformation rules encode both the splitting points and rephrasing procedure for reconstructing proper sentences Each split will create two or more sentences with a simplified syntax To establish a semantic hierarchy between them two subtasks are carried out First we set up a contextual hierarchy between the split sentences by connecting them with information about their hierarchical level similar to the concept of nuclearity in RST For this purpose we distinguish core sentences nuclei which carry the key information of the input from accompanying contextual sentences satellites that disclose additional information about it To differentiate between those two 
types of constituents the transformation patterns encode a simple syntaxbased approach where subordinate clausesphrases are classified as context sentences while superordinate as well as coordinate clausesphrases are labelled as core Second we aim to restore the semantic relationship between the disembedded components For this purpose we identify and classify the rhetorical relations that hold between the simplified sentences making use of both syntactic features which are derived from the inputs parse tree structure and lexical features in the form of cue phrases Following the work of Taboada13 they are mapped to a predefined list of rhetorical cue words to infer the type of rhetorical relation DisSim can be either used as a Java API imported as a Maven dependency or as a service which we provide through a command line interface or a RESTlike web service that can be deployed via docker It takes as input NL text in the form of a single sentence Alternatively a file containing a sequence of sentences can be loaded The result of the transformation process is either written to the console or stored in a specified output file in JSON format We also provide a browserbased user interface where the user can directly type in sentences to be processed see Figure FIGREF1 For the English version we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains in order to assess the performance of our framework with regard to the sentence splitting subtask The results show that our proposed sentence splitting approach outperforms the state of the art in structural TS returning finegrained simplified sentences that achieve a high level of grammaticality and preserve the meaning of the input The full evaluation methodology and detailed results are reported in niklausetal2019transforming In addition a comparative analysis with the annotations contained in the RST Discourse Treebank BIBREF6 demonstrates that we are able to capture the contextual hierarchy between the split sentences with a precision of almost 90 and reach an average precision of approximately 70 for the classification of the rhetorical relations that hold between them The evaluation of the German version is in progress An extrinsic evaluation was carried out on the task of Open IE BIBREF7 It revealed that when applying DisSim as a preprocessing step the performance of stateoftheart Open IE systems can be improved by up to 346 in precision and 52 in recall ie leading to a lower information loss and a higher accuracy of the extracted relations For details the interested reader may refer to niklausetal2019transforming Moreover most current Open IE approaches output only a loose arrangement of extracted tuples that are hard to interpret as they ignore the context under which a proposition is complete and correct and thus lack the expressiveness needed for a proper interpretation of complex assertions BIBREF8 As illustrated in Figure FIGREF9 with the help of the semantic hierarchy generated by our discourseaware sentence splitting approach the output of Open IE systems can be easily enriched with contextual information that allows to restore the semantic relationship between a set of propositions and hence preserve their interpretability in downstream tasks We developed and implemented a discourseaware syntactic TS approach that recursively splits and rephrases complex English or German sentences into a semantic hierarchy of simplified sentences The resulting lightweight 
semantic representation can be used to facilitate and improve a variety of AI tasks </s>
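The recursive, top-down transformation loop described above — applying the first matching hand-crafted rule, splitting the sentence into core (nucleus) and context (satellite) spans linked by a rhetorical relation, and stopping when no rule matches — can be outlined with the skeleton below. The rule interface (match/apply) and the node fields are hypothetical placeholders: the actual framework is a Java implementation with 35 English and 29 German grammar rules, syntax-based core/context classification, and cue-phrase-based relation labelling, exposed through an API, CLI and REST-like service.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DiscourseNode:
    text: str                        # simplified sentence or intermediate span
    role: str = "core"               # "core" (nucleus) or "context" (satellite)
    relation: Optional[str] = None   # rhetorical relation to the parent
    children: List["DiscourseNode"] = field(default_factory=list)

def simplify(sentence: str, rules) -> DiscourseNode:
    """Recursively split `sentence` with the first matching rule.

    Each rule is assumed to expose match(sentence) and, on a match,
    apply(sentence) returning (core_texts, context_texts, relation);
    when no rule matches, the sentence is returned as a leaf node.
    """
    node = DiscourseNode(sentence)
    for rule in rules:
        if rule.match(sentence):
            cores, contexts, relation = rule.apply(sentence)
            for text in cores:                   # superordinate/coordinate spans
                child = simplify(text, rules)
                child.role, child.relation = "core", relation
                node.children.append(child)
            for text in contexts:                # subordinate spans
                child = simplify(text, rules)
                child.role, child.relation = "context", relation
                node.children.append(child)
            break                                # top-down: first matching rule wins
    return node
```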
<s>[INST] Learning Word Embeddings from the Portuguese Twitter Stream: A Study of some Practical Aspects [/INST] Word embeddings have great practical importance since they can be used as precomputed highdensity features to ML models significantly reducing the amount of training data required in a variety of NLP tasks However there are several interrelated challenges with computing and consistently distributing word embeddings concerning the Not only the space of possibilities for each of these aspects is large there are also challenges in performing a consistent largescale evaluation of the resulting embeddings BIBREF0 This makes systematic experimentation of alternative wordembedding configurations extremely difficult In this work we make progress in trying to find good combinations of some of the previous parameters We focus specifically in the task of computing word embeddings for processing the Portuguese Twitter stream Usergenerated content such as twitter messages tends to be populated by words that are specific to the medium and that are constantly being added by users These dynamics pose challenges to NLP systems which have difficulties in dealing with out of vocabulary words Therefore learning a semantic representation for those words directly from the usergenerated stream and as the words arise would allow us to keep up with the dynamics of the medium and reduce the cases for which we have no information about the words Starting from our own implementation of a neural word embedding model which should be seen as a flexible baseline model for further experimentation our research tries to answer the following practical questions By answering these questions based on a reasonably small sample of Twitter data 5M we hope to find the best way to proceed and train embeddings for Twitter vocabulary using the much larger amount of Twitter data available 300M but for which parameter experimentation would be unfeasible This work can thus be seen as a preparatory study for a subsequent attempt to produce and distribute a largescale database of embeddings for processing Portuguese Twitter data There are several approaches to generating word embeddings One can build models that explicitly aim at generating word embeddings such as Word2Vec or GloVe BIBREF1 BIBREF2 or one can extract such embeddings as byproducts of more general models which implicitly compute such word embeddings in the process of solving other language tasks Word embeddings methods aim to represent words as real valued continuous vectors in a much lower dimensional space when compared to traditional bagofwords models Moreover this low dimensional space is able to capture lexical and semantic properties of words Cooccurrence statistics are the fundamental information that allows creating such representations Two approaches exist for building word embeddings One creates a low rank approximation of the word cooccurrence matrix such as in the case of Latent Semantic Analysis BIBREF3 and GloVe BIBREF2 The other approach consists in extracting internal representations from neural network models of text BIBREF4 BIBREF5 BIBREF1 Levy and Goldberg BIBREF6 showed that the two approaches are closely related Although word embeddings research go back several decades it was the recent developments of Deep Learning and the word2vec framework BIBREF1 that captured the attention of the NLP community Moreover Mikolov et al BIBREF7 showed that embeddings trained using word2vec models CBOW and Skipgram exhibit linear structure allowing analogy 
questions of the form manwomanking and can boost performance of several text classification tasks One of the issues of recent work in training word embeddings is the variability of the experimental setups reported For instance in the paper describing GloVe BIBREF2 the authors trained their model on five corpora of different sizes and built a vocabulary of the 400K most frequent words Mikolov et al BIBREF7 trained with an 82K vocabulary while Mikolov et al BIBREF1 was trained with a 3M vocabulary Recently Arora et al BIBREF8 proposed a generative model for learning embeddings that tries to provide a theoretical justification for nonlinear models eg word2vec and GloVe and for some hyperparameter choices The authors evaluated their model using a 68K vocabulary The SemEval 2016 Task 4 Sentiment Analysis in Twitter organizers report that participants either used general purpose pretrained word embeddings or trained them from the Tweet 2016 dataset or from some sort of dataset BIBREF9 However participants report neither the size of the vocabulary used nor the possible effect it might have on the taskspecific results Recently Rodrigues et al BIBREF10 created and distributed the first general purpose embeddings for Portuguese The word2vec gensim implementation was used and the authors report results with different values for the parameters of the framework Furthermore the authors used experts to translate well established word embedding test sets for the Portuguese language which they also made publicly available and we use some of those in this work The neural word embedding model we use in our experiments is heavily inspired by the one described in BIBREF4 but ours is one layer deeper and is set to solve a slightly different word prediction task Given a sequence of 5 words INLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM3 INLINEFORM4 the task the model tries to perform is that of predicting the middle word INLINEFORM5 based on the two words on the left INLINEFORM6 INLINEFORM7 and the two words on the right INLINEFORM8 INLINEFORM9 INLINEFORM10 This should produce embeddings that closely capture distributional similarity so that words that belong to the same semantic class or which are synonyms and antonyms of each other will be embedded in close regions of the embedding hyperspace Our neural model is composed of the following layers All neural activations in the model are sigmoid functions The model was implemented using the Syntagma library which relies on Keras BIBREF11 for model development and we train the model using the builtin ADAM BIBREF12 optimizer with the default parameters
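As a rough illustration of such a middle-word prediction model, a comparable Keras implementation might look as follows. This is only a sketch, not the authors' Syntagma code: the 64-dimensional embeddings, the sigmoid activations and the ADAM optimizer follow the description above, while the hidden-layer sizes, the softmax output layer and the loss function are assumptions.

```python
# Hypothetical sketch of the middle-word prediction model described above (not the authors' code).
from tensorflow.keras import layers, models

def build_embedding_model(vocab_size: int, embedding_dim: int = 64) -> models.Model:
    # Input: indices of the two words to the left and the two words to the right.
    context = layers.Input(shape=(4,), dtype="int32", name="context_words")
    # Shared input embedding layer; its 64-d weight matrix is one candidate set of embeddings.
    embedded = layers.Embedding(vocab_size, embedding_dim, name="input_embeddings")(context)
    flat = layers.Flatten()(embedded)
    # Hidden layers with sigmoid activations, as stated in the text; their sizes are assumptions.
    hidden = layers.Dense(2 * embedding_dim, activation="sigmoid")(flat)
    hidden = layers.Dense(embedding_dim, activation="sigmoid")(hidden)
    # Output: a distribution over the vocabulary for the missing middle word.
    output = layers.Dense(vocab_size, activation="softmax", name="middle_word")(hidden)
    model = models.Model(context, output)
    # Built-in ADAM optimizer with default parameters and a cross-entropy loss.
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model
```

Training examples for such a model would be the pairs of four context words and the corresponding middle word extracted from the 5-gram database described next.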
We are interested in assessing two aspects of the word embedding process On one hand we wish to evaluate the semantic quality of the produced embeddings On the other we want to quantify how much computational power and training data are required to train the embedding model as a function of the size of the vocabulary INLINEFORM0 we try to embed These aspects have fundamental practical importance for deciding how we should attempt to produce the largescale database of embeddings we will provide in the future All resources developed in this work are publicly available Apart from the size of the vocabulary to be processed INLINEFORM0 the hyperparameters of the model that we could potentially explore are i the dimensionality of the input word embeddings and ii the dimensionality of the output word embeddings As mentioned before we set both to 64 dimensions after performing some quick manual experimentation Full hyperparameter exploration is left for future work Our experimental testbed comprises a desktop with an NVIDIA TITAN X Pascal a quadcore Intel Core i7 3770K at 3.5 GHz 32 GB of DDR3 RAM and a 180 GB SSD drive We randomly sampled 5M tweets from a corpus of 300M tweets collected from the Portuguese Twitter community BIBREF13 The 5M tweets comprise a total of 61.4M words approximately 12 words per tweet on average From those 5M tweets we generated a database containing 18.9M distinct 5grams along with their frequency counts In this process all text was downcased To help anonymize the ngram information we substituted all Twitter handles by an artificial token THANDLE We also substituted all HTTP links by the token LINK We prepended two special tokens to complete the 5grams generated from the first two words of the tweet and we correspondingly appended two other special tokens to complete 5grams centered around the two last tokens of the tweet Tokenization was performed by trivially separating tokens by blank space No linguistic preprocessing such as for example separating punctuation from words was made We opted for not doing any preprocessing to avoid introducing any linguistic bias from another tool since tokenization of usergenerated content is not a trivial problem The most direct consequence of not performing any linguistic preprocessing is that of increasing the vocabulary size and diluting the token counts However in principle and given enough data the embedding model should be able to learn the correct embeddings for both actual words eg ronaldo and variants of those words with punctuation attached In practice we believe that this can actually be an advantage for the downstream consumers of the embeddings since they can also relax the requirements of their own tokenization stage Overall the dictionary thus produced contains approximately 1.3M distinct entries Our dictionary was sorted by frequency so the words with the lowest indices correspond to the most common words in the corpus We used the information from the 5gram database to generate all training data used in the experiments For a fixed size INLINEFORM0 of the target vocabulary to be embedded eg INLINEFORM1 2048 we scanned the database to obtain all possible 5grams for which all tokens were among the top INLINEFORM2 words of the dictionary ie the top INLINEFORM3 most frequent words in the corpus Depending on INLINEFORM4 different numbers of valid training 5grams were found in the database the larger INLINEFORM5 the more valid 5grams would pass the filter The number of examples collected for each of the values of INLINEFORM6 is shown in Table TABREF16 Since one of the goals of our experiments is to understand the impact of using different amounts of training data for each size of vocabulary to be embedded INLINEFORM0 we will run experiments training the models using 25 50 75 and 100 of the data available
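As a minimal illustration of the filtering step just described, the selection of valid training 5-grams for a given vocabulary size V could be sketched as follows; the data structures and function names are hypothetical, not the authors' implementation.

```python
# Hypothetical sketch of selecting valid training 5-grams for a target vocabulary size V.
# `five_gram_counts` maps each 5-token tuple to its frequency count and
# `dictionary` is the list of words sorted by decreasing corpus frequency.
from typing import Dict, List, Tuple

def valid_five_grams(five_gram_counts: Dict[Tuple[str, ...], int],
                     dictionary: List[str],
                     v: int) -> Dict[Tuple[str, ...], int]:
    top_v = set(dictionary[:v])  # the top-V most frequent words
    # A 5-gram passes the filter only if all of its tokens are among the top-V words.
    return {gram: count for gram, count in five_gram_counts.items()
            if all(token in top_v for token in gram)}
```

The larger V is, the more 5-grams pass this filter, which is what produces the example counts reported in Table TABREF16.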
We tracked metrics related to the learning process itself as a function of the vocabulary size to be embedded INLINEFORM0 and of the fraction of training data used 25 50 75 and 100 For all possible configurations we recorded the values of the training and validation loss cross entropy after each epoch Tracking these metrics serves as a minimalistic sanity check if the model is not able to solve the word prediction task with some degree of success eg if we observe no substantial decay in the losses then one should not expect the embeddings to capture any of the distributional information they are supposed to capture Using the gold standard data described below we performed three types of tests Class Membership Tests embeddings corresponding to members of the same semantic class eg Months of the Year Portuguese Cities Smileys should be close since they are supposed to be found in mostly the same contexts Class Distinction Test this is the reciprocal of the previous Class Membership test Embeddings of elements of different classes should be different since words of different classes are expected to be found in significantly different contexts Word Equivalence Test embeddings corresponding to synonyms antonyms abbreviations eg porque abbreviated by pq and partial references eg slb and benfica should be almost equal since both alternatives are supposed to be used interchangeably in all contexts either maintaining or inverting the meaning Therefore in our tests two words are considered distinct if the cosine of the corresponding embeddings is lower than 0.70 or 0.80 to belong to the same class if the cosine of their embeddings is higher than 0.70 or 0.80 and to be equivalent if the cosine of the embeddings is higher than 0.85 or 0.95 We report results using different thresholds of cosine similarity because we noticed that cosine similarity is skewed towards higher values in the embedding space as observed in related work BIBREF14 BIBREF15 We used the following sources of data for testing Class Membership APBattig data This data was collected from the evaluation data provided by BIBREF10 These correspond to 29 semantic classes TwitterClass collected manually by the authors by checking the most frequent words in the dictionary and then expanding the classes These include the following 6 sets number of elements in brackets smileys 13 months 12 countries 6 names 19 surnames 14 Portuguese cities 9 For the Class Distinction test we pair each element of each of the gold standard classes with all the other elements from other classes removing duplicate pairs since ordering does not matter and we generate pairs of words which are supposed to belong to different classes For the Word Equivalence test we manually collected equivalent pairs focusing on abbreviations that are popular on Twitter eg qt INLINEFORM0 quanto or lx INLINEFORM1 lisboa and on frequent acronyms eg slb INLINEFORM2 benfica In total we compiled 48 equivalence pairs For all these tests we also computed a coverage metric since our embeddings do not necessarily contain information for all the words contained in each of these tests For each test the coverage metric measures the fraction of the goldstandard pairs that could actually be tested using the different embeddings produced Then for all the test pairs actually covered we obtain the success metrics for each of the 3 tests by computing the ratio of pairs we were able to correctly classify as i being distinct cosine INLINEFORM0 0.7 or 0.8 ii belonging to the same class cosine INLINEFORM1 0.7 or 0.8 and iii being equivalent cosine INLINEFORM2 0.85 or 0.95 It is worth making a final comment about the gold standard data Although we do not expect this gold standard data to be sufficient for a widespectrum evaluation of the resulting embeddings it should be enough for providing us with clues regarding areas where the embedding process is capturing enough semantics and where it is not These should still provide valuable indications for planning how to produce the much larger database of word embeddings We ran the training process and performed the corresponding evaluation for 12 combinations of the size of the vocabulary to be embedded and the volume of training data used Table TABREF27 presents some overall statistics after
training for 40 epochs The average time per epoch increases first with the size of the vocabulary to embed INLINEFORM0 because the model will have more parameters and then for each INLINEFORM1 with the volume of training data Using our testbed Section SECREF4 the total time of learning in our experiments varied from a minimum of 160 seconds with INLINEFORM2 2048 and 25 of data to a maximum of 225 hours with INLINEFORM3 32768 and using 100 of the training data available extracted from 5M tweets These numbers give us an approximate figure of how time consuming it would be to train embeddings from the complete Twitter corpus we have consisting of 300M tweets We now analyze the learning process itself We plot the training set loss and validation set loss for the different values of INLINEFORM0 Figure FIGREF28 left with 40 epochs and using all the available data As expected the loss is reducing after each epoch with validation loss although being slightly higher following the same trend When using 100 we see no model overfitting We can also observe that the higher is INLINEFORM1 the higher are the absolute values of the loss sets This is not surprising because as the number of words to predict becomes higher the problem will tend to become harder Also because we keep the dimensionality of the embedding space constant 64 dimensions it becomes increasingly hard to represent and differentiate larger vocabularies in the same hypervolume We believe this is a specially valuable indication for future experiments and for deciding the dimensionality of the final embeddings to distribute On the right side of Figure FIGREF28 we show how the number of training and validation examples affects the loss For a fixed INLINEFORM0 32768 we varied the amount of data used for training from 25 to 100 Three trends are apparent As we train with more data we obtain better validation losses This was expected The second trend is that by using less than 50 of the data available the model tends to overfit the data as indicated by the consistent increase in the validation loss after about 15 epochs check dashed lines in right side of Figure FIGREF28 This suggests that for the future we should not try any drastic reduction of the training data to save training time Finally when not overfitting the validation loss seems to stabilize after around 20 epochs We observed no phasetransition effects the model seems simple enough for not showing that type of behavior This indicates we have a practical way of safely deciding when to stop training the model Table TABREF30 presents results for the three different tests described in Section SECREF4 The first expected result is that the coverage metrics increase with the size of the vocabulary being embedded ie INLINEFORM0 Because the Word Equivalence test set was specifically created for evaluating Twitterbased embedding when embedding INLINEFORM1 32768 words we achieve almost 90 test coverage On the other hand for the Class Distinction test set which was created by doing the cross product of the test cases of each class in Class Membership test set we obtain very low coverage figures This indicates that it is not always possible to reuse previously compiled goldstandard data and that it will be important to compile goldstandard data directly from Twitter content if we want to perform a more precise evaluation The effect of varying the cosine similarity decision threshold from 070 to 080 for Class Membership test shows that the percentage of classified as correct test cases drops 
significantly However the drop is more accentuated when training with only a portion of the available data The differences of using two alternative thresholds values is even higher in the Word Equivalence test The Word Equivalence test in which we consider two words equivalent word if the cosine of the embedding vectors is higher than 095 revealed to be an extremely demanding test Nevertheless for INLINEFORM0 32768 the results are far superior and for a much larger coverage than for lower INLINEFORM1 The same happens with the Class Membership test On the other hand the Class Distinction test shows a different trend for larger values of INLINEFORM0 32768 but the coverage for other values of INLINEFORM1 is so low that becomes difficult to hypothesize about the reduced values of True Negatives TN percentage obtained for the largest INLINEFORM2 It would be necessary to confirm this behavior with even larger values of INLINEFORM3 One might hypothesize that the ability to distinguish between classes requires larger thresholds when INLINEFORM4 is large Also we can speculate about the need of increasing the number of dimensions to be able to encapsulate different semantic information for so many words Despite already providing interesting practical clues for our goal of trying to embed a larger vocabulary using more of the training data we have available these results also revealed that the intrinsic evaluation metrics we are using are overly sensitive to their corresponding cosine similarity thresholds This sensitivity poses serious challenges for further systematic exploration of word embedding architectures and their corresponding hyperparameters which was also observed in other recent works BIBREF15 By using these absolute thresholds as criteria for deciding similarity of words we create a dependency between the evaluation metrics and the geometry of the embedded data If we see the embedding data as a graph this means that metrics will change if we apply scaling operations to certain parts of the graph even if its structure ie relative position of the embedded words does not change For most practical purposes including training downstream ML models absolute distances have little meaning What is fundamental is that the resulting embeddings are able to capture topological information similar words should be closer to each other than they are to words that are dissimilar to them under the various criteria of similarity we care about independently of the absolute distances involved It is now clear that a key aspect for future work will be developing additional performance metrics based on topological properties We are in line with recent work BIBREF16 proposing to shift evaluation from absolute values to more exploratory evaluations focusing on weaknesses and strengths of the embeddings and not so much in generic scores For example one metric could consist in checking whether for any given word all words that are known to belong to the same class are closer than any words belonging to different classes independently of the actual cosine Future work will necessarily include developing this type of metrics Producing word embeddings from tweets is challenging due to the specificities of the vocabulary in the medium We implemented a neural word embedding model that embeds words based on ngram information extracted from a sample of the Portuguese Twitter stream and which can be seen as a flexible baseline for further experiments in the field Work reported in this paper is a preliminary study of trying 
to find parameters for training word embeddings from Twitter and adequate evaluation tests and goldstandard data Results show that using less than 50 of the available training examples for each vocabulary size might result in overfitting The resulting embeddings obtain an interesting performance on intrinsic evaluation tests when trained a vocabulary containing the 32768 most frequent words in a Twitter sample of relatively small size Nevertheless results exhibit a skewness in the cosine similarity scores that should be further explored in future work More specifically the Class Distinction test set revealed to be challenging and opens the door to evaluation of not only similarity between words but also dissimilarities between words of different semantic classes without using absolute score values Therefore a key area of future exploration has to do with better evaluation resources and metrics We made some initial effort in this front However we believe that developing new intrinsic tests agnostic to absolute values of metrics and concerned with topological aspects of the embedding space and expanding goldstandard data with cases tailored for usergenerated content is of fundamental importance for the progress of this line of work Furthermore we plan to make public available word embeddings trained from a large sample of 300M tweets collected from the Portuguese Twitter stream This will require experimenting producing embeddings with higher dimensionality to avoid the cosine skewness effect and training with even larger vocabularies Also there is room for experimenting with some of the hyperparameters of the model itself eg activation functions dimensions of the layers which we know have impact on final results </s>
<s>[INST] Procedural Reasoning Networks for Understanding Multimodal Procedures [/INST] A great deal of commonsense knowledge about the world we live is procedural in nature and involves steps that show ways to achieve specific goals Understanding and reasoning about procedural texts eg cooking recipes howto guides scientific processes are very hard for machines as it demands modeling the intrinsic dynamics of the procedures BIBREF0 BIBREF1 BIBREF2 That is one must be aware of the entities present in the text infer relations among them and even anticipate changes in the states of the entities after each action For example consider the cheeseburger recipe presented in Fig FIGREF2 The instruction salt and pepper each patty and cook for 2 to 3 minutes on the first side in Step 5 entails mixing three basic ingredients the ground beef salt and pepper together and then applying heat to the mix which in turn causes chemical changes that alter both the appearance and the taste From a natural language understanding perspective the main difficulty arises when a model sees the word patty again at a later stage of the recipe It still corresponds to the same entity but its form is totally different Over the past few years many new datasets and approaches have been proposed that address this inherently hard problem BIBREF0 BIBREF1 BIBREF3 BIBREF4 To mitigate the aforementioned challenges the existing works rely mostly on heavy supervision and focus on predicting the individual state changes of entities at each step Although these models can accurately learn to make local predictions they may lack global consistency BIBREF3 BIBREF4 not to mention that building such annotated corpora is very laborintensive In this work we take a different direction and explore the problem from a multimodal standpoint Our basic motivation as illustrated in Fig FIGREF2 is that accompanying images provide complementary cues about causal effects and state changes For instance it is quite easy to distinguish raw meat from cooked one in visual domain In particular we take advantage of recently proposed RecipeQA dataset BIBREF2 a dataset for multimodal comprehension of cooking recipes and ask whether it is possible to have a model which employs dynamic representations of entities in answering questions that require multimodal understanding of procedures To this end inspired from BIBREF5 we propose Procedural Reasoning Networks PRN that incorporates entities into the comprehension process and allows to keep track of entities understand their interactions and accordingly update their states across time We report that our proposed approach significantly improves upon previously published results on visual reasoning tasks in RecipeQA which test understanding causal and temporal relations from images and text We further show that the dynamic entity representations can capture semantics of the state information in the corresponding steps In our study we particularly focus on the visual reasoning tasks of RecipeQA namely visual cloze visual coherence and visual ordering tasks each of which examines a different reasoning skill We briefly describe these tasks below Visual Cloze In the visual cloze task the question is formed by a sequence of four images from consecutive steps of a recipe where one of them is replaced by a placeholder A model should select the correct one from a multiplechoice list of four answer candidates to fill in the missing piece In that regard the task inherently requires aligning visual and textual information and 
understanding temporal relationships between the cooking actions and the entities Visual Coherence The visual coherence task tests the ability to identify the image within a sequence of four images that is inconsistent with the text instructions of a cooking recipe To succeed in this task a model should have a clear understanding of the procedure described in the recipe and at the same time connect language and vision Visual Ordering The visual ordering task is about grasping the temporal flow of visual events with the help of the given recipe text The questions show a set of four images from the recipe and the task is to sort jumbled images into the correct order Here a model needs to infer the temporal relations between the images and align them with the recipe steps In the following we explain our Procedural Reasoning Networks model Its architecture is based on a bidirectional attention flow BiDAF model BIBREF6 but also equipped with an explicit reasoning module that acts on entityspecific relational memory units Fig FIGREF4 shows an overview of the network architecture It consists of five main modules An input module an attention module a reasoning module a modeling module and an output module Note that the question answering tasks we consider here are multimodal in that while the context is a procedural text the question and the multiple choice answers are composed of images Input Module extracts vector representations of inputs at different levels of granularity by using several different encoders Reasoning Module scans the procedural text and tracks the states of the entities and their relations through a recurrent relational memory core unit BIBREF5 Attention Module computes contextaware query vectors and queryaware context vectors as well as queryaware memory vectors Modeling Module employs two multilayered RNNs to encode previous layers outputs Output Module scores a candidate answer from the given multiplechoice list At a high level as the model is reading the cooking recipe it continually updates the internal memory representations of the entities ingredients based on the content of each step it keeps track of changes in the states of the entities providing an entitycentric summary of the recipe The response to a question and a possible answer depends on the representation of the recipe text as well as the last states of the entities All this happens in a series of implicit relational reasoning steps and there is no need for explicitly encoding the state in terms of a predefined vocabulary Let the triple mathbf Rmathbf Qmathbf A be a sample input Here mathbf R denotes the input recipe which contains textual instructions composed of N words in total mathbf Q represents the question that consists of a sequence of M images mathbf A denotes an answer that is either a single image or a series of L images depending on the reasoning task In particular for the visual cloze and the visual coherence type questions the answer contains a single image L1 and for the visual ordering task it includes a sequence We encode the input recipe mathbf R at character word and step levels Characterlevel embedding layer uses a convolutional neural network namely CharCNN model by BIBREF7 which outputs character level embeddings for each word and alleviates the issue of outofvocabulary OOV words In word embedding layer we use a pretrained GloVe model BIBREF8 and extract wordlevel embeddings The concatenation of the character and the word embeddings are then fed to a twolayer highway network BIBREF10 to 
obtain a contextual embedding for each word in the recipe This results in the matrix mathbf Rprime in mathbb R2d times N On top of these layers we have another layer that encodes the steps of the recipe in an individual manner Specifically we obtain a steplevel contextual embedding of the input recipe containing T steps as mathcal Smathbf s1mathbf s2dots mathbf sT where mathbf si represents the final state of a BiLSTM encoding the ith step of the recipe obtained from the character and wordlevel embeddings of the tokens exist in the corresponding step We represent both the question mathbf Q and the answer mathbf A in terms of visual embeddings Here we employ a pretrained ResNet50 model BIBREF11 trained on ImageNet dataset BIBREF12 and represent each image as a realvalued 2048d vector using features from the penultimate averagepool layer Then these embeddings are passed first to a multilayer perceptron MLP and then its outputs are fed to a BiLSTM We then form a matrix mathbf Qprime in mathbb R2d times M for the question by concatenating the cell states of the BiLSTM For the visual ordering task to represent the sequence of images in the answer with a single vector we additionally use a BiLSTM and define the answering embedding by the summation of the cell states of the BiLSTM Finally for all tasks these computations produce answer embeddings denoted by mathbf a in mathbb R2d times 1 As mentioned before comprehending a cooking recipe is mostly about entities basic ingredients and actions cooking activities described in the recipe instructions Each action leads to changes in the states of the entities which usually affects their visual characteristics A change rarely occurs in isolation in most cases the action affects multiple entities at once Hence in our reasoning module we have an explicit memory component implemented with relational memory units BIBREF5 This helps us to keep track of the entities their state changes and their relations in relation to each other over the course of the recipe see Fig FIGREF14 As we will examine in more detail in Section SECREF4 it also greatly improves the interpretability of model outputs Specifically we set up the memory with a memory matrix mathbf E in mathbb RdE times K by extracting K entities ingredients from the first step of the recipe We initialize each memory cell mathbf ei representing a specific entity by its CharCNN and pretrained GloVe embeddings From now on we will use the terms memory cells and entities interchangeably throughout the paper Since the input recipe is given in the form of a procedural text decomposed into a number of steps we update the memory cells after each step reflecting the state changes happened on the entities This update procedure is modelled via a relational recurrent neural network RRNN recently proposed by BIBREF5 It is built on a 2dimensional LSTM model whose matrix of cell states represent our memory matrix mathbf E Here each row i of the matrix mathbf E refers to a specific entity mathbf ei and is updated after each recipe step t as follows where mathbf st denotes the embedding of recipe step t and mathbf phi itmathbf hitmathbf eit is the cell state of the RRNN at step t with mathbf hit and mathbf eit being the ith row of the hidden state of the RRNN and the dynamic representation of entity mathbf ei at the step t respectively The RRNN model exploits a multiheaded selfattention mechanism BIBREF13 that allows memory cells to interact with each other and attend multiple locations simultaneously during the update 
phase In Fig FIGREF14 we illustrate how this interaction takes place in our relational memory module by considering a sample cooking recipe and by presenting how the attention matrix changes throughout the recipe In particular the attention matrix at a specific time shows the attention flow from one entity memory cell to another along with the attention weights to the corresponding recipe step offset column The color intensity shows the magnitude of the attention weights As can be seen from the figure the internal representations of the entities are actively updated at each step Moreover as argued in BIBREF5 this can be interpreted as a form of relational reasoning as each update on a specific memory cell is operated in relation to others Here we should note that it is often difficult to make sense of these attention weights However we observe that the attention matrix changes very gradually near the completion of the recipe Attention module is in charge of linking the question with the recipe text and the entities present in the recipe It takes the matrices mathbf Qprime and mathbf Rprime from the input module and mathbf E from the reasoning module and constructs the questionaware recipe representation mathbf G and the questionaware entity representation mathbf Y Following the attention flow mechanism described in BIBREF14 we specifically calculate attentions in four different directions 1 from question to recipe 2 from recipe to question 3 from question to entities and 4 from entities to question The first two of these attentions require computing a shared affinity matrix mathbf SR in mathbb RN times M with mathbf SRij indicating the similarity between ith recipe word and jth image in the question estimated by where mathbf wtop R is a trainable weight vector circ and denote elementwise multiplication and concatenation operations respectively Recipetoquestion attention determines the images within the question that is most relevant to each word of the recipe Let mathbf tildeQ in mathbb R2d times N represent the recipetoquestion attention matrix with its ith column being given by mathbf tildeQisum j mathbf aijmathbf Qprime j where the attention weight is computed by mathbf aioperatornamesoftmaxmathbf SRi in mathbb RM Questiontorecipe attention signifies the words within the recipe that have the closest similarity to each image in the question and construct an attended recipe vector given by tildemathbf rsum imathbf bimathbf Rprime i with the attention weight is calculated by mathbf boperatornamesoftmaxoperatornamemaxmathit colmathbf SR in mathbb RN where operatornamemaxmathit col denotes the maximum function across the column The questiontorecipe matrix is then obtained by replicating tildemathbf r N times across the column giving tildemathbf R in mathbb R2d times N Then we construct the question aware representation of the input recipe mathbf G with its ith column mathbf Gi in mathbb R8d times N denoting the final embedding of ith word given by Attentions from question to entities and from entities to question are computed in a way similar to the ones described above The only difference is that it uses a different shared affinity matrix to be computed between the memory encoding entities mathbf E and the question mathbf Qprime These attentions are then used to construct the question aware representation of entities denoted by mathbf Y that links and integrates the images in the question and the entities in the input recipe Modeling module takes the questionaware representations of the 
recipe mathbf G and the entities mathbf Y and forms their combined vector representation For this purpose we first use a twolayer BiLSTM to read the questionaware recipe mathbf G and to encode the interactions among the words conditioned on the question For each direction of BiLSTM we use its hidden state after reading the last token as its output In the end we obtain a vector embedding mathbf c in mathbb R2d times 1 Similarly we employ a second BiLSTM this time over the entities mathbf Y which results in another vector embedding mathbf f in mathbb R2dE times 1 Finally these vector representations are concatenated and then projected to a fixed size representation using mathbf ovarphi oleftmathbf c mathbf fright in mathbb R2d times 1 where varphi o is a multilayer perceptron with operatornametanh activation function The output module takes the output of the modeling module encoding vector embeddings of the questionaware recipe and the entities mathbf Y and the embedding of the answer mathbf A and returns a similarity score which is used while determining the correct answer Among all the candidate answer the one having the highest similarity score is chosen as the correct answer To train our proposed procedural reasoning network we employ a hinge ranking loss BIBREF15 similar to the one used in BIBREF2 given below where gamma is the margin parameter mathbf a and mathbf a are the correct and the incorrect answers respectively In this section we describe our experimental setup and then analyze the results of the proposed Procedural Reasoning Networks PRN model Given a recipe we automatically extract the entities from the initial step of a recipe by using a dictionary of ingredients While determining the ingredients we exploit Recipe1M BIBREF16 and Kaggle Whats Cooking Recipes BIBREF17 datasets and form our dictionary using the most commonly used ingredients in the training set of RecipeQA For the cases when no entity can be extracted from the recipe automatically 20 recipes in total we manually annotate those recipes with the related entities In our experiments we separately trained models on each task as well as we investigated multitask learning where a single model is trained to solve all these tasks at once In total the PRN architecture consists of sim 12M trainable parameters We implemented our models in PyTorch BIBREF18 using AllenNLP library BIBREF6 We used Adam optimizer with a learning rate of 1e4 with an early stopping criteria with the patience set to 10 indicating that the training procedure ends after 10 iterations if the performance would not improve We considered a batch size of 32 due to our hardware constraints In the multitask setting batches are sampled roundrobin from all tasks where each batch is solely composed of examples from one task We performed our experiments on a system containing four NVIDIA GTX1080Ti GPUs and training a single model took around 2 hours We employed the same hyperparameters for all the baseline systems We plan to share our code and model implementation after the review process We compare our model with several baseline models as described below We note that the results of the first two are previously reported in BIBREF2 Hasty Student BIBREF2 is a heuristicsbased simple model which ignores the recipe and gives an answer by examining only the question and the answer set using distances in the visual feature space Impatient Reader BIBREF19 is a simple neural model that takes its name from the fact that it repeatedly computes attention over the recipe 
after observing each image in the query BiDAF BIBREF14 is a strong reading comprehension model that employs a bidirectional attention flow mechanism to obtain a questionaware representation and bases its predictions on this representation Originally it is a spanselection model from the input context Here we adapt it to work in a multimodal setting and answer multiple choice questions instead BiDAF w static memory is an extended version of the BiDAF model which resembles our proposed PRN model in that it includes a memory unit for the entities However it does not make any updates on the memory cells That is it uses the static entity embeeddings initialized with GloVe word vectors We propose this baseline to test the significance of the use of relational memory updates Table TABREF29 presents the quantitative results for the visual reasoning tasks in RecipeQA In singletask training setting PRN gives stateoftheart results compared to other neural models Moreover it achieves the best performance on average These results demonstrate the importance of having a dynamic memory and keeping track of entities extracted from the recipe In multitask training setting where a single model is trained to solve all the tasks at once PRN and BIDAF w static memory perform comparably and give much better results than BIDAF Note that the model performances in the multitask training setting are worse than singletask performances We believe that this is due to the nature of the tasks that some are more difficult than the others We think that the performance could be improved by employing a carefully selected curriculum strategy BIBREF20 In Fig FIGREF28 we illustrate the entity embeddings space by projecting the learned embeddings from the stepbystep memory snapshots through time with tSNE to 3d space from 200d vector space Color codes denote the categories of the cooking recipes As can be seen these stepaware embeddings show clear clustering of these categories Moreover within each cluster the entities are grouped together in terms of their state characteristics For instance in the zoomed parts of the figure chopped and sliced or stirred and whisked entities are placed close to each other Fig FIGREF30 demonstrates the entity arithmetics using the learned embeddings from each entity step Here we show that the learned embedding from the memory snapshots can effectively capture the contextual information about the entities at each time point in the corresponding step while taking into account of the recipe data This basic arithmetic operation suggests that the proposed model can successfully capture the semantics of each entitys state in the corresponding step In recent years tracking entities and their state changes have been explored in the literature from a variety of perspectives In an early work BIBREF21 proposed a dynamic memory based network which updates entity states using a gating mechanism while reading the text BIBREF22 presented a more structured memory augmented model which employs memory slots for representing both entities and their relations BIBREF23 suggested a conceptually similar model in which the pairwise relations between attended memories are utilized to encode the world state The main difference between our approach and these works is that by utilizing relational memory core units we also allow memories to interact with each other during each update BIBREF24 showed that similar ideas can be used to compile supporting memories in tracking dialogue state BIBREF25 has shown the importance of 
coreference signals for reading comprehension task More recently BIBREF26 introduced a specialized recurrent layer which uses coreference annotations for improving reading comprehension tasks On language modeling task BIBREF27 proposed a language model which can explicitly incorporate entities while dynamically updating their representations for a variety of tasks such as language modeling coreference resolution and entity prediction Our work builds upon and contributes to the growing literature on tracking states changes in procedural text BIBREF0 presented a neural model that can learn to explicitly predict state changes of ingredients at different points in a cooking recipe BIBREF1 proposed another entityaware model to track entity states in scientific processes BIBREF3 demonstrated that the prediction quality can be boosted by including hard and soft constraints to eliminate unlikely or favor probable state changes In a followup work BIBREF4 exploited the notion of label consistency in training to enforce similar predictions in similar procedural contexts BIBREF28 proposed a model that dynamically constructs a knowledge graph while reading the procedural text to track the everchanging entities states As discussed in the introduction however these previous methods use a strong inductive bias and assume that state labels are present during training In our study we deliberately focus on unlabeled procedural data and ask the question Can multimodality help to identify and provide insights to understanding state changes We have presented a new neural architecture called Procedural Reasoning Networks PRN for multimodal understanding of stepbystep instructions Our proposed model is based on the successful BiDAF framework but also equipped with an explicit memory unit that provides an implicit mechanism to keep track of the changes in the states of the entities over the course of the procedure Our experimental analysis on visual reasoning tasks in the RecipeQA dataset shows that the model significantly improves the results of the previous models indicating that it better understands the procedural text and the accompanying images Additionally we carefully analyze our results and find that our approach learns meaningful dynamic representations of entities without any entitylevel supervision Although we achieve stateoftheart results on RecipeQA clearly there is still room for improvement compared to human performance We also believe that the PRN architecture will be of value to other visual and textual sequential reasoning tasks We thank the anonymous reviewers and area chairs for their invaluable feedback This work was supported by TUBA GEBIP fellowship awarded to E Erdem and by the MMVC project via an Institutional Links grant Project No 217E054 under the NewtonKatip elebi Fund partnership funded by the Scientific and Technological Research Council of Turkey TUBITAK and the British Council We also thank NVIDIA Corporation for the donation of GPUs used in this research </s>
<s>[INST] Active Learning for Chinese Word Segmentation in Medical Text [/INST] Electronic health records EHRs systematically collect patients clinical information such as health profiles histories of present illness past medical histories examination results and treatment plans BIBREF0 By analyzing EHRs many useful information closely related to patients can be discovered BIBREF1 Since Chinese EHRs are recorded without explicit word delimiters eg UTF8gkai diabetic ketoacidosis Chinese word segmentation CWS is a prerequisite for processing EHRs Currently stateoftheart CWS methods usually require large amounts of manuallylabeled data to reach their full potential However there are many challenges inherent in labeling EHRs First EHRs have many medical terminologies such as UTF8gkai hypertensive heart disease and UTF8gkai Rocephin so only annotators with medical backgrounds can be qualified to label EHRs Second EHRs may involve personal privacies of patients Therefore they cannot be openly published on a large scale for labeling The above two problems lead to the high annotation cost and insufficient training corpus in the research of CWS in medical text CWS was usually formulated as a sequence labeling task BIBREF2 which can be solved by supervised learning approaches such as hidden markov model HMM BIBREF3 and conditional random field CRF BIBREF4 However these methods rely heavily on handcrafted features To relieve the efforts of feature engineering neural networkbased methods are beginning to thrive BIBREF5 BIBREF6 BIBREF7 However due to insufficient annotated training data conventional models for CWS trained on open corpus often suffer from significant performance degradation when transferred to a domainspecific text Moreover the task in medical domain is rarely dabbled and only one related work on transfer learning is found in recent literatures BIBREF8 However researches related to transfer learning mostly remain in general domains causing a major problem that a considerable amount of manually annotated data is required when introducing the models into specific domains One of the solutions for this obstacle is to use active learning where only a small scale of samples are selected and labeled in an active manner Active learning methods are favored by the researchers in many natural language processing NLP tasks such as text classification BIBREF9 and named entity recognition NER BIBREF10 However only a handful of works are conducted on CWS BIBREF2 and few focuses on medical domain tasks Given the aforementioned challenges and current researches we propose a word segmentation method based on active learning To model the segmentation history we incorporate a sampling strategy consisting of word score link score and sequence score which effectively evaluates the segmentation decisions Specifically we combine information branch and gated neural network to determine if the segment is a legal word ie word score Meanwhile we use the hidden layer output of the long shortterm memory LSTM BIBREF11 to find out how the word is linked to its surroundings ie link score The final decision on the selection of labeling samples is made by calculating the average of word and link scores on the whole segmented sentence ie sequence score Besides to capture coherence over characters we additionally add Kmeans clustering features to the input of CRFbased word segmenter To sum up the main contributions of our work are summarized as follows The rest of this paper is organized as follows Section SECREF2 briefly 
reviews the related work on CWS and active learning Section SECREF3 presents an active learning method for CWS We experimentally evaluate our proposed method in Section SECREF4 Finally Section SECREF5 concludes the paper and envisions on future work In past decades researches on CWS have a long history and various methods have been proposed BIBREF13 BIBREF14 BIBREF15 which is an important task for Chinese NLP BIBREF7 These methods are mainly focus on two categories supervised learning and deep learning BIBREF2 Supervised Learning Methods Initially supervised learning methods were widelyused in CWS Xue BIBREF13 employed a maximum entropy tagger to automatically assign Chinese characters Zhao et al BIBREF16 used a conditional random field for tag decoding and considered both feature template selection and tag set selection However these methods greatly rely on manual feature engineering BIBREF17 while handcrafted features are difficult to design and the size of these features is usually very large BIBREF6 Deep Learning Methods Recently neural networks have been applied in CWS tasks To name a few Zheng et al BIBREF14 used deep layers of neural networks to learn feature representations of characters Chen et al BIBREF6 adopted LSTM to capture the previous important information Chen et al BIBREF18 proposed a gated recursive neural network GRNN which contains reset and update gates to incorporate the complicated combinations of characters Jiang and Tang BIBREF19 proposed a sequencetosequence transformer model to avoid overfitting and capture character information at the distant site of a sentence Yang et al BIBREF20 investigated subword information for CWS and integrated subword embeddings into a Lattice LSTM LaLSTM network However general word segmentation models do not work well in specific field due to lack of annotated training data Currently a handful of domainspecific CWS approaches have been studied but they focused on decentralized domains In the metallurgical field Shao et al BIBREF15 proposed a domainspecific CWS method based on BiLSTM model In the medical field Xing et al BIBREF8 proposed an adaptive multitask transfer learning framework to fully leverage domaininvariant knowledge from high resource domain to medical domain Meanwhile transfer learning still greatly focuses on the corpus in general domain When it comes to the specific domain large amounts of manuallyannotated data is necessary Active learning can solve this problem to a certain extent However due to the challenges faced by performing active learning on CWS only a few studies have been conducted On judgements Yan et al BIBREF21 adopted the local annotation strategy which selects substrings around the informative characters in active learning However their method still stays at the statistical level Unlike the above method we propose an active learning approach for CWS in medical text which combines information entropy with neural network to effectively reduce annotation cost Active learning BIBREF22 mainly aims to ease the data collection process by automatically deciding which instances should be labeled by annotators to train a model as quickly and effectively as possible BIBREF23 The sampling strategy plays a key role in active learning In the past decade the rapid development of active learning has resulted in various sampling strategies such as uncertainty sampling BIBREF24 querybycommittee BIBREF25 and information gain BIBREF26 Currently the most mainstream sampling strategy is uncertainty sampling It focuses its 
selection on samples closest to the decision boundary of the classifier and then chooses these samples for annotators to relabel BIBREF27 The formal definition of uncertainty sampling is to select a sample INLINEFORM0 that maximizes the entropy INLINEFORM1 over the probability of predicted classes DISPLAYFORM0 where INLINEFORM0 is a multidimensional feature vector INLINEFORM1 is its binary label and INLINEFORM2 is the predicted probability through which a classifier trained on the training set can map features to labels However in some complicated tasks such as CWS and NER only considering the uncertainty of the classifier is obviously not enough Active learning methods can generally be described in terms of two parts a learning engine and a selection engine BIBREF28 The learning engine is essentially a classifier which is mainly used for training on the classification problem The selection engine is based on the sampling strategy which chooses samples that need to be relabeled by annotators from the unlabeled data Then the relabeled samples are added to the training set for the classifier to retrain on thus continuously improving the accuracy of the classifier In this paper a CRFbased segmenter and a scoring model are employed as the learning engine and the selection engine respectively Fig FIGREF7 and Algorithm SECREF3 demonstrate the procedure of CWS based on active learning First we train a CRFbased segmenter on the train set Then the segmenter is employed to annotate the unlabeled set roughly Subsequently the information entropy based scoring model picks the INLINEFORM0 lowest ranking samples for annotators to relabel Meanwhile the train set and the unlabeled set are updated Finally we retrain the segmenter The above steps iterate until the desired accuracy is achieved or the number of iterations has reached a predefined threshold Algorithm SECREF3 Active Learning for Chinese Word Segmentation takes as input the labeled data the unlabeled data the number of iterations the number of samples selected per iteration and a partitioning function and outputs the word segmentation model with the smallest test set loss in each iteration it trains a word segmenter on the current labeled data estimates its test set loss labels the unlabeled data computes the score of each segmented sample with the branch information entropy based scoring model selects the lowest ranking samples has annotators relabel them forms the new labeled and unlabeled datasets retrains the segmenter estimates the new test loss and the loss reduction and finally returns the segmenter with the smallest test set loss CWS can be formalized as a sequence labeling problem with character position tags which are B M E S So we convert the labeled data into the BMES format in which each character in the sequence is assigned a label one by one as follows B beginning of a word M middle of a word E end of a word and S single word In this paper we use CRF as the training model for the CWS task Given the observed sequence CRF has a single exponential model for the joint probability of the entire sequence of labels while the maximum entropy Markov model MEMM BIBREF29 uses perstate exponential models for the conditional probabilities of next states BIBREF4 Therefore CRF can solve the label bias problem effectively Compared with neural networks it has less dependency on the corpus size
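A compact sketch of this loop is given below. The five callables passed in are placeholders for the components described in this section and the next (the CRF learning engine, test-set loss estimation, rough annotation of unlabeled sentences, the entropy-based scoring model, and human relabeling), so this illustrates the control flow rather than the authors' implementation.

```python
# Hypothetical sketch of the active learning procedure of Fig FIGREF7 / Algorithm SECREF3.
# train_crf, estimate_loss, segment, sequence_score and manual_relabel stand in for the
# CRF learning engine, test-set loss estimation, rough annotation, the information-entropy
# based scoring model, and human annotation, respectively.
def active_learning(labeled, unlabeled, test_set, iterations, n_per_iter,
                    train_crf, estimate_loss, segment, sequence_score, manual_relabel):
    best_model, best_loss = None, float("inf")
    for _ in range(iterations):
        model = train_crf(labeled)                        # learning engine
        loss = estimate_loss(model, test_set)
        if loss < best_loss:                              # keep the segmenter with the smallest loss
            best_model, best_loss = model, loss
        drafts = [(sent, segment(model, sent)) for sent in unlabeled]
        drafts.sort(key=lambda pair: sequence_score(pair[1]))   # selection engine
        selected = [sent for sent, _ in drafts[:n_per_iter]]    # lowest-ranking samples
        labeled = labeled + manual_relabel(selected)            # annotators correct the labels
        selected_set = set(selected)
        unlabeled = [sent for sent in unlabeled if sent not in selected_set]
    return best_model
```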
First we preprocess EHRs at the character level separating each character of the raw EHRs For instance given a sentence INLINEFORM0 where INLINEFORM1 represents the INLINEFORM2 th character the separated form is INLINEFORM3 Then we employ Word2Vec BIBREF30 to train on the preprocessed EHRs and obtain character embeddings To capture interactions between adjacent characters the Kmeans clustering algorithm BIBREF31 is utilized to capture the coherence over characters In general Kmeans divides INLINEFORM4 EHR characters into INLINEFORM5 clusters such that the similarity of EHR characters within the same cluster is higher With each iteration Kmeans assigns each EHR character to the nearest cluster based on the distance to the mean vector and then recalculates and adjusts the mean vectors of these clusters until the mean vectors converge Kmeans features explicitly show the difference between two adjacent characters and even among multiple characters Finally we additionally add the Kmeans clustering features to the input of the CRFbased segmenter The segmenter makes positional tagging decisions over individual characters For example a Chinese segmented sentence UTF8gkai The patient was hospitalized for a long time in the nephrology department of our hospital is labeled as BEBESBEBMEBEBES To select the most appropriate sentences from a large number of unlabeled sentences we propose a scoring model based on information entropy and neural networks as the sampling strategy of active learning which is inspired by Cai and Zhao BIBREF32 The score of a segmented sentence is computed as follows First the segmented sentence is mapped to a sequence of candidate word embeddings Then the scoring model takes the word embedding sequence as input scoring each individual candidate word from two perspectives 1 the possibility that the candidate word itself can be regarded as a legal word and 2 the rationality of the link by which the candidate word follows the previous segmentation history Fig FIGREF10 illustrates the entire scoring model A gated neural network is employed over character embeddings to generate distributed representations of candidate words which are then sent to an LSTM model We use a gated neural network and information entropy to capture the likelihood of a segment being a legal word The architecture of the word score model is depicted in Fig FIGREF12 Gated Combination Neural Network GCNN To effectively learn word representations from character embeddings we use a GCNN BIBREF32 The architecture of the GCNN is demonstrated in Fig FIGREF13 which includes update gates and reset gates The gated mechanism not only captures the characteristics of the characters themselves but also utilizes the interactions between the characters There are two types of gates in this network structure reset gates and update gates These two gate vectors determine the final output of the gated neural network where the update gate helps the model determine what should be passed on and the reset gate primarily helps the model decide what should be cleared In particular the word embedding of a word with INLINEFORM0 characters can be computed as DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are update gates for the new combination vector INLINEFORM2 and the ith character INLINEFORM3 respectively and the combination vector INLINEFORM4 is formalized as DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are reset gates for the characters
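Because the gating equations are only summarized above, the following PyTorch sketch shows one plausible gated combination of two character vectors in the spirit of the GCNN; the exact parameterization used in the paper may differ, so the layer shapes and the softmax normalization over the three update gates should be read as assumptions.

```python
# Hypothetical sketch of a gated combination unit in the spirit of the GCNN described above.
import torch
import torch.nn as nn

class GatedCombination(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.reset = nn.Linear(2 * dim, 2 * dim)    # produces the two reset gates
        self.compose = nn.Linear(2 * dim, dim)      # builds the new combination vector
        self.update = nn.Linear(2 * dim, 3 * dim)   # update gates for c_hat, left and right inputs

    def forward(self, left: torch.Tensor, right: torch.Tensor) -> torch.Tensor:
        pair = torch.cat([left, right], dim=-1)
        r_left, r_right = torch.sigmoid(self.reset(pair)).chunk(2, dim=-1)
        # New combination vector computed from the reset-gated character vectors.
        c_hat = torch.tanh(self.compose(torch.cat([r_left * left, r_right * right], dim=-1)))
        # Update gates decide how much of the new vector and of each input is passed on.
        z = torch.softmax(self.update(pair).view(*pair.shape[:-1], 3, -1), dim=-2)
        return z[..., 0, :] * c_hat + z[..., 1, :] * left + z[..., 2, :] * right
```

For a candidate word with more than two characters, the same unit can be applied repeatedly, combining the running representation with the next character embedding.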
The branch information entropy is commonly used to judge how tightly the characters inside a string are linked, based on the statistical characteristics of the string, and thus reflects the likelihood of the string being a word. The left and right branch information entropy can be formalized as
$$H_l(w_i) = -\sum_{c \in \mathcal{C}} p_l(c \mid w_i) \log p_l(c \mid w_i), \qquad H_r(w_i) = -\sum_{c \in \mathcal{C}} p_r(c \mid w_i) \log p_r(c \mid w_i),$$
where $w_i$ denotes the $i$-th candidate word, $\mathcal{C}$ denotes the character set, $p_l(c \mid w_i)$ denotes the probability that character $c$ appears immediately to the left of word $w_i$, $p_r(c \mid w_i)$ denotes the probability that character $c$ appears immediately to the right of $w_i$, and $H_l$ and $H_r$ are the left and right branch information entropy of the candidate word $w_i$, respectively. If both the left and right branch information entropy of a candidate word are relatively high, the probability that the candidate word combines with its surrounding characters to form a longer word is low, and thus the candidate word is likely to be a legal word by itself. To judge whether the candidate words in a segmented sentence are legal words, we compute the left and right entropy of each candidate word and take their average as the measurement standard:
$$H(w_i) = \tfrac{1}{2}\big(H_l(w_i) + H_r(w_i)\big).$$
We represent a segmented sentence with $m$ candidate words as $(w_1, w_2, \ldots, w_m)$, and the word score of the $i$-th candidate word is its average entropy $H(w_i)$. In this paper we use an LSTM to capture the coherence between words in a segmented sentence. The LSTM is mainly an improvement over the traditional RNN. RNNs are widely used for time-series prediction: the state of the current hidden layer is determined by the input of the current step and the output of the previous hidden layer BIBREF33, so an RNN can remember historical results. However, traditional RNNs suffer from vanishing and exploding gradients when trained on long sequences BIBREF34. By adding a gated mechanism to the RNN, the LSTM effectively alleviates these problems, which motivates us to compute the link score with an LSTM. Formally, the LSTM unit performs the following operations at time step $t$:
$$i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i), \quad f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f), \quad o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o),$$
$$\tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c), \quad c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \quad h_t = o_t \odot \tanh(c_t),$$
where $x_t$, $h_{t-1}$, and $c_{t-1}$ are the inputs of the LSTM, the matrices $W_*$ and $U_*$ and the bias vectors $b_*$ are parameters to be trained, and $\odot$ and $\sigma$ denote elementwise multiplication and the sigmoid function, respectively. The LSTM unit maintains two hidden states, $h_t$ and $c_t$: $c_t$ is the internal memory cell that counteracts the vanishing gradient, while $h_t$ is the main output of the unit used for complex operations in subsequent layers. Denoting by $x_t$ the word embedding at time step $t$, a prediction $p_{t+1}$ of the next word embedding $x_{t+1}$ is computed from the hidden state $h_t$ through a trained transformation, and the link score of the next word embedding is then the inner product of this prediction and the actual embedding, $p_{t+1} \cdot x_{t+1}$. Because of the structure of the LSTM, the vector $h_t$ carries information about the entire segmentation history, so the link score reflects sequence-level rather than merely word-level segmentation decisions. Intuitively, we could compute the score of a segmented sequence by summing up its word scores and link scores; however, we find that a sequence with more candidate words then tends to receive a higher score. Therefore, to alleviate the impact of the number of candidate words on sequence scores, we calculate
final scores as follows DISPLAYFORM0 where INLINEFORM0 denotes the INLINEFORM1 th segmented sequence with INLINEFORM2 candidate words and INLINEFORM3 represents the INLINEFORM4 th candidate words in the segmented sequence When training the model we seek to minimize the sequence score of the corrected segmented sentence and the predicted segmented sentence DISPLAYFORM0 where INLINEFORM0 is the loss function We collect 204 EHRs with cardiovascular diseases from the Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine and each contains 27 types of records We choose 4 different types with a total of 3868 records from them which are first course reports medical records chief ward round records and discharge records The detailed information of EHRs are listed in Table TABREF32 We split our datasets as follows First we randomly select 3200 records from 3868 records as unlabeled set Then we manually annotate remaining 668 records as labeled set which contains 1170 sentences Finally we divide labeled set into train set and test set with the ratio of 73 randomly Statistics of datasets are listed in Table TABREF33 To determine suitable parameters we divide training set into two sets the first 80 sentences as training set and the rest 20 sentences as validation set Character embedding dimensions and Kmeans clusters are two main parameters in the CRFbased word segmenter In this paper we choose characterbased CRF without any features as baseline First we use Word2Vec to train character embeddings with dimensions of 50 100 150 200 300 400 respectively thus we obtain 6 different dimensional character embeddings Second these six types of character embeddings are used as the input to Kmeans algorithm with the number of clusters 50 100 200 300 400 500 600 respectively to capture the corresponding features of character embeddings Then we add Kmeans clustering features to baseline for training As can be seen from Fig FIGREF36 when the character embedding dimension INLINEFORM0 150 and the number of clusters INLINEFORM1 400 CRFbased word segmenter performs best so these two parameters are used in subsequent experiments Hyperparameters of neural network have a great impact on the performance The hyperparameters we choose are listed in Table TABREF38 The dimension of character embeddings is set as same as the parameter used in CRFbased word segmenter and the number of hidden units is also set to be the same as it Maximum word length is ralated to the number of parameters in GCNN unit Since there are many long medical terminologies in EHRs we set the maximum word length as 6 In addition dropout is an effective way to prevent neural networks from overfitting BIBREF35 To avoid overfitting we drop the input layer of the scoring model with the rate of 20 Our work experimentally compares two mainstream CWS tools LTP and Jieba on training and testing sets These two tools are widely used and recognized due to their high INLINEFORM0 score of word segmentation in general fields However in specific fields there are many terminologies and uncommon words which lead to the unsatisfactory performance of segmentation results To solve the problem of word segmentation in specific fields these two tools provide a custom dictionary for users In the experiments we also conduct a comparative experiment on whether external domain dictionary has an effect on the experimental results We manually construct the dictionary when labeling EHRs From the results in Table TABREF41 we find that Jieba benefits a lot 
from the external dictionary However the Recall of LTP decreases when joining the domain dictionary Generally speaking since these two tools are trained by general domain corpus the results are not ideal enough to cater to the needs of subsequent NLP of EHRs when applied to specific fields To investigate the effectiveness of Kmeans features in CRFbased segmenter we also compare Kmeans with 3 different clustering features including MeanShift BIBREF36 SpectralClustering BIBREF37 and DBSCAN BIBREF38 on training and testing sets From the results in Table TABREF43 by adding additional clustering features in CRFbased segmenter there is a significant improvement of INLINEFORM0 score which indicates that clustering features can effectively capture the semantic coherence between characters Among these clustering features Kmeans performs best so we utlize Kmeans results as additional features for CRFbased segmenter In this experiment since uncertainty sampling is the most popular strategy in real applications for its simpleness and effectiveness BIBREF27 we compare our proposed strategy with uncertainty sampling in active learning We conduct our experiments as follows First we employ CRFbased segmenter to annotate the unlabeled set Then sampling strategy in active learning selects a part of samples for annotators to relabel Finally the relabeled samples are added to train set for segmenter to retrain Our proposed scoring strategy selects samples according to the sequence scores of the segmented sentences while uncertainty sampling suggests relabeling samples that are closest to the segmenters decision boundary Generally two main parameters in active learning are the numbers of iterations and samples selected per iteration To fairly investigate the influence of two parameters we compare our proposed strategy with uncertainty sampling on the same parameter We find that though the number of iterations is large enough it has a limited impact on the performance of segmenter Therefore we choose 30 as the number of iterations which is a good tradeoff between speed and performance As for the number of samples selected per iteration there are 6078 sentences in unlabeled set considering the high cost of relabeling we set four sizes of samples selected per iteration which are 2 5 8 and 11 The experimental results of two sampling strategies with 30 iterations on four different proportions of relabeled data are shown in Fig FIGREF45 where xaxis represents the number of iterations and yaxis denotes the INLINEFORM0 score of the segmenter Scoring strategy shows consistent improvements over uncertainty sampling in the early iterations indicating that scoring strategy is more capable of selecting representative samples Furthermore we also investigate the relations between the best INLINEFORM0 score and corresponding number of iteration on two sampling strategies which is depicted in Fig FIGREF46 It is observed that in our proposed scoring model with the proportion of relabeled data increasing the iteration number of reaching the optimal word segmentation result is decreasing but the INLINEFORM0 score of CRFbased word segmenter is also gradually decreasing When the proportion is 2 the segmenter reaches the highest INLINEFORM1 score 9062 Obviously our proposed strategy outperforms uncertainty sampling by a large margin Our proposed method needs only 2 relabeled samples to obtain INLINEFORM2 score of 9062 while uncertainty sampling requires 8 samples to reach its best INLINEFORM3 score of 8898 which indicates that with 
our proposed method we only need to manually relabel a small number of samples to achieve a desired segmentation result To relieve the efforts of EHRs annotation we propose an effective word segmentation method based on active learning in which the sampling strategy is a scoring model combining information entropy with neural network Compared with the mainstream uncertainty sampling our strategy selects samples from statistical perspective and deep learning level In addition to capture coherence between characters we add Kmeans clustering features to CRFbased word segmenter Based on EHRs collected from the Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine we evaluate our method on CWS task Compared with uncertainty sampling our method requires 6 less relabeled samples to achieve better performance which proves that our method can save the cost of manual annotation to a certain extent In future we plan to employ other widelyused deep neural networks such as convolutional neural network and attention mechanism in the research of EHRs segmentation Then we believe that our method can be applied to other tasks as well so we will fully investigate the application of our method in other tasks such as NER and relation extraction The authors would like to appreciate any suggestions or comments from the anonymous reviewers This work was supported by the National Natural Science Foundation of China No 61772201 and the National Key RD Program of China for Precision medical research No 2018YFC0910550 </s>
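To make the scoring model of the preceding section concrete, here is a minimal Python sketch of the branch-entropy word score, the length-normalized sequence score, and one plausible margin-style reading of the training objective (the exact loss is not recoverable from the text). The neighbour-count inputs and link_scores are assumed to be supplied by the counting and LSTM components described above.

```python
import math

def branch_entropy(neighbour_counts):
    """Entropy of the characters observed on one side of a candidate word;
    neighbour_counts maps character -> frequency on that side."""
    total = sum(neighbour_counts.values())
    if total == 0:
        return 0.0
    return -sum((n / total) * math.log(n / total)
                for n in neighbour_counts.values())

def word_score(left_counts, right_counts):
    # average of the left and right branch information entropies
    return (branch_entropy(left_counts) + branch_entropy(right_counts)) / 2.0

def sequence_score(word_scores, link_scores):
    """Sentence score normalised by the number of candidate words, so longer
    segmentations are not automatically favoured."""
    if not word_scores:
        return 0.0
    return sum(w + l for w, l in zip(word_scores, link_scores)) / len(word_scores)

def margin_loss(gold_score, predicted_score, margin=1.0):
    # Assumed, illustrative form of the objective: push the gold segmentation's
    # score above the predicted segmentation's score by a margin.
    return max(0.0, margin - (gold_score - predicted_score))
```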
<s>[INST] InScript: Narrative texts annotated with script information [/INST] A script is a standardized sequence of events that describes some stereotypical human activity such as going to a restaurant or visiting a doctor BIBREF0 Script events describe an actionactivity along with the involved participants For example in the script describing a visit to a restaurant typical events are entering the restaurant ordering food or eating Participants in this scenario can include animate objects like the waiter and the customer as well as inanimate objects such as cutlery or food Script knowledge has been shown to play an important role in text understanding cullingford1978script miikkulainen1995script mueller2004understanding Chambers2008 Chambers2009 modi2014inducing rudinger2015learning It guides the expectation of the reader supports coreference resolution as well as commonsense knowledge inference and enables the appropriate embedding of the current sentence into the larger context Figure 1 shows the first few sentences of a story describing the scenario taking a bath Once the taking a bath scenario is evoked by the noun phrase NP a bath the reader can effortlessly interpret the definite NP the faucet as an implicitly present standard participant of the taking a bath script Although in this story entering the bath room turning on the water and filling the tub are explicitly mentioned a reader could nevertheless have inferred the turning on the water event even if it was not explicitly mentioned in the text Table 1 gives an example of typical events and participants for the script describing the scenario taking a bath A systematic study of the influence of script knowledge in texts is far from trivial Typically text documents eg narrative texts describing various scenarios evoke many different scripts making it difficult to study the effect of a single script Efforts have been made to collect scenariospecific script knowledge via crowdsourcing for example the OMICS and SMILE corpora singh2002open Regneri2010 Regneri2013 but these corpora describe script events in a pointwise telegram style rather than in full texts This paper presents the InScript corpus Narrative Texts Instantiating Script structure It is a corpus of simple narrative texts in the form of stories wherein each story is centered around a specific scenario The stories have been collected via Amazon Mechanical Turk MTurk In this experiment turkers were asked to write down a concrete experience about a bus ride a grocery shopping event etc We concentrated on 10 scenarios and collected 100 stories per scenario giving a total of 1000 stories with about 200000 words Relevant verbs and noun phrases in all stories are annotated with event types and participant types respectively Additionally the texts have been annotated with coreference information in order to facilitate the study of the interdependence between script structure and coreference The InScript corpus is a unique resource that provides a basis for studying various aspects of the role of script knowledge in language processing by humans The acquisition of this corpus is part of a larger research effort that aims at using script knowledge to model the surprisal and information density in written text Besides InScript this project also released a corpus of generic descriptions of script activities called DeScript for Describing Script Structure Wanzare2016 DeScript contains a range of short and textually simple phrases that describe script events in the style of OMICS or 
SMILE singh2002open Regneri2010 These generic telegramstyle descriptions are called Event Descriptions EDs a sequence of such descriptions that cover a complete script is called an Event Sequence Description ESD Figure 2 shows an excerpt of a script in the baking a cake scenario The figure shows event descriptions for 3 different events in the DeScript corpus left and fragments of a story in the InScript corpus right that instantiate the same event type We selected 10 scenarios from different available scenario lists eg Regneri2010 VanDerMeer2009 and the OMICS corpus BIBREF1 including scripts of different complexity Taking a bath vs Flying in an airplane and specificity Riding a public bus vs Repairing a flat bicycle tire For the full scenario list see Table 2 Texts were collected via the Amazon Mechanical Turk platform which provides an opportunity to present an online task to humans aka turkers In order to gauge the effect of different MTurk instructions on our task we first conducted pilot experiments with different variants of instructions explaining the task We finalized the instructions for the full data collection asking the turkers to describe a scenario in form of a story as if explaining it to a child and to use a minimum of 150 words The selected instruction variant resulted in comparably simple and explicit scenariorelated stories In the future we plan to collect more complex stories using different instructions In total 190 turkers participated All turkers were living in the USA and native speakers of English We paid USD 050 per story to each turker On average the turkers took 937 minutes per story with a maximum duration of 1738 minutes Statistics for the corpus are given in Table 2 On average each story has a length of 12 sentences and 217 words with 98 word types on average Stories are coherent and concentrate mainly on the corresponding scenario Neglecting auxiliaries modals and copulas on average each story has 32 verbs out of which 58 denote events related to the respective scenario As can be seen in Table 2 there is some variation in stories across scenarios The flying in an airplane scenario for example is most complex in terms of the number of sentences tokens and word types that are used This is probably due to the inherent complexity of the scenario Taking a flight for example is more complicated and takes more steps than taking a bath The average count of sentences tokens and types is also very high for the baking a cake scenario Stories from the scenario often resemble cake recipes which usually contain very detailed steps so people tend to give more detailed descriptions in the stories For both flying in an airplane and baking a cake the standard deviation is higher in comparison to other scenarios This indicates that different turkers described the scenario with a varying degree of detail and can also be seen as an indicator for the complexity of both scenarios In general different people tend to describe situations subjectively with a varying degree of detail In contrast texts from the taking a bath and planting a tree scenarios contain a relatively smaller number of sentences and fewer word types and tokens Both planting a tree and taking a bath are simpler activities which results in generally less complex texts The average pairwise word type overlap can be seen as a measure of lexical variety among stories If it is high the stories resemble each other more We can see that stories in the flying in an airplane and baking a cake scenarios have the highest 
values here indicating that most turkers used a similar vocabulary in their stories In general the response quality was good We had to discard 9 of the stories as these lacked the quality we were expecting In total we selected 910 stories for annotation This section deals with the annotation of the data We first describe the final annotation schema Then we describe the iterative process of corpus annotation and the refinement of the schema This refinement was necessary due to the complexity of the annotation For each of the scenarios we designed a specific annotation template A script template consists of scenariospecific event and participant labels An example of a template is shown in Table 1 All NP heads in the corpus were annotated with a participant label all verbs were annotated with an event label For both participants and events we also offered the label unclear if the annotator could not assign another label We additionally annotated coreference chains between NPs Thus the process resulted in three layers of annotation event types participant types and coreference annotation These are described in detail below As a first layer we annotated event types There are two kinds of event type labels scenariospecific event type labels and general labels The general labels are used across every scenario and mark general features for example whether an event belongs to the scenario at all For the scenariospecific labels we designed an unique template for every scenario with a list of scriptrelevant event types that were used as labels Such labels include for example ScrEvclosedrain in taking a bath as in Example UID10 see Figure 1 for a complete list for the taking a bath scenario I start by closing textsc scriptsize ScrEvclosedrain the drain at the bottom of the tub The general labels that were used in addition to the scriptspecific labels in every scenario are listed below ScrEvother An event that belongs to the scenario but its event type occurs too infrequently for details see below Section Modification of the Schema We used the label other because event classification would become too finegrained otherwise Example After I am dried I put my new clothes on and clean up textsc scriptsize ScrEvother the bathroom RelNScrEv Related nonscript event An event that can plausibly happen during the execution of the script and is related to it but that is not part of the script Example After finding on what I wanted to wear I went into the bathroom and shut textsc scriptsize RelNScrEv the door UnrelEv An event that is unrelated to the script Example I sank into the bubbles and took textsc scriptsize UnrelEv a deep breath Additionally the annotators were asked to annotate verbs and phrases that evoke the script without explicitly referring to a script event with the label Evoking as shown in Example UID10 Today I took a bath textsc scriptsize Evoking in my new apartment As in the case of the event type labels there are two kinds of participant labels general labels and scenariospecific labels The latter are part of the scenariospecific templates eg ScrPartdrain in the taking a bath scenario as can be seen in Example UID15 I start by closing the drain textsc scriptsize ScrPartdrain at the bottom of the tub The general labels that are used across all scenarios mark noun phrases with scenarioindependent features There are the following general labels ScrPartother A participant that belongs to the scenario but its participant type occurs only infrequently Example I find my bath mat textsc scriptsize 
ScrPartother and lay it on the floor to keep the floor dry NPart Nonparticipant A referential NP that does not belong to the scenario Example I washed myself carefully because I did not want to spill water onto the floor textsc scriptsize NPart labeled SuppVComp A support verb complement For further discussion of this label see Section Special Cases Example I sank into the bubbles and took a deep breath textsc scriptsize SuppVComp HeadofPartitive The head of a partitive or a partitivelike construction For a further discussion of this label cf Section Special Cases Example I grabbed a bar textsc scriptsize HeadofPartitive of soap and lathered my body Nolabel A nonreferential noun phrase that cannot be labeled with another label Example I sat for a moment textsc scriptsize Nolabel relaxing allowing the warm water to sooth my skin All NPs labeled with one of the labels SuppVComp HeadofPartitive or Nolabel are considered to be nonreferential Nolabel is used mainly in four cases in our data nonreferential time expressions in a while a million times better idioms no matter what the nonreferential it it felt amazing it is better and other abstracta a lot better a little bit In the first annotation phase annotators were asked to mark verbs and noun phrases that have an event or participant type that is not listed in the template as MissScrEv MissScrPart missing script event or participant resp These annotations were used as a basis for extending the templates see Section Modification of the Schema and replaced later by newly introduced labels or ScrEvother and ScrPartother respectively All noun phrases were annotated with coreference information indicating which entities denote the same discourse referent The annotation was done by linking heads of NPs see Example UID21 where the links are indicated by coindexing As a rule we assume that each element of a coreference chain is marked with the same participant type label I textsc scriptsize Coref1 washed my textsc scriptsize Coref1 entire body textsc scriptsize Coref2 starting with my textsc scriptsize Coref1 face textsc scriptsize Coref3 and ending with the toes textsc scriptsize Coref4 I textsc scriptsize Coref1 always wash my textsc scriptsize Coref1 toes textsc scriptsize Coref4 very thoroughly The assignment of an entity to a referent is not always trivial as is shown in Example UID21 There are some cases in which two discourse referents are grouped in a plural NP In the example those things refers to the group made up of shampoo soap and sponge In this case we asked annotators to introduce a new coreference label the name of which indicates which referents are grouped together Corefgroupwashingtools All NPs are then connected to the group phrase resulting in an additional coreference chain I textsc scriptsize Coref1 made sure that I textsc scriptsize Coref1 have my textsc scriptsize Coref1 shampoo textsc scriptsize Coref2 Corefgroupwashingtools soap textsc scriptsize Coref3 Corefgroupwashingtools and sponge textsc scriptsize Coref4 Corefgroupwashingtools ready to get in Once I textsc scriptsize Coref1 have those things textsc scriptsize Corefgroupwashingtools I textsc scriptsize Coref1 sink into the bath I textsc scriptsize Coref1 applied some soap textsc scriptsize Coref10 on my textsc scriptsize Coref11 body and used the sponge textsc scriptsize Coref12 to scrub a bit I textsc scriptsize Coref13 rinsed the shampoo textsc scriptsize Coref14 Example UID21 thus contains the following coreference chains Coref1 I textsc scriptsize Coref15 I 
textsc scriptsize Coref16 my textsc scriptsize Coref17 I textsc scriptsize Coref18 I textsc scriptsize Coref19 I textsc scriptsize Coref10 my textsc scriptsize Coref11 I Coref2 shampoo rightarrow shampoo Coref3 soap rightarrow soap Coref4 sponge rightarrow sponge Corefgroupwashing tools shampoo rightarrow soap rightarrow sponge rightarrow things The templates were carefully designed in an iterated process For each scenario one of the authors of this paper provided a preliminary version of the template based on the inspection of some of the stories For a subset of the scenarios preliminary templates developed at our department for a psycholinguistic experiment on script knowledge were used as a starting point Subsequently the authors manually annotated 5 randomly selected texts for each of the scenarios based on the preliminary template Necessary extensions and changes in the templates were discussed and agreed upon Most of the cases of disagreement were related to the granularity of the event and participant types We agreed on the scriptspecific functional equivalence as a guiding principle For example reading a book listening to music and having a conversation are subsumed under the same event label in the flight scenario because they have the common function of inflight entertainment in the scenario In contrast we assumed different labels for the cake tin and other utensils bowls etc since they have different functions in the baking a cake scenario and accordingly occur with different script events Note that scripts and templates as such are not meant to describe an activity as exhaustively as possible and to mention all steps that are logically necessary Instead scripts describe cognitively prominent events in an activity An example can be found in the flight scenario While more than a third of the turkers mentioned the event of fastening the seat belts in the plane buckleseatbelt no person wrote about undoing their seat belts again although in reality both events appear equally often Consequently we added an event type label for buckling up but no label for undoing the seat belts We used the WebAnno annotation tool BIBREF2 for our project The stories from each scenario were distributed among four different annotators In a calibration phase annotators were presented with some sample texts for test annotations the results were discussed with the authors Throughout the whole annotation phase annotators could discuss any emerging issues with the authors All annotations were done by undergraduate students of computational linguistics The annotation was rather timeconsuming due to the complexity of the task and thus we decided for single annotation mode To assess annotation quality a small sample of texts was annotated by all four annotators and their interannotator agreement was measured see Section InterAnnotator Agreement It was found to be sufficiently high Annotation of the corpus together with some pre and postprocessing of the data required about 500 hours of work All stories were annotated with event and participant types a total of 12188 and 43946 instances respectively On average there were 7 coreference chains per story with an average length of 6 tokens After the first annotation round we extended and changed the templates based on the results As mentioned before we used MissScrEv and MissScrPart labels to mark verbs and noun phrases instantiating events and participants for which no appropriate labels were available in the templates Based on the instances with these labels a 
total of 941 and 1717 instances respectively we extended the guidelines to cover the sufficiently frequent cases In order to include new labels for event and participant types we tried to estimate the number of instances that would fall under a certain label We added new labels according to the following conditions For the participant annotations we added new labels for types that we expected to appear at least 10 times in total in at least 5 different stories ie in approximately 5 of the stories For the event annotations we chose those new labels for event types that would appear in at least 5 different stories In order to avoid too fine a granularity of the templates all other instances of MissScrEv and MissScrPart were relabeled with ScrEvother and ScrPartother We also relabeled participants and events from the first annotation phase with ScrEvother and ScrPartother if they did not meet the frequency requirements The event label airbathroom the event of letting fresh air into the room after the bath for example was only used once in the stories so we relabeled that instance to ScrEvother Additionally we looked at the DeScript corpus BIBREF3 which contains manually clustered event paraphrase sets for the 10 scenarios that are also covered by InScript see Section Comparison to the DeScript Corpus Every such set contains event descriptions that describe a certain event type We extended our templates with additional labels for these events if they were not yet part of the template Nounnoun compounds were annotated twice with the same label whole span plus the head noun as indicated by Example UID31 This redundant double annotation is motivated by potential processing requirements I get my wash cloth textsc scriptsize ScrPartwashingtools textsc scriptsize ScrPartwashingtools and put it under the water A special treatment was given to support verb constructions such as take time get home or take a seat in Example UID32 The semantics of the verb itself is highly underspecified in such constructions the event type is largely dependent on the object NP As shown in Example UID32 we annotate the head verb with the event type described by the whole construction and label its object with SuppVComp support verb complement indicating that it does not have a proper reference I step into the tub and take textsc scriptsize ScrEvsinkwater a seat textsc scriptsize SuppVComp We used the HeadofPartitive label for the heads in partitive constructions assuming that the only referential part of the construction is the complement This is not completely correct since different partitive heads vary in their degree of concreteness cf Examples UID33 and UID33 but we did not see a way to make the distinction sufficiently transparent to the annotators Our seats were at the back textsc scriptsize HeadofPartitive of the train textsc scriptsize ScrParttrain In the library you can always find a couple textsc scriptsize HeadofPartitive of interesting books textsc scriptsize ScrPartbook Group denoting NPs sometimes refer to groups whose members are instances of different participant types In Example UID34 the firstperson plural pronoun refers to the group consisting of the passenger I and a nonparticipant my friend To avoid a proliferation of event type labels we labeled these cases with Unclear I textsc scriptsize ScrPartpassenger wanted to visit my textsc scriptsize ScrPartpassenger friend textsc scriptsize NPart in New York We textsc scriptsize Unclear met at the train station We made an exception for the Getting a 
Haircut scenario where the mixed participant group consisting of the hairdresser and the customer occurs very often as in Example UID34 Here we introduced the additional adhoc participant label ScrParthairdressercustomer While Susan textsc scriptsize ScrParthairdresser is cutting my textsc scriptsize ScrPartcustomer hair we textsc scriptsize ScrParthairdressercustomer usually talk a bit In order to calculate interannotator agreement a total of 30 stories from 6 scenarios were randomly chosen for parallel annotation by all 4 annotators after the first annotation phase We checked the agreement on these data using Fleiss Kappa BIBREF4 The results are shown in Figure 4 and indicate moderate to substantial agreement BIBREF5 Interestingly if we calculated the Kappa only on the subset of cases that were annotated with scriptspecific event and participant labels by all annotators results were better than those of the evaluation on all labeled instances including also unrelated and related nonscript events This indicates one of the challenges of the annotation task In many cases it is difficult to decide whether a particular event should be considered a central script event or an event loosely related or unrelated to the script For coreference chain annotation we calculated the percentage of pairs which were annotated by at least 3 annotators qualified majority vote compared to the set of those pairs annotated by at least one person see Figure 4 We take the result of 905 between annotators to be a good agreement Figure 5 gives an overview of the number of event and participant types provided in the templates Taking a flight and getting a haircut stand out with a large number of both event and participant types which is due to the inherent complexity of the scenarios In contrast planting a tree and going on a train contain the fewest labels There are 19 event and participant types on average Figure 6 presents overview statistics about the usage of event labels participant labels and coreference chain annotations As can be seen there are usually many more mentions of participants than events For coreference chains there are some chains that are really long which also results in a large scenariowise standard deviation Usually these chains describe the protagonist We also found again that the flying in an airplane scenario stands out in terms of participant mentions event mentions and average number of coreference chains Figure 7 shows for every participant label in the baking a cake scenario the number of stories which they occurred in This indicates how relevant a participant is for the script As can be seen a small number of participants are highly prominent cook ingredients and cake are mentioned in every story The fact that the protagonist appears most often consistently holds for all other scenarios where the acting person appears in every story and is mentioned most frequently Figure 8 shows the distribution of participantevent type labels over all appearances over all scenarios on average The groups stand for the most frequently appearing label the top 2 to 5 labels in terms of frequency and the top 6 to 10 ScrEvother and ScrPartother are shown separately As can be seen the most frequently used participant label the protagonist makes up about 40 of overall participant instances The four labels that follow the protagonist in terms of frequency together appear in 37 of the cases More than 2 out of 3 participants in total belong to one of only 5 labels In contrast the distribution for events is 
more balanced 14 of all event instances have the most prominent event type ScrEvother and ScrPartother both appear as labels in at most 5 of all event and participant instantiations The specific event and participant type labels in our templates cover by far most of the instances In Figure 9 we grouped participants similarly into the first the top 25 and top 610 most frequently appearing participant types The figure shows for each of these groups the average frequency per story and in the rightmost column the overall average The results correspond to the findings from the last paragraph As mentioned previously the InScript corpus is part of a larger research project in which also a corpus of a different kind the DeScript corpus was created DeScript covers 40 scenarios and also contains the 10 scenarios from InScript This corpus contains texts that describe scripts on an abstract and generic level while InScript contains instantiations of scripts in narrative texts Script events in DeScript are described in a very simple telegramstyle language see Figure 2 Since one of the longterm goals of the project is to align the InScript texts with the script structure given from DeScript it is interesting to compare both resources The InScript corpus exhibits much more lexical variation than DeScript Many approaches use the typetoken ratio to measure this variance However this measure is known to be sensitive to text length see eg Tweedie1998 which would result in very small values for InScript and relatively large ones for DeScript given the large average difference of text lengths between the corpora Instead we decided to use the Measure of Textual Lexical Diversity MTLD McCarthy2010 McCarthy2005 which is familiar in corpus linguistics This metric measures the average number of tokens in a text that are needed to retain a typetoken ratio above a certain threshold If the MTLD for a text is high many tokens are needed to lower the typetoken ratio under the threshold so the text is lexically diverse In contrast a low MTLD indicates that only a few words are needed to make the typetoken ratio drop so the lexical diversity is smaller We use the threshold of 071 which is proposed by the authors as a wellproven value Figure 10 compares the lexical diversity of both resources As can be seen the InScript corpus with its narrative texts is generally much more diverse than the DeScript corpus with its short event descriptions across all scenarios For both resources the flying in an airplane scenario is most diverse as was also indicated above by the mean word type overlap However the difference in the variation of lexical variance of scenarios is larger for DeScript than for InScript Thus the properties of a scenario apparently influence the lexical variance of the event descriptions more than the variance of the narrative texts We used entropy BIBREF6 over lemmas to measure the variance of lexical realizations for events We excluded events for which there were less than 10 occurrences in DeScript or InScript Since there is only an event annotation for 50 ESDs per scenario in DeScript we randomly sampled 50 texts from InScript for computing the entropy to make the numbers more comparable Figure 11 shows as an example the entropy values for the event types in the going on a train scenario As can be seen in the graph the entropy for InScript is in general higher than for DeScript In the stories a wider variety of verbs is used to describe events There are also large differences between events While wait has a 
really low entropy spendtimetrain has an extremely high entropy value This event type covers many different activities such as reading sleeping etc In this paper we described the InScript corpus of 1000 narrative texts annotated with script structure and coreference information We described the annotation process various difficulties encountered during annotation and different remedies that were taken to overcome these One of the future research goals of our project is also concerned with finding automatic methods for texttoscript mapping ie for the alignment of text segments with script states We consider InScript and DeScript together as a resource for studying this alignment The corpus shows rich lexical variation and will serve as a unique resource for the study of the role of script knowledge in natural language processing This research was funded by the German Research Foundation DFG as part of SFB 1102 Information Density and Linguistic Encoding </s>
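The lexical-diversity and entropy analyses reported above can be reproduced with a short sketch. This follows the usual bidirectional MTLD formulation with the 0.71 threshold mentioned in the text (the authors' exact variant may differ), and lemma_entropy computes the per-event entropy over verb lemmas.

```python
import math
from collections import Counter

def mtld_pass(tokens, threshold=0.71):
    """One directional pass of MTLD: count 'factors', i.e. stretches of text
    whose type-token ratio stays above the threshold."""
    factors, types, length = 0.0, set(), 0
    for tok in tokens:
        length += 1
        types.add(tok)
        if len(types) / length <= threshold:
            factors += 1.0
            types, length = set(), 0
    if length:                                   # partial factor for the tail
        ttr = len(types) / length
        factors += (1.0 - ttr) / (1.0 - threshold)
    return len(tokens) / factors if factors else float("inf")

def mtld(tokens, threshold=0.71):
    # forward and backward passes are averaged in the common formulation
    return (mtld_pass(tokens, threshold)
            + mtld_pass(list(reversed(tokens)), threshold)) / 2.0

def lemma_entropy(lemmas):
    """Entropy over the verb lemmas that realise one event type."""
    counts = Counter(lemmas)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

Higher MTLD values indicate that more tokens are needed before the type-token ratio drops, i.e. greater lexical diversity, matching the comparison between the two corpora.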
<s>[INST] Investigating Robustness and Interpretability of Link Prediction via Adversarial Modifications [/INST] Knowledge graphs KG play a critical role in many realworld applications such as search structured data management recommendations and question answering Since KGs often suffer from incompleteness and noise in their facts links a number of recent techniques have proposed models that embed each entity and relation into a vector space and use these embeddings to predict facts These dense representation models for link prediction include tensor factorization BIBREF0 BIBREF1 BIBREF2 algebraic operations BIBREF3 BIBREF4 BIBREF5 multiple embeddings BIBREF6 BIBREF7 BIBREF8 BIBREF9 and complex neural models BIBREF10 BIBREF11 However there are only a few studies BIBREF12 BIBREF13 that investigate the quality of the different KG models There is a need to go beyond just the accuracy on link prediction and instead focus on whether these representations are robust and stable and what facts they make use of for their predictions In this paper our goal is to design approaches that minimally change the graph structure such that the prediction of a target fact changes the most after the embeddings are relearned which we collectively call Completion Robustness and Interpretability via Adversarial Graph Edits First we consider perturbations that red50blackremove a neighboring link for the target fact thus identifying the most influential related fact providing an explanation for the models prediction As an example consider the excerpt from a KG in Figure 1 with two observed facts and a target predicted fact that Princes Henriette is the parent of Violante Bavaria Our proposed graph perturbation shown in Figure 1 identifies the existing fact that Ferdinal Maria is the father of Violante Bavaria as the one when removed and model retrained will change the prediction of Princes Henriettes child We also study attacks that green50blackadd a new fake fact into the KG to evaluate the robustness and sensitivity of link prediction models to small additions to the graph An example attack for the original graph in Figure 1 is depicted in Figure 1 Such perturbations to the the training data are from a family of adversarial modifications that have been applied to other machine learning tasks known as poisoning BIBREF14 BIBREF15 BIBREF16 BIBREF17 Since the setting is quite different from traditional adversarial attacks search for link prediction adversaries brings up unique challenges To find these minimal changes for a target link we need to identify the fact that when added into or removed from the graph will have the biggest impact on the predicted score of the target fact Unfortunately computing this change in the score is expensive since it involves retraining the model to recompute the embeddings We propose an efficient estimate of this score change by approximating the change in the embeddings using Taylor expansion The other challenge in identifying adversarial modifications for link prediction especially when considering addition of fake facts is the combinatorial search space over possible facts which is intractable to enumerate We introduce an inverter of the original embedding model to decode the embeddings to their corresponding graph components making the search of facts tractable by performing efficient gradientbased continuous optimization We evaluate our proposed methods through following experiments First on relatively small KGs we show that our approximations are accurate compared to the true 
change in the score Second we show that our additive attacks can effectively reduce the performance of state of the art models BIBREF2 BIBREF10 up to 273 and 507 in Hits1 for two large KGs WN18 and YAGO310 We also explore the utility of adversarial modifications in explaining the model predictions by presenting rulelike descriptions of the most influential neighbors Finally we use adversaries to detect errors in the KG obtaining up to 55 accuracy in detecting errors In this section we briefly introduce some notations and existing relational embedding approaches that model knowledge graph completion using dense vectors In KGs facts are represented using triples of subject relation and object langle s r orangle where soin xi the set of entities and rin the set of relations To model the KG a scoring function psi xi times times xi rightarrow is learned to evaluate whether any given fact is true In this work we focus on multiplicative models of link prediction specifically DistMult BIBREF2 because of its simplicity and popularity and ConvE BIBREF10 because of its high accuracy We can represent the scoring function of such methods as psi sro cdot where in d are embeddings of the subject relation and object respectively In DistMult odot where odot is elementwise multiplication operator Similarly in ConvE is computed by a convolution on the concatenation of and soin xi 0 We use the same setup as BIBREF10 for training ie incorporate binary crossentropy loss over the triple scores In particular for subjectrelation pairs sr in the training data G we use binary ysro to represent negative and positive facts Using the models probability of truth as sigma psi sro for langle srorangle the loss is defined as G sro ysrosro 1ysro1 sro Gradient descent is used to learn the embeddings and the parameters of if any
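As an illustration, the following is a minimal PyTorch-style sketch of the multiplicative scorer and the binary cross-entropy objective described above, for DistMult; the class layout, dimensions, and 1-N batching are illustrative choices rather than the authors' code.

```python
import torch
import torch.nn as nn

class DistMult(nn.Module):
    def __init__(self, num_entities, num_relations, dim=200):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)

    def score(self, s, r, o):
        # psi(s, r, o) = (e_s * e_r) . e_o : elementwise product, then dot product
        return ((self.ent(s) * self.rel(r)) * self.ent(o)).sum(dim=-1)

    def score_all_objects(self, s, r):
        # scores of (s, r, o') for every candidate object o'
        return (self.ent(s) * self.rel(r)) @ self.ent.weight.t()

def bce_loss(model, s, r, object_targets):
    """object_targets is a 0/1 matrix marking which objects are true facts
    for each (s, r) pair in the batch."""
    logits = model.score_all_objects(s, r)
    return nn.functional.binary_cross_entropy_with_logits(logits, object_targets)
```

Training then amounts to minimizing this loss with gradient descent over the entity and relation embeddings, as stated above.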
For adversarial modifications on KGs, we first define the space of possible modifications. For a target triple $\langle s, r, o\rangle$, we constrain the possible triples that we can remove or inject to be of the form $\langle s', r', o\rangle$, i.e. $s'$ and $r'$ may differ from the target but the object does not. We analyze other forms of modification, such as $\langle s, r', o'\rangle$ and $\langle s, r', o\rangle$, in the appendices (Modifications of the Form $\langle s, r', o'\rangle$ and Modifications of the Form $\langle s, r', o\rangle$), and leave their empirical evaluation for future work. For explaining a target prediction, we are interested in identifying the observed fact that has the most influence (according to the model) on the prediction. We define the influence of an observed fact on the prediction as the change in the prediction score if the observed fact were not present when the embeddings were learned. Previous work has used this notion of influence for several different tasks BIBREF19, BIBREF20. Formally, for the target triple $\langle s, r, o\rangle$ and observed graph $G$, we want to identify a neighboring triple $\langle s', r', o\rangle \in G$ such that the score $\psi(s,r,o)$ when trained on $G$ and the score $\overline{\psi}(s,r,o)$ when trained on $G - \lbrace \langle s', r', o\rangle \rbrace$ are maximally different, i.e.
$$\operatorname{argmax}_{(s', r') \in \mathrm{Nei}(o)} \Delta_{(s',r')}(s,r,o),$$
where $\Delta_{(s',r')}(s,r,o) = \psi(s,r,o) - \overline{\psi}(s,r,o)$ and $\mathrm{Nei}(o) = \lbrace (s', r') \mid \langle s', r', o\rangle \in G \rbrace$. We are also interested in investigating the robustness of models, i.e. how sensitive the predictions are to small additions to the knowledge graph. Specifically, for a target prediction $\langle s, r, o\rangle$, we want to identify a single fake fact $\langle s', r', o\rangle$ that, when added to the knowledge graph $G$, changes the prediction score $\psi(s,r,o)$ the most. Using $\overline{\psi}(s,r,o)$ for the score after training on $G \cup \lbrace \langle s', r', o\rangle \rbrace$, we define the adversary as
$$\operatorname{argmax}_{(s', r')} \Delta_{(s',r')}(s,r,o),$$
where $\Delta_{(s',r')}(s,r,o) = \psi(s,r,o) - \overline{\psi}(s,r,o)$. The search here is over any possible $s' \in \xi$, which is often in the millions for real-world KGs, and any $r' \in \mathcal{R}$. We also identify adversaries that increase the prediction score of a specific false triple: for a target fake fact $\langle s, r, o\rangle$, the adversary is $\operatorname{argmin}_{(s',r')} \Delta_{(s',r')}(s,r,o)$, with $\Delta$ defined as before. There are a number of crucial challenges in conducting such adversarial attacks on KGs. First, evaluating the effect of changing the KG on the score of the target fact, $\overline{\psi}(s,r,o)$, is expensive, since we need to update the embeddings by retraining the model on the new graph, a time-consuming process that is at least linear in the size of $G$. Second, since there are many candidate facts that can be added to the knowledge graph, identifying the most promising adversary through search-based methods is also expensive; the search space for unobserved facts is $|\xi| \times |\mathcal{R}|$, which, for example, in the YAGO3-10 KG amounts to roughly 4.5M possible facts for a single target prediction. In this section we propose algorithms that address these challenges by (1) approximating the effect of changing the graph on a target prediction and (2) using continuous optimization for the discrete search over potential modifications. We first study the addition of a fact to the graph and then extend the approach to removal. To capture the effect of an adversarial modification on the score of a target triple, we study its effect on the vector representations of the target triple. We use $e_s$, $e_r$, and $e_o$ to denote the embeddings of $s$, $r$, $o$ at the solution of $\operatorname{argmin} \mathcal{L}(G)$, and, when considering the adversarial triple $\langle s', r', o\rangle$, we use $\overline{e}_s$, $\overline{e}_r$, and $\overline{e}_o$ for the new embeddings of $s$, $r$, $o$, i.e. the solution of $\operatorname{argmin} \mathcal{L}(G \cup \lbrace \langle s', r', o\rangle \rbrace)$. We assume that adding a single triple with object $o$ mainly affects the object embedding, i.e. $\overline{e}_s \approx e_s$ and $\overline{e}_r \approx e_r$, and only $e_o$ needs to be re-estimated. Writing the scoring function as $\psi(s,r,o) = z_{s,r} \cdot e_o$, with $z_{s,r} = e_s \odot e_r$ for DistMult (and the output of the convolution for ConvE), and letting $\varphi = \sigma(\psi(s',r',o))$, a first-order Taylor expansion of the stationarity condition $\partial \mathcal{L} / \partial e_o = 0$ around $e_o$ yields the closed-form estimate
$$\overline{e}_o \approx e_o + (1-\varphi)\,\big(H_o + \varphi(1-\varphi)\, z_{s',r'} z_{s',r'}^{\top}\big)^{-1} z_{s',r'},$$
where $H_o$ is the $d \times d$ Hessian of the loss with respect to $e_o$ computed over the original graph $G$. The resulting estimate of the change in the target score is
$$\Delta_{(s',r')}(s,r,o) = \psi(s,r,o) - \overline{\psi}(s,r,o) \approx -\,z_{s,r} \cdot (\overline{e}_o - e_o).$$
Since $H_o$ is only $d \times d$ ($d$ being the embedding dimension), computing the inverse costs $O(d^3)$ and is independent of the size of the graph, so the estimate is cheap to evaluate for many candidate modifications; an analogous estimate can be derived for the removal of an observed triple $\langle s', r', o\rangle$. Using these approximations, we can use brute-force enumeration to find the adversary $\langle s', r', o\rangle$. This approach is feasible when removing an observed triple, since the search space of such modifications is usually small: it is the number of observed facts that share the object with the target. On the other hand, finding the most influential unobserved fact to add requires a search over the much larger space of all possible unobserved facts that share the object. Instead, we identify the most influential unobserved fact $\langle s', r', o\rangle$ with a gradient-based algorithm on the vector $z_{s',r'}$ in the embedding space (recall that $z_{s',r'}$ is the query vector computed from $e_{s'}$ and $e_{r'}$, e.g. $e_{s'} \odot e_{r'}$ for DistMult), solving the following continuous optimization problem in $\mathbb{R}^d$:
$$\operatorname{argmax}_{z_{s',r'}} \; \Delta_{z_{s',r'}}(s,r,o).$$
After identifying the optimal $z_{s',r'}$, we still need to generate the pair $(s', r')$. We design a network, shown in Figure 2, that maps the vector $z_{s',r'}$ back to the entity and relation space, i.e. translates it into $(s', r')$. In particular, we train an auto-encoder whose encoder is fixed: it receives $s'$ and $r'$ as one-hot inputs and computes $z_{s',r'}$ in the same way as the DistMult and ConvE encoders, respectively, using the trained embeddings. The decoder is trained to take $z_{s',r'}$ as input and produce $s'$ and $r'$, essentially inverting the computation of $z_{s',r'}$ from $(s', r')$. We evaluate the performance of our inverter networks (one for each model and dataset) on correctly recovering the subject and relation pairs of the test triples in our benchmarks, given the vector $z_{s',r'}$. The accuracy of recovering the full pair, and each argument separately, is given in Table 1. As shown, our networks achieve very high accuracy, demonstrating their ability to invert vectors $z_{s',r'}$ to $(s', r')$ pairs. We evaluate our approach through the following experiments: Influence Function vs. our approximation (comparing our estimate with the actual effect of the attacks), Robustness of Link Prediction Models (studying the effect of adversarial attacks on evaluation metrics), Interpretability of Models (exploring its application to the interpretability of KG representations), and Finding Errors in Knowledge Graphs (detecting incorrect triples). To evaluate the quality of our approximations and to compare with the influence function (IF), we conduct leave-one-out experiments. In this setup, we take all the neighbors of a random target triple as candidate modifications, remove them one at a time, retrain the model each time, and compute the exact change in the
score of the target triple We can use the magnitude of this change in score to rank the candidate triples and compare this exact ranking with ranking as predicted by influence function with and without Hessian matrix and the original model score with the intuition that facts that the model is most confident of will have the largest impact when removed Similarly we evaluate by considering 200 random triples that share the object entity with the target sample as candidates and rank them as above The average results of Spearmans rho and Kendalls tau rank correlation coefficients over 10 random target samples is provided in Table 3 performs comparably to the influence function confirming that our approximation is accurate Influence function is slightly more accurate because they use the complete Hessian matrix over all the parameters while we only approximate the change by calculating the Hessian over The effect of this difference on scalability is dramatic constraining IF to very small graphs and small embedding dimensionality dle 10 before we run out of memory In Figure 3 we show the time to compute a single adversary by IF compared to as we steadily grow the number of entities randomly chosen subgraphs averaged over 10 random triples As it shows is mostly unaffected by the number of entities while IF increases quadratically Considering that realworld KGs have tens of thousands of times more entities making IF unfeasible for them Now we evaluate the effectiveness of to successfully attack link prediction by adding false facts The goal here is to identify the attacks for triples in the test data and measuring their effect on MRR and Hits metrics ranking evaluations after conducting the attack and retraining the model Since this is the first work on adversarial attacks for link prediction we introduce several baselines to compare against our method For finding the adversarial fact to add for the target triple langle s r o rangle we consider two baselines 1 choosing a random fake fact langle sprime rprime o rangle Random Attack 2 finding sprime rprime by first calculating and then feeding to the decoder of the inverter function Opposite Attack In addition to we introduce two other alternatives of our method 1 that uses to increase the score of fake fact over a test triple ie we find the fake fact the model ranks second after the test triple and identify the adversary for them and 2 that selects between and attacks based on which has a higher estimated change in score AllTest The result of the attack on all test facts as targets is provided in the Table 4 outperforms the baselines demonstrating its ability to effectively attack the KG representations It seems DistMult is more robust against random attacks while ConvE is more robust against designed attacks is more effective than since changing the score of a fake fact is easier than of actual facts there is no existing evidence to support fake facts We also see that YAGO310 models are more robust than those for WN18 Looking at sample attacks provided in Appendix Sample Adversarial Attacks mostly tries to change the type of the target object by associating it with a subject and a relation for a different entity type UncertainTest To better understand the effect of attacks we consider a subset of test triples that 1 the model predicts correctly 2 difference between their scores and the negative sample with the highest score is minimum This UncertainTest subset contains 100 triples from each of the original test sets and we provide results of 
On this Uncertain-Test subset the attacks are much more effective, causing a considerable drop in the metrics. Further, in addition to significantly outperforming the other baselines, these results indicate that ConvE's confidence is much more robust. Relation Breakdown: We perform additional analysis on the YAGO3-10 dataset to gain a deeper understanding of the performance of our model. As shown in Figure 4, both DistMult and ConvE provide a more robust representation for the isAffiliatedTo and isConnectedTo relations, demonstrating the confidence of the models in identifying them. Moreover, the attack affects DistMult more on the playsFor and isMarriedTo relations, while affecting ConvE more on the isConnectedTo relation. Examples: Sample adversarial attacks are provided in Table 5. The attacks mostly try to change the type of the target triple's object by associating it with a subject and a relation that require a different entity type. To be able to understand and interpret why a link is predicted using the opaque, dense embeddings, we need to find out which part of the graph was most influential on the prediction. To provide such explanations, for each prediction we identify the most influential fact using our method. Instead of focusing on individual predictions, we aggregate the explanations over the whole dataset for each relation using a simple rule extraction technique: we find simple patterns on subgraphs that surround the target triple and the removed fact, and that appear in more than 90% of the cases. We only focus on extracting length-2 Horn rules, i.e. $R_1(a,c) \wedge R_2(c,b) \Rightarrow R(a,b)$, where $R(a,b)$ is the target and $R_2(c,b)$ is the removed fact. Table 6 shows extracted YAGO3-10 rules that are common to both models, and ones that are not. The rules show several interesting inferences, such as that hasChild is often inferred via married parents, and isLocatedIn via transitivity. There are several differences in how the models reason as well: DistMult often uses hasCapital as an intermediate step for isLocatedIn, while ConvE incorrectly uses isNeighbor. We also compare against rules extracted by BIBREF2 for YAGO3-10 that utilize the structure of DistMult; they require domain knowledge on types and cannot be applied to ConvE. Interestingly, the rules extracted with our method contain all the rules provided by that approach, demonstrating that our method can be used to accurately interpret models, including ones that are not interpretable, such as ConvE. These are preliminary steps toward interpretability of link prediction models, and we leave more analysis of interpretability to future work.
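A minimal sketch of the rule aggregation just described: given, for each target fact $R(a,b)$, the most influential removed fact $R_2(c,b)$, it counts how often some relation $R_1$ links $a$ to the intermediate entity $c$ and keeps the patterns that hold more than 90% of the time. The data structures and the toy facts are illustrative assumptions.

```python
from collections import defaultdict

def extract_length2_rules(explanations, graph_facts, threshold=0.9):
    """explanations: list of ((a, R, b), (c, R2, b)) pairs, i.e. the target
    fact and the most influential (removed) fact found for it.
    graph_facts: set of (head, relation, tail) triples in the training graph.
    Returns rules R1(a,c) ^ R2(c,b) => R(a,b) with support above threshold."""
    counts = defaultdict(int)   # (R, R2, R1) -> matches
    totals = defaultdict(int)   # (R, R2)     -> occurrences
    for (a, R, b), (c, R2, b2) in explanations:
        if b2 != b:
            continue
        totals[(R, R2)] += 1
        # Look for any relation R1 connecting a to the intermediate entity c.
        for (h, R1, t) in graph_facts:
            if h == a and t == c:
                counts[(R, R2, R1)] += 1
    rules = []
    for (R, R2, R1), n in counts.items():
        if n / totals[(R, R2)] > threshold:
            rules.append((R1, R2, R))   # R1(a,c) ^ R2(c,b) => R(a,b)
    return rules

# Toy usage: "hasChild via married parents" style pattern.
graph = {("anna", "isMarriedTo", "bob"), ("carla", "isMarriedTo", "dan")}
expl = [(("anna", "hasChild", "eva"), ("bob", "hasChild", "eva")),
        (("carla", "hasChild", "finn"), ("dan", "hasChild", "finn"))]
print(extract_length2_rules(expl, graph))
```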
Here we demonstrate another potential use of adversarial modifications: finding erroneous triples in the knowledge graph. Intuitively, if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put the least trust in this triple. In other words, the error triple should have the least influence on the model's prediction of the training data. Formally, to find the incorrect triple $\langle s', r', o\rangle$ in the neighborhood of the training triple $\langle s, r, o\rangle$, we need to find the triple $\langle s', r', o\rangle$ that results in the least change $\Delta_{(s',r')}(s,r,o)$ when removed from the graph. To evaluate this application, we inject random triples into the graph and measure the ability of our method to detect the errors using our optimization. We consider two types of incorrect triples: (1) incorrect triples in the form of $\langle s', r, o\rangle$, where $s'$ is chosen randomly from all of the entities, and (2) incorrect triples in the form of $\langle s', r', o\rangle$, where both $s'$ and $r'$ are chosen randomly. We choose 100 random triples from the observed graph and, for each of them, add an incorrect triple (in each of the two scenarios) to its neighborhood. Then, after retraining DistMult on this noisy training data, we identify error triples through a search over the neighbors of the 100 facts. The results of choosing the neighbor with the least influence on the target are provided in Table 7. When compared with baselines that randomly choose one of the neighbors, or that assume the fact with the lowest score is incorrect, we see that our method outperforms both of these with a considerable gap, obtaining an accuracy of 42% and 55% in detecting errors in the two scenarios. Learning relational knowledge representations has been a focus of active research in the past few years, but to the best of our knowledge, this is the first work on conducting adversarial modifications on the link prediction task. Knowledge graph embedding: There is a rich literature on representing knowledge graphs in vector spaces that differ in their scoring functions BIBREF21 BIBREF22 BIBREF23. Although our method is primarily applicable to multiplicative scoring functions BIBREF0 BIBREF1 BIBREF2 BIBREF24, these ideas apply to additive scoring functions BIBREF18 BIBREF6 BIBREF7 BIBREF25 as well, as we show in the Appendix (First-order Approximation of the Change for TransE). Furthermore, there is a growing body of literature that incorporates extra types of evidence for more informed embeddings, such as numerical values BIBREF26, images BIBREF27, text BIBREF28 BIBREF29 BIBREF30, and their combinations BIBREF31. Using our method, we can gain a deeper understanding of these methods, especially those that build their embeddings with multiplicative scoring functions. Interpretability and Adversarial Modification: There has been significant recent interest in conducting adversarial attacks on different machine learning models BIBREF16 BIBREF32 BIBREF33 BIBREF34 BIBREF35 BIBREF36 to attain interpretability and further evaluate the robustness of those models. BIBREF20 uses influence functions to provide an approach to understanding black-box models by studying the changes in the loss occurring as a result of changes in the training data. In addition to adapting their established method to KGs, we derive a novel approach that differs from their procedure in two ways: (1) instead of changes in the loss, we consider the changes in the scoring function, which is more appropriate for KG representations, and (2) in addition to searching for an attack, we introduce a gradient-based method that is much faster, especially for adding an attack triple, where the size of the search space makes the influence function method infeasible. Previous work has also considered adversaries for KGs, but as part of training to improve their representation of the graph BIBREF37 BIBREF38. Adversarial Attack on KG: Although this is the first work on adversarial attacks for link prediction, there are two approaches BIBREF39 BIBREF17 that consider the task of adversarial attacks on graphs. There are a few fundamental differences from our work: (1) they build their method on top of path-based representations, while we focus on embeddings, (2) they consider node classification as the target of their attacks, while we attack link prediction, and (3) they conduct the attack on small graphs due to restricted scalability, while the complexity of our method does not depend on the size of the graph but only on the neighborhood, allowing us to attack real-world graphs.
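Returning to the error-detection use case above, the following sketch shows the selection rule (pick the neighbouring fact with the smallest estimated influence on the target) together with the random and lowest-score baselines; the influence estimates are assumed to be precomputed, e.g. with the score-change approximation, and all the toy data are made up for illustration.

```python
import random

def detect_errors(neighborhoods, influence, model_score, injected_errors, seed=0):
    """neighborhoods: dict mapping a target fact -> list of neighbouring facts.
    influence:   dict mapping a neighbouring fact -> estimated |score change|
                 on the target if that fact is removed.
    model_score: dict mapping a fact -> model score (for the baseline).
    injected_errors: set of facts that were artificially added.
    Returns detection accuracy of three strategies."""
    rng = random.Random(seed)
    hits = {"least_influence": 0, "lowest_score": 0, "random": 0}
    for target, neighbours in neighborhoods.items():
        guess_inf = min(neighbours, key=lambda f: influence[f])
        guess_score = min(neighbours, key=lambda f: model_score[f])
        guess_rand = rng.choice(neighbours)
        hits["least_influence"] += guess_inf in injected_errors
        hits["lowest_score"] += guess_score in injected_errors
        hits["random"] += guess_rand in injected_errors
    n = len(neighborhoods)
    return {k: v / n for k, v in hits.items()}

# Toy usage with two targets, each with one injected error in its neighbourhood.
neigh = {"t1": ["a", "b", "err1"], "t2": ["c", "err2", "d"]}
influence = {"a": 0.9, "b": 0.7, "err1": 0.05, "c": 0.8, "err2": 0.1, "d": 0.6}
score = {"a": 0.2, "b": 0.9, "err1": 0.5, "c": 0.4, "err2": 0.3, "d": 0.8}
print(detect_errors(neigh, influence, score, {"err1", "err2"}))
```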
Motivated by the need to analyze the robustness and interpretability of link prediction models, we present a novel approach for conducting adversarial modifications to knowledge graphs. We introduce completion robustness and interpretability via adversarial graph edits: identifying the fact to add to, or remove from, the KG that changes the prediction for a target fact. Our approach uses (1) an estimate of the score change for any target triple after adding or removing another fact, and (2) a gradient-based algorithm for identifying the most influential modification. We show that these attacks can effectively reduce the ranking metrics of link prediction models upon applying the attack triples. Further, we use the proposed modifications to study the interpretability of KG representations by summarizing the most influential facts for each relation. Finally, using our approach, we introduce a novel automated error detection method for knowledge graphs. We have released the open-source implementation of our models at https://pouyapez.github.io/criage. We would like to thank Matt Gardner, Marco Tulio Ribeiro, Zhengli Zhao, Robert L. Logan IV, Dheeru Dua and the anonymous reviewers for their detailed feedback and suggestions. This work is supported in part by the Allen Institute for Artificial Intelligence (AI2) and in part by NSF awards IIS-1817183 and IIS-1756023. The views expressed are those of the authors and do not reflect the official policy or position of the funding agencies. We approximate the change in the score of the target triple upon applying attacks other than the $\langle s', r', o\rangle$ ones. Since each relation appears many times in the training triples, we can assume that applying a single attack will not considerably affect the relations' embeddings. As a result, we just need to study attacks of the form $\langle s, r', o\rangle$ and $\langle s, r', o'\rangle$. Defining the scoring function as $\psi(s,r,o) = z_{s,r} \cdot e_o$, we further assume that $\psi(s,r,o) = e_s \cdot z_{r,o}$. Using a similar argument as for the attacks of the form $\langle s', r', o\rangle$, we can calculate the effect of the attack $\overline{\psi}(s,r,o) - \psi(s,r,o)$ as $\overline{\psi}(s,r,o) - \psi(s,r,o) = z_{r,o}\,(\overline{e}_s - e_s)$, where $\overline{e}_s$ denotes the embedding of $s$ after retraining. We now derive an efficient computation for $\overline{e}_s - e_s$. First, the derivative of the loss over $\overline{G} = G \cup \lbrace \langle s, r', o'\rangle \rbrace$ with respect to $e_s$ is $\nabla_{e_s} \overline{\mathcal{L}} = \nabla_{e_s} \mathcal{L} - (1-\varphi)\, z_{r',o'}$, where $z_{r',o'}$ is the embedding of the pair $(r', o')$ and $\varphi = \sigma(\psi(s, r', o'))$. At convergence, after retraining, we expect $\nabla_{e_s} \overline{\mathcal{L}} = 0$. We perform a first-order Taylor approximation of $\nabla_{e_s} \overline{\mathcal{L}}$ to get $0 \approx -(1-\varphi)\, z_{r',o'} + \big(H_s + \varphi(1-\varphi)\, z_{r',o'}^{\intercal} z_{r',o'}\big)(\overline{e}_s - e_s)$, where $H_s$ is the $d \times d$ Hessian matrix for $s$, i.e. the second-order derivative of the loss with respect to $e_s$, computed sparsely. Solving for $\overline{e}_s - e_s$ gives us $\overline{e}_s - e_s = (1-\varphi)\big(H_s + \varphi(1-\varphi)\, z_{r',o'}^{\intercal} z_{r',o'}\big)^{-1} z_{r',o'}^{\intercal}$. In practice, $H_s$ is positive definite, making $H_s + \varphi(1-\varphi)\, z_{r',o'}^{\intercal} z_{r',o'}$ positive definite as well, and invertible. Then we compute the score change as $\overline{\psi}(s,r,o) - \psi(s,r,o) = z_{r,o}\,(\overline{e}_s - e_s) = (1-\varphi)\, z_{r,o}\big(H_s + \varphi(1-\varphi)\, z_{r',o'}^{\intercal} z_{r',o'}\big)^{-1} z_{r',o'}^{\intercal}$. In this section we approximate the effect of an attack of the form $\langle s, r', o\rangle$. In contrast to $\langle s', r', o\rangle$ attacks, for this scenario we need to consider the change in $e_s$ upon applying the attack in the approximation of the change in the score as well. Using the previous results, we can approximate $\overline{e}_o - e_o$ as $(1-\varphi)\big(H_o + \varphi(1-\varphi)\, z_{s,r'}^{\intercal} z_{s,r'}\big)^{-1} z_{s,r'}^{\intercal}$, and similarly we can approximate $\overline{e}_s - e_s$ as $(1-\varphi)\big(H_s + \varphi(1-\varphi)\, z_{r',o}^{\intercal} z_{r',o}\big)^{-1} z_{r',o}^{\intercal}$, where $H_s$ is the Hessian matrix over $e_s$ and now $\varphi = \sigma(\psi(s, r', o))$. Then, using these approximations, we calculate the change in the score as $\overline{\psi}(s,r,o) - \psi(s,r,o) \approx z_{s,r}\,(\overline{e}_o - e_o) + z_{r,o}\,(\overline{e}_s - e_s)$, substituting the two expressions above. Here we derive the approximation of the change in the score upon applying an adversarial modification for TransE BIBREF18.
Using similar assumptions and parameters as before, to calculate the effect of the attack $\overline{\psi}(s,r,o)$ (where $\psi(s,r,o) = -\Vert e_s + e_r - e_o\Vert$), we need to compute $\overline{e}_o$. To do so, we need to derive an efficient computation for $\overline{e}_o - e_o$. First, the derivative of the loss over $\overline{G} = G \cup \lbrace \langle s', r', o\rangle \rbrace$ with respect to $e_o$ is $\nabla_{e_o} \overline{\mathcal{L}} = \nabla_{e_o} \mathcal{L} - (1-\varphi)\, \nabla_{e_o} \psi(s',r',o)$, where $\varphi = \sigma(\psi(s', r', o))$. At convergence, after retraining, we expect $\nabla_{e_o} \overline{\mathcal{L}} = 0$. We perform a first-order Taylor approximation of $\nabla_{e_o} \overline{\mathcal{L}}$ to get $0 \approx -(1-\varphi)\, \nabla_{e_o} \psi(s',r',o) + \big(H_o + H_{s',r',o}\big)(\overline{e}_o - e_o)$, where $H_o$ is the $d \times d$ Hessian matrix for $o$, i.e. the second-order derivative of the loss with respect to $e_o$ (computed sparsely), and $H_{s',r',o}$ is the contribution of the added triple to this Hessian, which for TransE involves terms in $1/\psi(s',r',o)^2$ and $1/\psi(s',r',o)^3$. Solving for $\overline{e}_o - e_o$ gives us $\overline{e}_o - e_o = (1-\varphi)\big(H_o + H_{s',r',o}\big)^{-1} \nabla_{e_o} \psi(s',r',o)$. Then we compute the score change $\overline{\psi}(s,r,o) - \psi(s,r,o)$ by evaluating the TransE score with $\overline{e}_o$ in place of $e_o$. Calculating this expression is efficient, since $H_o$ is a $d \times d$ matrix. In this section we provide the output of our method for some target triples. Sample adversarial attacks are provided in Table 5. As it shows, the attacks mostly try to change the type of the target triple's object by associating it with a subject and a relation that require a different entity type. </s>
<s>[INST] Learning Supervised Topic Models for Classification and Regression from Crowds [/INST] Topic models such as latent Dirichlet allocation LDA allow us to analyze large collections of documents by revealing their underlying themes or topics and how each document exhibits them BIBREF0 Therefore it is not surprising that topic models have become a standard tool in data analysis with many applications that go even beyond their original purpose of modeling textual data such as analyzing images BIBREF1 BIBREF2 videos BIBREF3 survey data BIBREF4 or social networks data BIBREF5 Since documents are frequently associated with other variables such as labels tags or ratings much interest has been placed on supervised topic models BIBREF6 which allow the use of that extra information to guide the topics discovery By jointly learning the topics distributions and a classification or regression model supervised topic models have been shown to outperform the separate use of their unsupervised analogues together with an external regressionclassification algorithm BIBREF2 BIBREF7 Supervised topics models are then stateoftheart approaches for predicting target variables associated with complex highdimensional data such as documents or images Unfortunately the size of modern datasets makes the use of a single annotator unrealistic and unpractical for the majority of the realworld applications that involve some form of human labeling For instance the popular Reuters21578 benchmark corpus was categorized by a group of personnel from Reuters Ltd and Carnegie Group Inc Similarly the LabelMe project asks volunteers to annotate images from a large collection using an online tool Hence it is seldom the case where a single oracle labels an entire collection Furthermore the Web through its social nature also exploits the wisdom of crowds to annotate large collections of documents and images By categorizing texts tagging images or rating products and places Web users are generating large volumes of labeled content However when learning supervised models from crowds the quality of labels can vary significantly due to task subjectivity and differences in annotator reliability or bias BIBREF8 BIBREF9 If we consider a sentiment analysis task it becomes clear that the subjectiveness of the exercise is prone to generate considerably distinct labels from different annotators Similarly online product reviews are known to vary considerably depending on the personal biases and volatility of the reviewers opinions It is therefore essential to account for these issues when learning from this increasingly common type of data Hence the interest of researchers on building models that take the reliabilities of different annotators into consideration and mitigate the effect of their biases has spiked during the last few years eg BIBREF10 BIBREF11 The increasing popularity of crowdsourcing platforms like Amazon Mechanical Turk AMT has further contributed to the recent advances in learning from crowds This kind of platforms offers a fast scalable and inexpensive solution for labeling large amounts of data However their heterogeneous nature in terms of contributors makes their straightforward application prone to many sorts of labeling noise and bias Hence a careless use of crowdsourced data as training data risks generating flawed models In this article we propose a fully generative supervised topic model that is able to account for the different reliabilities of multiple annotators and correct their biases The proposed model is 
then capable of jointly modeling the words in documents as arising from a mixture of topics the latent true target variables as a result of the empirical distribution over topics of the documents and the labels of the multiple annotators as noisy versions of that latent ground truth We propose two different models one for classification BIBREF12 and another for regression problems thus covering a very wide range of possible practical applications as we empirically demonstrate Since the majority of the tasks for which multiple annotators are used generally involve complex data such as text images and video by developing a multiannotator supervised topic model we are contributing with a powerful tool for learning predictive models of complex highdimensional data from crowds Given that the increasing sizes of modern datasets can pose a problem for obtaining human labels as well as for Bayesian inference we propose an efficient stochastic variational inference algorithm BIBREF13 that is able to scale to very large datasets We empirically show using both simulated and real multipleannotator labels obtained from AMT for popular text and image collections that the proposed models are able to outperform other stateoftheart approaches in both classification and regression tasks We further show the computational and predictive advantages of the stochastic variational inference algorithm over its batch counterpart Latent Dirichlet allocation LDA soon proved to be a powerful tool for modeling documents BIBREF0 and images BIBREF1 by extracting their underlying topics where topics are probability distributions across words and each document is characterized by a probability distribution across topics However the need to model the relationship between documents and labels quickly gave rise to many supervised variants of LDA One of the first notable works was that of supervised LDA sLDA BIBREF6 By extending LDA through the inclusion of a response variable that is linearly dependent on the mean topicassignments of the words in a document sLDA is able to jointly model the documents and their responses in order to find latent topics that will best predict the response variables for future unlabeled documents Although initially developed for general continuous response variables sLDA was later extended to classification problems BIBREF2 by modeling the relationship between topicassignments and labels with a softmax function as in logistic regression From a classification perspective there are several ways in which document classes can be included in LDA The most natural one in this setting is probably the sLDA approach since the classes are directly dependent on the empirical topic mixture distributions This approach is coherent with the generative perspective of LDA but nevertheless several discriminative alternatives also exist For example DiscLDA BIBREF14 introduces a classdependent linear transformation on the topic mixture proportions of each document such that the perword topic assignments are drawn from linearly transformed mixture proportions The classspecific transformation matrices are then able to reposition the topic mixture proportions so that documents with the same class labels have similar topics mixture proportions The transformation matrices can be estimated by maximizing the conditional likelihood of response variables as the authors propose BIBREF14 An alternative way of including classes in LDA for supervision is the one proposed in the LabeledLDA model BIBREF15 LabeledLDA is a variant of 
LDA that incorporates supervision by constraining the topic model to assign to a document only topics that correspond to its label set. While this allows for multiple labels per document, it is restrictive in the sense that the number of topics needs to be the same as the number of possible labels. From a regression perspective, other than sLDA, the most relevant approaches are the Dirichlet-multinomial regression BIBREF16 and the inverse regression topic models BIBREF17. The Dirichlet-multinomial regression (DMR) topic model BIBREF16 includes a log-linear prior on the document's mixture proportions that is a function of a set of arbitrary features, such as author, date, publication venue or references in scientific articles. The inferred Dirichlet-multinomial distribution can then be used to make predictions about the values of these features. The inverse regression topic model (IRTM) BIBREF17 is a mixed-membership extension of the multinomial inverse regression (MNIR) model proposed in BIBREF18 that exploits the topical structure of text corpora to improve its predictions and facilitate exploratory data analysis. However, this results in a rather complex and inefficient inference procedure. Furthermore, making predictions in the IRTM is not trivial. For example, MAP estimates of targets will be in a different scale than the original document's metadata. Hence, the authors propose the use of a linear model to regress metadata values onto their MAP predictions. The approaches discussed so far rely on likelihood-based estimation procedures. The work in BIBREF7 contrasts with these approaches by proposing MedLDA, a supervised topic model that utilizes the max-margin principle for estimation. Despite its margin-based advantages, MedLDA loses the probabilistic interpretation of the document classes given the topic mixture distributions. On the contrary, in this article we propose a fully generative probabilistic model of the answers of multiple annotators and of the words of documents arising from a mixture of topics. Learning from multiple annotators is an increasingly important research topic. Since the early work of Dawid and Skene BIBREF19, who attempted to obtain point estimates of the error rates of patients given repeated but conflicting responses to various medical questions, many approaches have been proposed. These usually rely on latent variable models. For example, in BIBREF20 the authors propose a model to estimate the ground truth from the labels of multiple experts, which is then used to train a classifier. While earlier works usually focused on estimating the ground truth and the error rates of different annotators, recent works are more focused on the problem of learning classifiers using multiple-annotator data. This idea was explored by Raykar et al. BIBREF21, who proposed an approach for jointly learning the levels of expertise of different annotators and the parameters of a logistic regression classifier, by modeling the ground truth labels as latent variables. This work was later extended in BIBREF11 by considering the dependencies of the annotators' labels on the instances they are labeling, and also in BIBREF22 through the use of Gaussian process classifiers. The model proposed in this article for classification problems shares the same intuition with this line of work and models the true labels as latent variables. However, it differs significantly by using a fully Bayesian approach for estimating the reliabilities and biases of the different annotators. Furthermore, it considers the problems of learning a low-dimensional
representation of the input data through topic modeling and modeling the answers of multiple annotators jointly providing an efficient stochastic variational inference algorithm Despite the considerable amount of approaches for learning classifiers from the noisy answers of multiple annotators for continuous response variables this problem has been approached in a much smaller extent For example Groot et al BIBREF23 address this problem in the context of Gaussian processes In their work the authors assign a different variance to the likelihood of the data points provided by the different annotators thereby allowing them to have different noise levels which can be estimated by maximizing the marginal likelihood of the data Similarly the authors in BIBREF21 propose an extension of their own classification approach to regression problems by assigning different variances to the Gaussian noise models of the different annotators In this article we take this idea one step further by also considering a perannotator bias parameter which gives the proposed model the ability to overcome certain personal tendencies in the annotators labeling styles that are quite common for example in product ratings and document reviews Furthermore we empirically validate the proposed model using real multiannotator data obtained from Amazon Mechanical Turk This contrasts with the previously mentioned works which rely only on simulated annotators In this section we develop a multiannotator supervised topic model for classification problems The model for regression settings will be presented in Section SECREF5 We start by deriving a batch variational inference algorithm for approximating the posterior distribution over the latent variables and an algorithm to estimate the model parameters We then develop a stochastic variational inference algorithm that gives the model the capability of handling large collections of documents Finally we show how to use the learned model to classify new documents Let INLINEFORM0 be an annotated corpus of size INLINEFORM1 where each document INLINEFORM2 is given a set of labels INLINEFORM3 from INLINEFORM4 distinct annotators We can take advantage of the inherent topical structure of documents and model their words as arising from a mixture of topics each being defined as a distribution over the words in a vocabulary as in LDA In LDA the INLINEFORM5 word INLINEFORM6 in a document INLINEFORM7 is provided a discrete topicassignment INLINEFORM8 which is drawn from the documents distribution over topics INLINEFORM9 This allows us to build lowerdimensional representations of documents which we can explore to build classification models by assigning coefficients INLINEFORM10 to the mean topicassignment of the words in the document INLINEFORM11 and applying a softmax function in order to obtain a distribution over classes Alternatively one could consider more flexible models such as Gaussian processes however that would considerably increase the complexity of inference Unfortunately a direct mapping between document classes and the labels provided by the different annotators in a multipleannotator setting would correspond to assuming that they are all equally reliable an assumption that is violated in practice as previous works clearly demonstrate eg BIBREF8 BIBREF9 Hence we assume the existence of a latent ground truth class and model the labels from the different annotators using a noise model that states that given a true class INLINEFORM0 each annotator INLINEFORM1 provides the label 
INLINEFORM2 with some probability INLINEFORM3 Hence by modeling the matrix INLINEFORM4 we are in fact modeling a perannotator normalized confusion matrix which allows us to account for their different levels of expertise and correct their potential biases The generative process of the proposed model for classification problems can then be summarized as follows For each annotator INLINEFORM0 For each class INLINEFORM0 Draw reliability parameter INLINEFORM0 For each topic INLINEFORM0 Draw topic distribution INLINEFORM0 For each document INLINEFORM0 Draw topic proportions INLINEFORM0 For the INLINEFORM0 word Draw topic assignment INLINEFORM0 Draw word INLINEFORM0 Draw latent true class INLINEFORM0 For each annotator INLINEFORM0 Draw annotators label INLINEFORM0 where INLINEFORM0 denotes the set of annotators that labeled the INLINEFORM1 document INLINEFORM2 and the softmax is given by DISPLAYFORM0 Fig FIGREF20 shows a graphical model representation of the proposed model where INLINEFORM0 denotes the number of topics INLINEFORM1 is the number of classes INLINEFORM2 is the total number of annotators and INLINEFORM3 is the number of words in the document INLINEFORM4 Shaded nodes are used to distinguish latent variable from the observed ones and small solid circles are used to denote model parameters Notice that we included a Dirichlet prior over the topics INLINEFORM5 to produce a smooth posterior and control sparsity Similarly instead of computing maximum likelihood or MAP estimates for the annotators reliability parameters INLINEFORM6 we place a Dirichlet prior over these variables and perform approximate Bayesian inference This contrasts with previous works on learning classification models from crowds BIBREF21 BIBREF24 For developing a multiannotator supervised topic model for regression we shall follow a similar intuition as the one we considered for classification Namely we shall assume that for a given document INLINEFORM0 each annotator provides a noisy version INLINEFORM1 of the true continuous target variable which we denote by INLINEFORM2 This can be for example the true rating of a product or the true sentiment of a document Assuming that each annotator INLINEFORM3 has its own personal bias INLINEFORM4 and precision INLINEFORM5 inverse variance and assuming a Gaussian noise model for the annotators answers we have that DISPLAYFORM0 This approach is therefore more powerful than previous works BIBREF21 BIBREF23 where a single precision parameter was used to model the annotators expertise Fig FIGREF45 illustrates this intuition for 4 annotators represented by different colors The green annotator is the best one since he is right on the target and his answers vary very little low bias high precision The yellow annotator has a low bias but his answers are very uncertain as they can vary a lot Contrarily the blue annotator is very precise but consistently overestimates the true target high bias high precision Finally the red annotator corresponds to the worst kind of annotator with high bias and low precision Having specified a model for annotators answers given the true targets the only thing left is to do is to specify a model of the latent true targets INLINEFORM0 given the empirical topic mixture distributions INLINEFORM1 For this we shall keep things simple and assume a linear model as in sLDA BIBREF6 The generative process of the proposed model for continuous target variables can then be summarized as follows For each annotator INLINEFORM0 For each class INLINEFORM0 Draw reliability 
parameter INLINEFORM0 For each topic INLINEFORM0 Draw topic distribution INLINEFORM0 For each document INLINEFORM0 Draw topic proportions INLINEFORM0 For the INLINEFORM0 word Draw topic assignment INLINEFORM0 Draw word INLINEFORM0 Draw latent true target INLINEFORM0 For each annotator INLINEFORM0 Draw answer INLINEFORM0 Fig FIGREF60 shows a graphical representation of the proposed model Given a dataset INLINEFORM0 the goal of inference is to compute the posterior distribution of the perdocument topic proportions INLINEFORM1 the perword topic assignments INLINEFORM2 the pertopic distribution over words INLINEFORM3 the perdocument latent true class INLINEFORM4 and the perannotator confusion parameters INLINEFORM5 As with LDA computing the exact posterior distribution of the latent variables is computationally intractable Hence we employ meanfield variational inference to perform approximate Bayesian inference Variational inference methods seek to minimize the KL divergence between the variational and the true posterior distribution We assume a fullyfactorized meanfield variational distribution of the form DISPLAYFORM0 where INLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM3 and INLINEFORM4 are variational parameters Table TABREF23 shows the correspondence between variational parameters and the original parameters Let INLINEFORM0 denote the model parameters Following BIBREF25 the KL minimization can be equivalently formulated as maximizing the following lower bound on the log marginal likelihood DISPLAYFORM0 which we maximize using coordinate ascent Optimizing INLINEFORM0 wrt INLINEFORM1 and INLINEFORM2 gives the same coordinate ascent updates as in LDA BIBREF0 DISPLAYFORM0 The variational Dirichlet parameters INLINEFORM0 can be optimized by collecting only the terms in INLINEFORM1 that contain INLINEFORM2 DISPLAYFORM0 where INLINEFORM0 denotes the documents labeled by the INLINEFORM1 annotator INLINEFORM2 and INLINEFORM3 and INLINEFORM4 are the gamma and digamma functions respectively Taking derivatives of INLINEFORM5 wrt INLINEFORM6 and setting them to zero yields the following update DISPLAYFORM0 Similarly the coordinate ascent updates for the documents distribution over classes INLINEFORM0 can be found by considering the terms in INLINEFORM1 that contain INLINEFORM2 DISPLAYFORM0 where INLINEFORM0 Adding the necessary Lagrange multipliers to ensure that INLINEFORM1 and setting the derivatives wrt INLINEFORM2 to zero gives the following update DISPLAYFORM0 Observe how the variational distribution over the true classes results from a combination between the dot product of the inferred mean topic assignment INLINEFORM0 with the coefficients INLINEFORM1 and the labels INLINEFORM2 from the multiple annotators weighted by their expected log probability INLINEFORM3 The main difficulty of applying standard variational inference methods to the proposed model is the nonconjugacy between the distribution of the mean topicassignment INLINEFORM0 and the softmax Namely in the expectation DISPLAYFORM0 the second term is intractable to compute We can make progress by applying Jensens inequality to bound it as follows DISPLAYFORM0 where INLINEFORM0 which is constant wrt INLINEFORM1 This local variational bound can be made tight by noticing that INLINEFORM2 where equality holds if and only if INLINEFORM3 Hence given the current parameter estimates INLINEFORM4 if we set INLINEFORM5 and INLINEFORM6 then for an individual parameter INLINEFORM7 we have that DISPLAYFORM0 Using this local bound to approximate the 
expectation of the logsumexp term and taking derivatives of the evidence lower bound wrt INLINEFORM0 with the constraint that INLINEFORM1 yields the following fixpoint update DISPLAYFORM0 where INLINEFORM0 denotes the size of the vocabulary Notice how the perword variational distribution over topics INLINEFORM1 depends on the variational distribution over the true class label INLINEFORM2 The variational inference algorithm iterates between Eqs EQREF25 EQREF33 until the evidence lower bound Eq EQREF24 converges Additional details are provided as supplementary material The goal of inference is to compute the posterior distribution of the perdocument topic proportions INLINEFORM0 the perword topic assignments INLINEFORM1 the pertopic distribution over words INLINEFORM2 and the perdocument latent true targets INLINEFORM3 As we did for the classification model we shall develop a variational inference algorithm using coordinate ascent The lowerbound on the log marginal likelihood is now given by DISPLAYFORM0 where INLINEFORM0 are the model parameters We assume a fullyfactorized meanfield variational distribution INLINEFORM1 of the form DISPLAYFORM0 where INLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM3 and INLINEFORM4 are the variational parameters Notice the new Gaussian term INLINEFORM5 corresponding to the approximate posterior distribution of the unobserved true targets Optimizing the variational objective INLINEFORM0 wrt INLINEFORM1 and INLINEFORM2 yields the same updates from Eqs EQREF25 and Optimizing wrt INLINEFORM3 gives a similar update to the one in sLDA BIBREF6 DISPLAYFORM0 where we defined INLINEFORM0 Notice how this update differs only from the one in BIBREF6 by replacing the true target variable by its expected value under the variational distribution which is given by INLINEFORM1 The only variables left for doing inference on are then the latent true targets INLINEFORM0 The variational distribution of INLINEFORM1 is governed by two parameters a mean INLINEFORM2 and a variance INLINEFORM3 Collecting all the terms in INLINEFORM4 that contain INLINEFORM5 gives DISPLAYFORM0 Taking derivatives of INLINEFORM0 and setting them to zero gives the following update for INLINEFORM1 DISPLAYFORM0 Notice how the value of INLINEFORM0 is a weighted average of what the linear regression model on the empirical topic mixture believes the true target should be and the biascorrected answers of the different annotators weighted by their individual precisions As for INLINEFORM0 we can optimize INLINEFORM1 wrt INLINEFORM2 by collecting all terms that contain INLINEFORM3 DISPLAYFORM0 and taking derivatives yielding the update DISPLAYFORM0 The model parameters are INLINEFORM0 The parameters INLINEFORM1 of the Dirichlet priors can be regarded as hyperparameters of the proposed model As with many works on topic models eg BIBREF26 BIBREF2 we assume hyperparameters to be fixed since they can be effectively selected by gridsearch procedures which are able to explore well the parameter space without suffering from local optima Our focus is then on estimating the coefficients INLINEFORM2 using a variational EM algorithm Therefore in the Estep we use the variational inference algorithm from section SECREF21 to estimate the posterior distribution of the latent variables and in the Mstep we find maximum likelihood estimates of INLINEFORM3 by maximizing the evidence lower bound INLINEFORM4 Unfortunately taking derivatives of INLINEFORM5 wrt INLINEFORM6 does not yield a closedform solution Hence we use a numerical 
method namely LBFGS BIBREF27 to find an optimum The objective function and gradients are given by DISPLAYFORM0 where for convenience we defined the following variable INLINEFORM0 The parameters of the proposed regression model are INLINEFORM0 As we did for the classification model we shall assume the Dirichlet parameters INLINEFORM1 and INLINEFORM2 to be fixed Similarly we shall assume that the variance of the true targets INLINEFORM3 to be constant The only parameters left to estimate are then the regression coefficients INLINEFORM4 and the annotators biases INLINEFORM5 and precisions INLINEFORM6 which we estimate using variational Bayesian EM Since the latent true targets are now linear functions of the documents empirical topic mixtures ie there is no softmax function we can find a closed form solution for the regression coefficients INLINEFORM0 Taking derivatives of INLINEFORM1 wrt INLINEFORM2 and setting them to zero gives the following solution for INLINEFORM3 DISPLAYFORM0 where DISPLAYFORM0 We can find maximum likelihood estimates for the annotator biases INLINEFORM0 by optimizing the lower bound on the marginal likelihood The terms in INLINEFORM1 that involve INLINEFORM2 are DISPLAYFORM0 Taking derivatives wrt INLINEFORM0 gives the following estimate for the bias of the INLINEFORM1 annotator DISPLAYFORM0 Similarly we can find maximum likelihood estimates for the precisions INLINEFORM0 of the different annotators by considering the terms in INLINEFORM1 that contain INLINEFORM2 DISPLAYFORM0 The maximum likelihood estimate for the precision inverse variance of the INLINEFORM0 annotator is then given by DISPLAYFORM0 Given a set of fitted parameters it is then straightforward to make predictions for new documents it is just necessary to infer the approximate posterior distribution over the wordtopic assignments INLINEFORM0 for all the words using the coordinates ascent updates of standard LDA Eqs EQREF25 and EQREF42 and then use the mean topic assignments INLINEFORM1 to make predictions INLINEFORM2 In Section SECREF21 we proposed a batch coordinate ascent algorithm for doing variational inference in the proposed model This algorithm iterates between analyzing every document in the corpus to infer the local hidden structure and estimating the global hidden variables However this can be inefficient for large datasets since it requires a full pass through the data at each iteration before updating the global variables In this section we develop a stochastic variational inference algorithm BIBREF13 which follows noisy estimates of the gradients of the evidence lower bound INLINEFORM0 Based on the theory of stochastic optimization BIBREF28 we can find unbiased estimates of the gradients by subsampling a document or a minibatch of documents from the corpus and using it to compute the gradients as if that document was observed INLINEFORM0 times Hence given an uniformly sampled document INLINEFORM1 we use the current posterior distributions of the global latent variables INLINEFORM2 and INLINEFORM3 and the current coefficient estimates INLINEFORM4 to compute the posterior distribution over the local hidden variables INLINEFORM5 INLINEFORM6 and INLINEFORM7 using Eqs EQREF25 EQREF33 and EQREF29 respectively These posteriors are then used to update the global variational parameters INLINEFORM8 and INLINEFORM9 by taking a step of size INLINEFORM10 in the direction of the noisy estimates of the natural gradients Algorithm SECREF37 describes a stochastic variational inference algorithm for the 
proposed model Given an appropriate schedule for the learning rates INLINEFORM0 such that INLINEFORM1 and INLINEFORM2 the stochastic optimization algorithm is guaranteed to converge to a local maximum of the evidence lower bound BIBREF28 t Stochastic variational inference for the proposed classification model 1 Initialize INLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM3 INLINEFORM4 INLINEFORM5 Set t t 1 Sample a document INLINEFORM6 uniformly from the corpus Compute INLINEFORM7 using Eq EQREF33 for INLINEFORM8 Compute INLINEFORM9 using Eq EQREF25 Compute INLINEFORM10 using Eq EQREF29 local parameters INLINEFORM11 INLINEFORM12 and INLINEFORM13 converge Compute stepsize INLINEFORM14 Update topics variational parameters DISPLAYFORM0 Update annotators confusion parameters DISPLAYFORM0 global convergence criterion is met As we did for the classification model from Section SECREF4 we can envision developing a stochastic variational inference for the proposed regression model In this case the only global latent variables are the pertopic distributions over words INLINEFORM0 As for the local latent variables instead of a single variable INLINEFORM1 we now have two variables perdocument INLINEFORM2 and INLINEFORM3 The stochastic variational inference can then be summarized as shown in Algorithm SECREF76 For added efficiency one can also perform stochastic updates of the annotators biases INLINEFORM4 and precisions INLINEFORM5 by taking a step in the direction of the gradient of the noisy evidence lower bound scaled by the stepsize INLINEFORM6 t Stochastic variational inference for the proposed regression model 1 Initialize INLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM3 INLINEFORM4 INLINEFORM5 INLINEFORM6 Set t t 1 Sample a document INLINEFORM7 uniformly from the corpus Compute INLINEFORM8 using Eq EQREF64 for INLINEFORM9 Compute INLINEFORM10 using Eq EQREF25 Compute INLINEFORM11 using Eq EQREF66 Compute INLINEFORM12 using Eq EQREF68 local parameters INLINEFORM13 INLINEFORM14 and INLINEFORM15 converge Compute stepsize INLINEFORM16 Update topics variational parameters DISPLAYFORM0 global convergence criterion is met In order to make predictions for a new unlabeled document INLINEFORM0 we start by computing the approximate posterior distribution over the latent variables INLINEFORM1 and INLINEFORM2 This can be achieved by dropping the terms that involve INLINEFORM3 INLINEFORM4 and INLINEFORM5 from the models joint distribution since at prediction time the multiannotator labels are no longer observed and averaging over the estimated topics distributions Letting the topics distribution over words inferred during training be INLINEFORM6 the joint distribution for a single document is now simply given by DISPLAYFORM0 Deriving a meanfield variational inference algorithm for computing the posterior over INLINEFORM0 results in the same fixedpoint updates as in LDA BIBREF0 for INLINEFORM1 Eq EQREF25 and INLINEFORM2 DISPLAYFORM0 Using the inferred posteriors and the coefficients INLINEFORM0 estimated during training we can make predictions as follows DISPLAYFORM0 This is equivalent to making predictions in the classification version of sLDA BIBREF2 In this section we develop a variant of the model proposed in Section SECREF4 for regression problems We shall start by describing the proposed model with a special focus on the how to handle multiple annotators with different biases and reliabilities when the target variables are continuous variables Next we present a variational inference algorithm highlighting 
the differences to the classification version. Finally, we show how to optimize the model parameters. In this section, the proposed multi-annotator supervised LDA models for classification and regression (MA-sLDAc and MA-sLDAr, respectively) are validated using both simulated annotators on popular corpora and real multiple-annotator labels obtained from Amazon Mechanical Turk. Namely, we shall consider the following real-world problems: classifying posts and news stories; classifying images according to their content; predicting the number of stars that a given user gave to a restaurant based on the review; and predicting movie ratings using the text of the reviews. In order to first validate the proposed model for classification problems in a slightly more controlled environment, the well-known 20-Newsgroups benchmark corpus BIBREF29 was used by simulating multiple annotators with different levels of expertise. The 20-Newsgroups consists of twenty thousand messages taken from twenty newsgroups, and is divided into six super-classes, which are in turn partitioned into several sub-classes. For this first set of experiments, only the four most populated super-classes were used: computers, science, politics and recreative. The preprocessing of the documents consisted of stemming and stop-words removal. After that, 75% of the documents were randomly selected for training and the remaining 25% for testing. The different annotators were simulated by sampling their answers from a multinomial distribution, where the parameters are given by the lines of the annotators' confusion matrices. Hence, for each annotator INLINEFORM0, we start by predefining a confusion matrix INLINEFORM1 with elements INLINEFORM2, which correspond to the probability that the annotator's answer is INLINEFORM3 given that the true label is INLINEFORM4, INLINEFORM5. Then, the answers are sampled i.i.d. from INLINEFORM6. This procedure was used to simulate 5 different annotators with the following accuracies: 0.737, 0.468, 0.284, 0.278 and 0.260. In this experiment, no repeated labelling was used. Hence, each annotator only labels roughly one-fifth of the data. When compared to the ground truth, the simulated answers revealed an accuracy of 0.405. See Table TABREF81 for an overview of the details of the classification datasets used. Both the batch and the stochastic variational inference (svi) versions of the proposed model (MA-sLDAc) are compared with the following baselines: LDA + LogReg (mv): This baseline corresponds to applying unsupervised LDA to the data and learning a logistic regression classifier on the inferred topics distributions of the documents. The labels from the different annotators were aggregated using majority voting (mv). Notice that when there is a single annotator label per instance, majority voting is equivalent to using that label for training. This is the case of the 20-Newsgroups simulated annotators, but the same does not apply for the experiments in Section UID89. LDA + Raykar: For this baseline, the model of BIBREF21 was applied using the documents' topic distributions inferred by LDA as features. LDA + Rodrigues: This baseline is similar to the previous one, but uses the model of BIBREF9 instead. Blei 2003 (mv): The idea of this baseline is to replicate a popular state-of-the-art approach for document classification. Hence, the approach of BIBREF0 was used. It consists of applying LDA to extract the documents' topics distributions, which are then used to train an SVM. Similarly to the previous approach, the labels from the different annotators were aggregated using majority voting (mv). sLDA (mv): This corresponds to using the classification version of sLDA BIBREF2 with the labels obtained by performing majority voting (mv) on the annotators' answers.
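Returning to the annotator simulation protocol described above (sampling each answer from the row of a predefined confusion matrix indexed by the true class), a minimal sketch of that procedure is shown below; the confusion matrix, class count and document count are toy values, not the ones used in the experiments.

```python
import numpy as np

def simulate_annotator(true_labels, confusion, rng):
    """Sample one answer per document: row c of `confusion` gives the
    probability of each reported label when the true class is c."""
    return np.array([rng.choice(len(confusion), p=confusion[c])
                     for c in true_labels])

rng = np.random.default_rng(42)
num_classes = 4
true_labels = rng.integers(0, num_classes, size=1000)

# Toy confusion matrix for a mediocre annotator: 60% correct, rest uniform.
pi = np.full((num_classes, num_classes), 0.4 / (num_classes - 1))
np.fill_diagonal(pi, 0.6)

answers = simulate_annotator(true_labels, pi, rng)
print("simulated accuracy:", np.mean(answers == true_labels))
```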
For all the experiments, the hyperparameters INLINEFORM0, INLINEFORM1 and INLINEFORM2 were set using a simple grid search in the collection INLINEFORM3. The same approach was used to optimize the hyperparameters of all the baselines. For the svi algorithm, different mini-batch sizes and forgetting rates INLINEFORM4 were tested. For the 20-Newsgroups dataset, the best results were obtained with a mini-batch size of 500 and INLINEFORM5. The INLINEFORM6 was kept at 1. The results are shown in Fig. FIGREF87 for different numbers of topics, where we can see that the proposed model outperforms all the baselines, with the svi version being the one that performs best. In order to assess the computational advantages of the stochastic variational inference (svi) over the batch algorithm, the log marginal likelihood (or log evidence) was plotted against the number of iterations. Fig. FIGREF88 shows this comparison. Not surprisingly, the svi version converges much faster to higher values of the log marginal likelihood when compared to the batch version, which reflects the efficiency of the svi algorithm. In order to validate the proposed classification model in real crowdsourcing settings, Amazon Mechanical Turk (AMT) was used to obtain labels from multiple annotators for two popular datasets: Reuters-21578 BIBREF30 and LabelMe BIBREF31. The Reuters-21578 is a collection of manually categorized newswire stories with labels such as Acquisitions, Crude-oil, Earnings or Grain. For this experiment, only the documents belonging to the ModApte split were considered, with the additional constraint that the documents should have no more than one label. This resulted in a total of 7016 documents distributed among 8 classes. Of these, 1800 documents were submitted to AMT for multiple annotators to label, giving an average of approximately 3 answers per document (see Table TABREF81 for further details). The remaining 5216 documents were used for testing. The collected answers yield an average worker accuracy of 56.8%. Applying majority voting to these answers reveals a ground truth accuracy of 71.0%. Fig. FIGREF90 shows the boxplots of the number of answers per worker and their accuracies. Observe how applying majority voting yields a higher accuracy than the median accuracy of the workers. The results obtained by the different approaches are given in Fig. FIGREF91, where it can be seen that the proposed model (MA-sLDAc) outperforms all the other approaches. For this dataset, the svi algorithm uses mini-batches of 300 documents. The proposed model was also validated using a dataset from the computer vision domain: LabelMe BIBREF31. In contrast to the Reuters and Newsgroups corpora, LabelMe is an open online tool to annotate images. Hence, this experiment allows us to see how the proposed model generalizes beyond non-textual data. Using the Matlab interface provided in the project's website, we extracted a subset of the LabelMe data consisting of all the 256 x 256 images with the categories: highway, inside city, tall building, street, forest, coast, mountain or open country. This allowed us to collect a total of 2688 labeled images. Of these, 1000 images were given to AMT workers to classify with one of the classes above. Each image was labeled by an average of 2.547 workers, with a mean accuracy of 69.2%. When majority voting is applied to the collected answers, a ground truth accuracy of 76.9% is obtained. Fig. FIGREF92 shows the boxplots of the number of answers per worker and their accuracies.
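The majority-voting reference used in both of these experiments can be reproduced in a few lines; the sketch below assumes the crowd answers are stored as a mapping from document id to the list of labels collected for it, which is a convenience format chosen for illustration.

```python
from collections import Counter

def majority_vote(answers_per_doc):
    """answers_per_doc: dict doc_id -> list of labels from different workers.
    Ties are broken by taking the first most common label."""
    return {doc: Counter(labels).most_common(1)[0][0]
            for doc, labels in answers_per_doc.items()}

def accuracy(predictions, gold):
    return sum(predictions[d] == gold[d] for d in gold) / len(gold)

# Toy usage: three documents, conflicting crowd answers.
answers = {1: ["grain", "grain", "crude-oil"],
           2: ["earnings", "acquisitions", "earnings"],
           3: ["crude-oil"]}
gold = {1: "grain", 2: "earnings", 3: "grain"}
mv = majority_vote(answers)
print(mv, accuracy(mv, gold))
```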
Interestingly, the worker accuracies are much higher and their distribution is much more concentrated than on the Reuters-21578 data (see Fig. FIGREF90), which suggests that this is an easier task for the AMT workers. The preprocessing of the images used is similar to the approach in BIBREF1. It uses 128-dimensional SIFT BIBREF32 region descriptors selected by a sliding grid spaced at one pixel. This sliding grid extracts local regions of the image with sizes uniformly sampled between 16 x 16 and 32 x 32 pixels. The 128-dimensional SIFT descriptors produced by the sliding window are then fed to a k-means algorithm (with k = 200) in order to construct a vocabulary of 200 visual words. This allows us to represent the images with a bag-of-visual-words model. With the purpose of comparing the proposed model with a popular state-of-the-art approach for image classification, for the LabelMe dataset the following baseline was introduced: Bosch 2006 (mv): This baseline is similar to the one in BIBREF33. The authors propose the use of pLSA to extract the latent topics, and the use of a k-nearest neighbors (kNN) classifier using the documents' topics distributions. For this baseline, unsupervised LDA is used instead of pLSA, and the labels from the different annotators for kNN (with INLINEFORM0) are aggregated using majority voting (mv). The results obtained by the different approaches for the LabelMe data are shown in Fig. FIGREF94, where the svi version uses mini-batches of 200 documents. Analyzing the results for the Reuters-21578 and LabelMe data, we can observe that MA-sLDAc outperforms all the baselines, with slightly better accuracies for the batch version, especially in the Reuters data. Interestingly, the second best results are consistently obtained by the multi-annotator approaches, which highlights the need for accounting for the noise and biases of the answers of the different annotators. In order to verify that the proposed model was estimating the normalized confusion matrices INLINEFORM0 of the different workers correctly, a random sample of them was plotted against the true confusion matrices, i.e. the normalized confusion matrices evaluated against the true labels. Figure FIGREF95 shows the results obtained with 60 topics on the Reuters-21578 dataset, where the color intensity of the cells increases with the magnitude of the value of INLINEFORM1 (the supplementary material provides a similar figure for the LabelMe dataset). Using this visualization, we can verify that the AMT workers are quite heterogeneous in their labeling styles and in the kind of mistakes they make, with several workers showing clear biases (e.g. workers 3 and 4), while others made mistakes more randomly (e.g. worker 1). Nevertheless, the proposed model is able to capture these patterns correctly and account for their effect. To gain further insights, Table TABREF96 shows 4 example images from the LabelMe dataset, along with their true labels, the answers provided by the different workers, the true label inferred by the proposed model, and the likelihood of the different possible answers given the true label for each annotator (INLINEFORM0 for INLINEFORM1), using a color-coding scheme similar to Fig. FIGREF95. In the first example, although majority voting suggests inside city to be the correct label, we can see that the model has learned that annotators 32 and 43 are very likely to provide the label inside city when the true label is actually street, and it is able to leverage that fact to infer that the correct label is street. Similarly, in the second image, the model is able to infer the correct true label from 3 conflicting labels.
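The way the learned confusion matrices are leveraged in these examples can be illustrated with a small sketch: it combines each annotator's likelihood of reporting the observed answer under every candidate true class with a prior over classes. In the full model, the document's topic representation also contributes to this posterior; that part is omitted here, and the two-class confusion matrix and annotator ids are toy values chosen to mimic the first example above.

```python
import numpy as np

def infer_true_label(answers, confusions, prior):
    """answers: dict annotator_id -> index of the reported class.
    confusions: dict annotator_id -> (C x C) row-normalized confusion matrix,
                where entry [c, l] = p(annotator reports l | true class is c).
    prior: (C,) prior probabilities over the true classes.
    Returns the posterior distribution over the true class."""
    log_post = np.log(prior)
    for annotator, label in answers.items():
        log_post += np.log(confusions[annotator][:, label])
    post = np.exp(log_post - log_post.max())   # normalize in a stable way
    return post / post.sum()

# Toy usage with two classes: 0 = street, 1 = inside city. Both annotators
# report "inside city", but they are known to report it very often even when
# the true class is street, so the posterior still favours street.
conf = np.array([[0.2, 0.8],    # true street -> usually reported as inside city
                 [0.4, 0.6]])   # true inside city -> reported correctly 60% of the time
answers = {32: 1, 43: 1}
confusions = {32: conf, 43: conf}
print(infer_true_label(answers, confusions, prior=np.array([0.5, 0.5])))
# -> roughly [0.64, 0.36], i.e. "street" wins despite the votes
```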
However, in the third image, the model is not able to recover the correct true class, which can be explained by it not having enough evidence about the annotators and their reliabilities and biases (the likelihood distribution for these cases is uniform). In fact, this raises interesting questions regarding requirements for the minimum number of labels per annotator, their reliabilities and their coherence. Finally, for the fourth image, somewhat surprisingly, the model is able to infer the correct true class even though all 3 annotators labeled it as inside city. As for the proposed classification model, we start by validating MA-sLDAr using simulated annotators on a popular corpus where the documents have associated targets that we wish to predict. For this purpose, we shall consider a dataset of user-submitted restaurant reviews from the website we8there.com. This dataset was originally introduced in BIBREF34, and it consists of 6260 reviews. For each review, there is a five-star rating on four specific aspects of quality (food, service, value and atmosphere) as well as the overall experience. Our goal is then to predict the overall experience of the user based on his comments in the review. We apply the same preprocessing as in BIBREF18, which consists in tokenizing the text into bigrams and discarding those that appear in less than ten reviews. The preprocessing of the documents consisted of stemming and stop-words removal. After that, 75% of the documents were randomly selected for training and the remaining 25% for testing. As with the classification model, we seek to simulate a heterogeneous set of annotators in terms of reliability and bias. Hence, in order to simulate an annotator INLINEFORM0, we proceed as follows: let INLINEFORM1 be the true review of the restaurant; we start by assigning a given bias INLINEFORM2 and precision INLINEFORM3 to the reviewers, depending on what type of annotator we wish to simulate (see Fig. FIGREF45); we then sample a simulated answer as INLINEFORM4. Using this procedure, we simulated 5 annotators with the following (bias, precision) pairs: (0.1, 10), (0.3, 3), (2.5, 10), (0.1, 0.5) and (1, 0.25). The goal is to have 2 good annotators (low bias, high precision), 1 highly biased annotator, and 2 low-precision annotators, where one is unbiased and the other is reasonably biased. The coefficients of determination (INLINEFORM5) of the simulated annotators are: 0.940, 0.785, -2.469, 0.131 and -1.749. Computing the mean of the answers of the different annotators yields an INLINEFORM6 of 0.798. Table TABREF99 gives an overview of the statistics of the datasets used in the regression experiments. We compare the proposed model (MA-sLDAr) with the two following baselines: LDA + LinReg (mean): This baseline corresponds to applying unsupervised LDA to the data and learning a linear regression model on the inferred topics distributions of the documents. The answers from the different annotators were aggregated by computing the mean. sLDA (mean): This corresponds to using the regression version of sLDA BIBREF6 with the target variables obtained by computing the mean of the annotators' answers. Fig. FIGREF102 shows the results obtained for different numbers of topics. Due to the stochastic nature of both the annotators' simulation procedure and the initialization of the variational Bayesian EM algorithm, we repeated each experiment 30 times and report the average INLINEFORM0 obtained, with the corresponding standard deviation. Since the regression datasets that are considered in this article are not large enough to justify the use of a stochastic variational inference (svi) algorithm, we only made experiments using the batch algorithm developed in Section SECREF61.
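A minimal sketch of this simulation and of the R² computation is shown below, using the (bias, precision) pairs listed above; the true ratings are replaced by uniformly sampled stand-ins, so the resulting numbers will only roughly mirror the ones reported.

```python
import numpy as np

def simulate_answers(y_true, bias, precision, rng):
    # Gaussian noise model: answer = true target + bias + noise with
    # standard deviation 1/sqrt(precision).
    noise = rng.normal(0.0, 1.0 / np.sqrt(precision), size=y_true.shape)
    return y_true + bias + noise

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
y_true = rng.uniform(1, 5, size=5000)            # stand-in for the true ratings

annotators = [(0.1, 10), (0.3, 3), (2.5, 10), (0.1, 0.5), (1.0, 0.25)]
answers = [simulate_answers(y_true, b, p, rng) for b, p in annotators]
for (b, p), a in zip(annotators, answers):
    print(f"bias={b}, precision={p}, R^2={r_squared(y_true, a):.3f}")
print("mean of answers R^2:", round(r_squared(y_true, np.mean(answers, axis=0)), 3))
```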
variational inference svi algorithm we only made experiments using the batch algorithm developed in Section SECREF61 The results obtained clearly show the improved performance of MAsLDAr over the other methods The proposed multiannotator regression model MAsLDAr was also validated with real annotators by using AMT For that purpose the movie review dataset from BIBREF35 was used This dataset consists of 5006 movie reviews along with their respective star rating from 1 to 10 The goal of this experiment is then predict how much a person liked a movie based on what she says about it We ask workers to guess how much they think the writer of the review liked the movie based on her comments An average of 496 answers perreview was collected for a total of 1500 reviews The remaining reviews were used for testing In average each worker rated approximately 55 reviews Using the mean answer as an estimate of the true rating of the movie yields a INLINEFORM0 of 0830 Table TABREF99 gives an overview of the statistics of this data Fig FIGREF104 shows boxplots of the number of answers per worker as well as boxplots of their respective biases INLINEFORM1 and variances inverse precisions INLINEFORM2 The preprocessing of the text consisted of stemming and stopwords removal Using the preprocessed data the proposed MAsLDAr model was compared with the same baselines that were used with the we8there dataset in Section UID98 Fig FIGREF105 shows the results obtained for different numbers of topics These results show that the proposed model outperforms all the other baselines With the purpose of verifying that the proposed model is indeed estimating the biases and precisions of the different workers correctly we plotted the true values against the estimates of MAsLDAr with 60 topics for a random subset of 10 workers Fig FIGREF106 shows the obtained results where higher color intensities indicate higher values Ideally the colour of two horizontallyadjacent squares would then be of similar shades and this is indeed what happens in practice for the majority of the workers as Fig FIGREF106 shows Interestingly the figure also shows that there are a couple of workers that are considerably biased eg workers 6 and 8 and that those biases are being correctly estimated thus justifying the inclusion of a bias parameter in the proposed model which contrasts with previous works BIBREF21 BIBREF23 This article proposed a supervised topic model that is able to learn from multiple annotators and crowds by accounting for their biases and different levels of expertise Given the large sizes of modern datasets and considering that the majority of the tasks for which crowdsourcing and multiple annotators are desirable candidates generally involve complex highdimensional data such as text and images the proposed model constitutes a strong contribution for the multiannotator paradigm This model is then capable of jointly modeling the words in documents as arising from a mixture of topics as well as the latent true target variables and the noisy answers of the multiple annotators We developed two distinct models one for classification and another for regression which share similar intuitions but that inevitably differ due to the nature of the target variables We empirically showed using both simulated and real annotators from Amazon Mechanical Turk that the proposed model is able to outperform stateoftheart approaches in several realworld problems such as classifying posts news stories and images or predicting the number of stars of 
a restaurant and the rating of a movie based on their reviews For this we use various popular datasets from the stateoftheart that are commonly used for benchmarking machine learning algorithms Finally an efficient stochastic variational inference algorithm was described which gives the proposed models the ability to scale to large datasets The Fundação para a Ciência e Tecnologia FCT is gratefully acknowledged for funding this work with the grants SFRHBD783962011 and PTDCECMTRA18982012 InfoCROWDS Mariana Lourenço has an MSc degree in Informatics Engineering from the University of Coimbra Portugal Her thesis presented a supervised topic model that is able to learn from crowds and she took part in a research project whose primary objective was to exploit online information about public events to build predictive models of flows of people in the city Her main research interests are machine learning pattern recognition and natural language processing Bernardete Ribeiro is Associate Professor at the Informatics Engineering Department University of Coimbra in Portugal from where she received a DSc in Informatics Engineering a PhD in Electrical Engineering speciality of Informatics and an MSc in Computer Science Her research interests are in the areas of Machine Learning Pattern Recognition and Signal Processing and their applications to a broad range of fields She was responsible for or participated in several research projects in a wide range of application areas such as Text Classification Financial Biomedical and Bioinformatics Bernardete Ribeiro is an IEEE Senior Member and a member of IAPR the International Association for Pattern Recognition and ACM Francisco C Pereira is Full Professor at the Technical University of Denmark DTU where he leads the Smart Mobility research group His main research focus is on applying machine learning and pattern recognition to the context of transportation systems with the purpose of understanding and predicting mobility behavior and modeling and optimizing the transportation system as a whole He has Masters 2000 and PhD 2005 degrees in Computer Science from the University of Coimbra and has authored or coauthored over 70 journal and conference papers in areas such as pattern recognition transportation knowledge based systems and cognitive science Francisco was previously Research Scientist at MIT and Assistant Professor at the University of Coimbra He was awarded several prestigious prizes including an IEEE Achievements award in 2009 the Singapore GYSS Challenge in 2013 and the Pyke Johnson award from the Transportation Research Board in 2015 </s>
<s>[INST] CrossWOZ: A Large-Scale Chinese Cross-Domain Task-Oriented Dialogue Dataset [/INST] Recently there have been a variety of taskoriented dialogue models thanks to the prosperity of neural architectures BIBREF0 BIBREF1 BIBREF2 BIBREF3 BIBREF4 BIBREF5 However the research is still largely limited by the availability of largescale highquality dialogue data Many corpora have advanced the research of taskoriented dialogue systems most of which are single domain conversations including ATIS BIBREF6 DSTC 2 BIBREF7 Frames BIBREF8 KVRET BIBREF9 WOZ 20 BIBREF10 and M2M BIBREF11 Despite the significant contributions to the community these datasets are still limited in size language variation or task complexity Furthermore there is a gap between existing dialogue corpora and reallife human dialogue data In reallife conversations it is natural for humans to transition between different domains or scenarios while still maintaining coherent contexts Thus reallife dialogues are much more complicated than those dialogues that are only simulated within a single domain To address this issue some multidomain corpora have been proposed BIBREF12 BIBREF13 The most notable corpus is MultiWOZ BIBREF12 a largescale multidomain dataset which consists of crowdsourced humantohuman dialogues It contains 10K dialogue sessions and 143K utterances for 7 domains with annotation of systemside dialogue states and dialogue acts However the state annotations are noisy BIBREF14 and userside dialogue acts are missing The dependency across domains is simply embodied in imposing the same prespecified constraints on different domains such as requiring both a hotel and an attraction to locate in the center of the town In comparison to the abundance of English dialogue data surprisingly there is still no widely recognized Chinese taskoriented dialogue corpus In this paper we propose CrossWOZ a largescale Chinese multidomain crossdomain taskoriented dialogue dataset An dialogue example is shown in Figure FIGREF1 We compare CrossWOZ to other corpora in Table TABREF5 and TABREF6 Our dataset has the following features comparing to other corpora particularly MultiWOZ BIBREF12 The dependency between domains is more challenging because the choice in one domain will affect the choices in related domains in CrossWOZ As shown in Figure FIGREF1 and Table TABREF6 the hotel must be near the attraction chosen by the user in previous turns which requires more accurate context understanding It is the first Chinese corpus that contains largescale multidomain taskoriented dialogues consisting of 6K sessions and 102K utterances for 5 domains attraction restaurant hotel metro and taxi Annotation of dialogue states and dialogue acts is provided for both the system side and user side The annotation of user states enables us to track the conversation from the users perspective and can empower the development of more elaborate user simulators In this paper we present the process of dialogue collection and provide detailed data analysis of the corpus Statistics show that our crossdomain dialogues are complicated To facilitate model comparison benchmark models are provided for different modules in pipelined taskoriented dialogue systems including natural language understanding dialogue state tracking dialogue policy learning and natural language generation We also provide a user simulator which will facilitate the development and evaluation of dialogue models on this corpus The corpus and the benchmark models are publicly available at 
httpsgithubcomthucoaiCrossWOZ According to whether the dialogue agent is human or machine we can group the collection methods of existing taskoriented dialogue datasets into three categories The first one is humantohuman dialogues One of the earliest and wellknown ATIS dataset BIBREF6 used this setting followed by BIBREF8 BIBREF9 BIBREF10 BIBREF15 BIBREF16 and BIBREF12 Though this setting requires many human efforts it can collect natural and diverse dialogues The second one is humantomachine dialogues which need a ready dialogue system to converse with humans The famous Dialogue State Tracking Challenges provided a set of humantomachine dialogue data BIBREF17 BIBREF7 The performance of the dialogue system will largely influence the quality of dialogue data The third one is machinetomachine dialogues It needs to build both user and system simulators to generate dialogue outlines then use templates BIBREF3 to generate dialogues or further employ people to paraphrase the dialogues to make them more natural BIBREF11 BIBREF13 It needs much less human effort However the complexity and diversity of dialogue policy are limited by the simulators To explore dialogue policy in multidomain scenarios and to collect natural and diverse dialogues we resort to the humantohuman setting Most of the existing datasets only involve single domain in one dialogue except MultiWOZ BIBREF12 and Schema BIBREF13 MultiWOZ dataset has attracted much attention recently due to its large size and multidomain characteristics It is at least one order of magnitude larger than previous datasets amounting to 8438 dialogues and 115K turns in the training set It greatly promotes the research on multidomain dialogue modeling such as policy learning BIBREF18 state tracking BIBREF19 and contexttotext generation BIBREF20 Recently the Schema dataset is collected in a machinetomachine fashion resulting in 16142 dialogues and 330K turns for 16 domains in the training set However the multidomain dependency in these two datasets is only embodied in imposing the same prespecified constraints on different domains such as requiring a restaurant and an attraction to locate in the same area or the city of a hotel and the destination of a flight to be the same Table TABREF6 Table TABREF5 presents a comparison between our dataset with other taskoriented datasets In comparison to MultiWOZ our dataset has a comparable scale 5012 dialogues and 84K turns in the training set The average number of domains and turns per dialogue are larger than those of MultiWOZ which indicates that our task is more complex The crossdomain dependency in our dataset is natural and challenging For example as shown in Table TABREF6 the system needs to recommend a hotel near the attraction chosen by the user in previous turns Thus both system recommendation and user selection will dynamically impact the dialogue We also allow the same domain to appear multiple times in a user goal since a tourist may want to go to more than one attraction To better track the conversation flow and model user dialogue policy we provide annotation of user states in addition to system states and dialogue acts While the system state tracks the dialogue history the user state is maintained by the user and indicates whether the subgoals have been completed which can be used to predict user actions This information will facilitate the construction of the user simulator To the best of our knowledge CrossWOZ is the first largescale Chinese dataset for taskoriented dialogue systems which will 
largely alleviate the shortage of Chinese taskoriented dialogue corpora that are publicly available Our corpus is to simulate scenarios where a traveler seeks tourism information and plans her or his travel in Beijing Domains include hotel attraction restaurant metro and taxi The data collection process is summarized as below Database Construction we crawled travel information in Beijing from the Web including Hotel Attraction and Restaurant domains hereafter we name the three domains as HAR domains Then we used the metro information of entities in HAR domains to build the metro database For the taxi domain there is no need to store the information Instead we can call the API directly if necessary Goal Generation a multidomain goal generator was designed based on the database The relation across domains is captured in two ways One is to constrain two targets that locate near each other The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context To make workers understand the task more easily we crafted templates to generate natural language descriptions for each structured goal Dialogue Collection before the formal data collection starts we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality Then welltrained workers were paired to converse according to the given goals The workers were also asked to annotate both user states and system states Dialogue Annotation we used some rules to automatically annotate dialogue acts according to user states system states and dialogue histories To evaluate the quality of the annotation of dialogue acts and states three experts were employed to manually annotate dialogue acts and states for 50 dialogues The results show that our annotations are of high quality Finally each dialogue contains a structured goal a task description user states system states dialogue acts and utterances We collected 465 attractions 951 restaurants and 1133 hotels in Beijing from the Web Some statistics are shown in Table TABREF11 There are three types of slots for each entity common slots such as name and address binary slots for hotel services such as wakeup call nearby attractionsrestaurantshotels slots that contain nearby entities in the attraction restaurant and hotel domains Since it is not usual to find another nearby hotel in the hotel domain we did not collect such information This nearby relation allows us to generate natural crossdomain goals such as find another attraction near the first one and find a restaurant near the attraction Nearest metro stations of HAR entities form the metro database In contrast we provided the pseudo car type and plate number for the taxi domain To avoid generating overly complex goals each goal has at most five subgoals To generate more natural goals the subgoals can be of the same domain such as two attractions near each other The goal is represented as a list of subgoal id domain slot value tuples named as semantic tuples The subgoal id is used to distinguish subgoals which may be in the same domain There are two types of slots informable slots which are the constraints that the user needs to inform the system and requestable slots which are the information that the user needs to inquire from the system As shown in Table TABREF13 besides common informable slots italic values whose values are determined before the conversation we specially design crossdomain informable slots bold values whose values refer to other subgoals Crossdomain 
informable slots utilize subgoal id to connect different subgoals Thus the actual constraints vary according to the different contexts instead of being prespecified The values of common informable slots are sampled randomly from the database Based on the informable slots users are required to gather the values of requestable slots blank values in Table TABREF13 through conversation There are four steps in goal generation First we generate independent subgoals in HAR domains For each domain in HAR domains with the same probability mathcal P we generate a subgoal while with the probability of 1mathcal P we do not generate any subgoal for this domain Each subgoal has common informable slots and requestable slots As shown in Table TABREF15 all slots of HAR domains can be requestable slots while the slots with an asterisk can be common informable slots Second we generate crossdomain subgoals in HAR domains For each generated subgoal eg the attraction subgoal in Table TABREF13 if its requestable slots contain nearby hotels we generate an additional subgoal in the hotel domain eg the hotel subgoal in Table TABREF13 with the probability of mathcal Pattractionrightarrow hotel Of course the selected hotel must satisfy the nearby relation to the attraction entity Similarly we do not generate any additional subgoal in the hotel domain with the probability of 1mathcal Pattractionrightarrow hotel This also works for the attraction and restaurant domains mathcal Photelrightarrow hotel0 since we do not allow the user to find the nearby hotels of one hotel Third we generate subgoals in the metro and taxi domains With the probability of mathcal Ptaxi we generate a subgoal in the taxi domain eg the taxi subgoal in Table TABREF13 to commute between two entities of HAR domains that are already generated It is similar for the metro domain and we set mathcal Pmetromathcal Ptaxi All slots in the metro or taxi domain appear in the subgoals and must be filled As shown in Table TABREF15 from and to slots are always crossdomain informable slots while others are always requestable slots Last we rearrange the order of the subgoals to generate more natural and logical user goals We require that a subgoal should be followed by its referred subgoal as immediately as possible To make the workers aware of this crossdomain feature we additionally provide a task description for each user goal in natural language which is generated from the structured goal by handcrafted templates Compared with the goals whose constraints are all prespecified our goals impose much more dependency between different domains which will significantly influence the conversation The exact values of crossdomain informable slots are finally determined according to the dialogue context We developed a specialized website that allows two workers to converse synchronously and make annotations online On the website workers are free to choose one of the two roles tourist user or system wizard Then two paired workers are sent to a chatroom The user needs to accomplish the allocated goal through conversation while the wizard searches the database to provide the necessary information and gives responses Before the formal data collection we trained the workers to complete a small number of dialogues by giving them feedback Finally 90 welltrained workers are participating in the data collection In contrast MultiWOZ BIBREF12 hired more than a thousand workers to converse asynchronously Each worker received a dialogue context to review and need to respond for only 
one turn at a time The collected dialogues may be incoherent because workers may not understand the context correctly and multiple workers contributed to the same dialogue session possibly leading to more variance in the data quality For example some workers expressed two mutually exclusive constraints in two consecutive user turns and failed to eliminate the systems confusion in the next several turns Compared with MultiWOZ our synchronous conversation setting may produce more coherent dialogues The user state is the same as the user goal before a conversation starts At each turn the user needs to 1 modify the user state according to the system response at the preceding turn 2 select some semantic tuples in the user state which indicates the dialogue acts and 3 compose the utterance according to the selected semantic tuples In addition to filling the required values and updating crossdomain informable slots with real values in the user state the user is encouraged to modify the constraints when there is no result under such constraints The change will also be recorded in the user state Once the goal is completed all the values in the user state are filled the user can terminate the dialogue We regard the database query as the system state which records the constraints of each domain till the current turn At each turn the wizard needs to 1 fill the query according to the previous user response and search the database if necessary 2 select the retrieved entities and 3 respond in natural language based on the information of the selected entities If none of the entities satisfy all the constraints the wizard will try to relax some of them for a recommendation resulting in multiple queries The first query records original user constraints while the last one records the constraints relaxed by the system After collecting the conversation data we used some rules to annotate dialogue acts automatically Each utterance can have several dialogue acts Each dialogue act is a tuple that consists of intent domain slot and value We predefine 6 types of intents and use the update of the user state and system state as well as keyword matching to obtain dialogue acts For the user side dialogue acts are mainly derived from the selection of semantic tuples that contain the information of domain slot and value For example if 1 Attraction fee free in Table TABREF13 is selected by the user then Inform Attraction fee free is labelled If 1 Attraction name is selected then Request Attraction name none is labelled If 2 Hotel name near id1 is selected then Select Hotel srcdomain Attraction is labelled This intent is specially designed for the nearby constraint For the system side we mainly applied keyword matching to label dialogue acts Inform intent is derived by matching the system utterance with the information of selected entities When the wizard selects multiple retrieved entities and recommend them Recommend intent is labeled When the wizard expresses that no result satisfies user constraints NoOffer is labeled For General intents such as goodbye thanks at both user and system sides keyword matching is applied We also obtained a binary label for each semantic tuple in the user state which indicates whether this semantic tuple has been selected to be expressed by the user This annotation directly illustrates the progress of the conversation To evaluate the quality of the annotation of dialogue acts and states both user and system states three experts were employed to manually annotate dialogue acts and states for 
the same 50 dialogues 806 utterances 10 for each goal type see Section SECREF4 Since dialogue act annotation is not a classification problem we didnt use Fleiss kappa to measure the agreement among experts We used dialogue act F1 and state accuracy to measure the agreement between each two experts annotations The average dialogue act F1 is 9459 and the average state accuracy is 9355 We then compared our annotations with each experts annotations which are regarded as gold standard The average dialogue act F1 is 9536 and the average state accuracy is 9495 which indicates the high quality of our annotations After removing uncompleted dialogues we collected 6012 dialogues in total The dataset is split randomly for trainingvalidationtest where the statistics are shown in Table TABREF25 The average number of subgoals in our dataset is 324 which is much larger than that in MultiWOZ 180 BIBREF12 and Schema 184 BIBREF13 The average number of turns 169 is also larger than that in MultiWOZ 137 These statistics indicate that our dialogue data are more complex According to the type of user goal we group the dialogues in the training set into five categories 417 dialogues have only one subgoal in HAR domains 1573 dialogues have multiple subgoals 2sim 3 in HAR domains However these subgoals do not have crossdomain informable slots 691 dialogues have multiple subgoals in HAR domains and at least one subgoal in the metro or taxi domain 3sim 5 subgoals The subgoals in HAR domains do not have crossdomain informable slots 1759 dialogues have multiple subgoals 2sim 5 in HAR domains with crossdomain informable slots 572 dialogues have multiple subgoals in HAR domains with crossdomain informable slots and at least one subgoal in the metro or taxi domain 3sim 5 subgoals The data statistics are shown in Table TABREF26 As mentioned in Section SECREF14 we generate independent multidomain cross multidomain and traffic domain subgoals one by one Thus in terms of the task complexity we have SMCM and MMTCMT which is supported by the average number of subgoals semantic tuples and turns per dialogue in Table TABREF26 The average number of tokens also becomes larger when the goal becomes more complex About 60 of dialogues MT CM and CMT have crossdomain informable slots Because of the limit of maximal subgoals number the ratio of dialogue number of CMT to CM is smaller than that of MT to M CM and CMT are much more challenging than other tasks because additional crossdomain constraints in HAR domains are strict and will result in more NoOffer situations ie the wizard finds no result that satisfies the current constraints In this situation the wizard will try to relax some constraints and issue multiple queries to find some results for a recommendation while the user will compromise and change the original goal The negotiation process is captured by NoOffer rate Multiquery rate and Goal change rate in Table TABREF26 In addition Multiquery rate suggests that each subgoal in M and MT is as easy to finish as the goal in S The distribution of dialogue length is shown in Figure FIGREF27 which is an indicator of the task complexity Most singledomain dialogues terminate within 10 turns The curves of M and MT are almost of the same shape which implies that the traffic task requires two additional turns on average to complete the task The curves of CM and CMT are less similar This is probably because CM goals that have 5 subgoals about 22 can not further generate a subgoal in traffic domains and become CMT goals Our corpus is unique 
in the following aspects Complex user goals are designed to favor interdomain dependency and natural transition between multiple domains In return the collected dialogues are more complex and natural for crossdomain dialogue tasks A wellcontrolled synchronous setting is applied to collect humantohuman dialogues This ensures the high quality of the collected dialogues Explicit annotations are provided at not only the system side but also the user side This feature allows us to model user behaviors or develop user simulators more easily CrossWOZ can be used in different tasks or settings of a taskoriented dialogue system To facilitate further research we provided benchmark models for different components of a pipelined taskoriented dialogue system Figure FIGREF32 including natural language understanding NLU dialogue state tracking DST dialogue policy learning and natural language generation NLG These models are implemented using ConvLab2 BIBREF21 an opensource taskoriented dialog system toolkit We also provided a rulebased user simulator which can be used to train dialogue policy and generate simulated dialogue data The benchmark models and simulator will greatly facilitate researchers to compare and evaluate their models on our corpus Task The natural language understanding component in a taskoriented dialogue system takes an utterance as input and outputs the corresponding semantic representation namely a dialogue act The task can be divided into two subtasks intent classification that decides the intent type of an utterance and slot tagging which identifies the value of a slot Model We adapted BERTNLU from ConvLab2 BERT BIBREF22 has shown strong performance in many NLP tasks We use Chinese pretrained BERT BIBREF23 for initialization and then finetune the parameters on CrossWOZ We obtain word embeddings and the sentence representation embedding of CLS from BERT Since there may exist more than one intent in an utterance we modify the traditional method accordingly For dialogue acts of inform and recommend intents such as intentInform domainAttraction slotfee valuefree whose values appear in the sentence we perform sequential labeling using an MLP which takes word embeddings free as input and outputs tags in BIO schema BInformAttractionfee For each of the other dialogue acts eg intentRequest domainAttraction slotfee that do not have actual values we use another MLP to perform binary classification on the sentence representation to predict whether the sentence should be labeled with this dialogue act To incorporate context information we use the same BERT to get the embedding of last three utterances We separate the utterances with SEP tokens and insert a CLS token at the beginning Then each original input of the two MLP is concatenated with the context embedding embedding of CLS serving as the new input We also conducted an ablation test by removing context information We trained models with both systemside and userside utterances Result Analysis The results of the dialogue act prediction F1 score are shown in Table TABREF31 We further tested the performance on different intent types as shown in Table TABREF35 In general BERTNLU performs well with context information The performance on cross multidomain dialogues CM and CMT drops slightly which may be due to the decrease of General intent and the increase of NoOffer as well as Select intent in the dialogue data We also noted that the F1 score of Select intent is remarkably lower than those of other types but context information can improve 
the performance significantly Since recognizing domain transition is a key factor for a crossdomain dialogue system natural language understanding models need to utilize context information more effectively Task Dialogue state tracking is responsible for recognizing user goals from the dialogue context and then encoding the goals into the predefined system state Traditional state tracking models take as input user dialogue acts parsed by natural language understanding modules while recently there are joint models obtaining the system state directly from the context Model We implemented a rulebased model RuleDST and adapted TRADE Transferable Dialogue State Generator BIBREF19 in this experiment RuleDST takes as input the previous system state and the last user dialogue acts Then the system state is updated according to handcrafted rules For example If one of user dialogue acts is intentInform domainAttraction slotfee valuefree then the value of the fee slot in the attraction domain will be filled with free TRADE generates the system state directly from all the previous utterances using a copy mechanism As mentioned in Section SECREF18 the first query of the system often records full user constraints while the last one records relaxed constraints for recommendation Thus the last one involves system policy which is out of the scope of state tracking We used the first query for these models and left state tracking with recommendation for future work Result Analysis We evaluated the joint state accuracy percentage of exact matching of these two models Table TABREF31 TRADE the stateoftheart model on MultiWOZ performs poorly on our dataset indicating that more powerful state trackers are necessary At the test stage RuleDST can access the previous gold system state and user dialogue acts which leads to higher joint state accuracy than TRADE Both models perform worse on cross multidomain dialogues CM and CMT To evaluate the ability of modeling crossdomain transition we further calculated joint state accuracy for those turns that receive Select intent from users eg Find a hotel near the attraction The performances are 116 and 120 for RuleDST and TRADE respectively showing that they are not able to track domain transition well Task Dialogue policy receives state s and outputs system action a at each turn Compared with the state given by a dialogue state tracker s may have more information such as the last user dialogue acts and the entities provided by the backend database Model We adapted a vanilla policy trained in a supervised fashion from ConvLab2 SL policy The state s consists of the last system dialogue acts last user dialogue acts system state of the current turn the number of entities that satisfy the constraints in the current domain and a terminal signal indicating whether the user goal is completed The action a is delexicalized dialogue acts of current turn which ignores the exact values of the slots where the values will be filled back after prediction Result Analysis As illustrated in Table TABREF31 there is a large gap between F1 score of exact dialogue act and F1 score of delexicalized dialogue act which means we need a powerful system state tracker to find correct entities The result also shows that cross multidomain dialogues CM and CMT are harder for system dialogue act prediction Additionally when there is Select intent in preceding user dialogue acts the F1 score of exact dialogue act and delexicalized dialogue act are 4153 and 5439 respectively This shows that the policy performs 
poorly for crossdomain transition Task Natural language generation transforms a structured dialogue act into a natural language sentence It usually takes delexicalized dialogue acts as input and generates a templatestyle sentence that contains placeholders for slots Then the placeholders will be replaced by the exact values which is called lexicalization Model We provided a templatebased model named TemplateNLG and SCLSTM Semantically Conditioned LSTM BIBREF1 for natural language generation For TemplateNLG we extracted templates from the training set and manually added some templates for infrequent dialogue acts For SCLSTM we adapted the implementation on MultiWOZ and trained two SCLSTM with systemside and userside utterances respectively Result Analysis We calculated corpuslevel BLEU as used by BIBREF1 We took all utterances with the same delexcalized dialogue acts as references 100 references on average which results in high BLEU score For userside utterances the BLEU score for TemplateNLG is 05780 while the BLEU score for SCLSTM is 07858 For systemside the two scores are 06828 and 08595 As exemplified in Table TABREF39 the gap between the two models can be attributed to that SCLSTM generates common pattern while TemplateNLG retrieves original sentence which has more specific information We do not provide BLEU scores for different goal types namely S M CM etc because BLEU scores on different corpus are not comparable Task A user simulator imitates the behavior of users which is useful for dialogue policy learning and automatic evaluation A user simulator at dialogue act level eg the Usr Policy in Figure FIGREF32 receives the system dialogue acts and outputs user dialogue acts while a user simulator at natural language level eg the left part in Figure FIGREF32 directly takes systems utterance as input and outputs users utterance Model We built a rulebased user simulator that works at dialogue act level Different from agendabased BIBREF24 user simulator that maintains a stacklike agenda our simulator maintains the user state straightforwardly Section SECREF17 The simulator will generate a user goal as described in Section SECREF14 At each user turn the simulator receives system dialogue acts modifies its state and outputs user dialogue acts according to some handcrafted rules For example if the system inform the simulator that the attraction is free then the simulator will fill the fee slot in the user state with free and ask for the next empty slot such as address The simulator terminates when all requestable slots are filled and all crossdomain informable slots are filled by real values Result Analysis During the evaluation we initialized the user state of the simulator using the previous gold user state The input to the simulator is the gold system dialogue acts We used joint state accuracy percentage of exact matching to evaluate user state prediction and F1 score to evaluate the prediction of user dialogue acts The results are presented in Table TABREF31 We can observe that the performance on complex dialogues CM and CMT is remarkably lower than that on simple ones S M and MT This simple rulebased simulator is provided to facilitate dialogue policy learning and automatic evaluation and our corpus supports the development of more elaborated simulators as we provide the annotation of userside dialogue states and dialogue acts In addition to corpusbased evaluation for each module we also evaluated the performance of a whole dialogue system using the user simulator as described above 
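Before turning to the system-level simulation configurations, the following sketch illustrates the kind of hand-crafted update logic behind the rule-based user simulator just described. It is only a simplified illustration under assumed data structures: the class name, the semantic-tuple format, and the exact intent strings are ours rather than the released code, and it omits the handling of cross-domain informable slots (values that refer to entities chosen in earlier sub-goals) as well as goal changes after NoOffer turns.

# Illustrative sketch (not the authors' implementation) of a rule-based
# user simulator that keeps the user state as a list of semantic tuples,
# updates it from system dialogue acts, and emits user dialogue acts
# until every requestable slot holds a real value.
class RuleUserSimulator:
    def __init__(self, goal):
        # goal: list of (sub_goal_id, domain, slot, value) semantic tuples.
        # Tuples with a value are constraints the user must inform; tuples
        # with an empty value are requestable slots the system must fill.
        self.state = [{"domain": d, "slot": s, "value": v,
                       "requestable": not v, "expressed": False}
                      for (_, d, s, v) in goal]

    def step(self, system_acts):
        # 1) Update the user state with values informed or recommended by
        #    the system, e.g. ("Inform", "Attraction", "fee", "free").
        for intent, domain, slot, value in system_acts:
            if intent in ("Inform", "Recommend"):
                for t in self.state:
                    if (t["requestable"] and t["domain"] == domain
                            and t["slot"] == slot and not t["value"]):
                        t["value"] = value
        # 2) Express the next pending constraint, if any ...
        for t in self.state:
            if not t["requestable"] and not t["expressed"]:
                t["expressed"] = True
                return [("Inform", t["domain"], t["slot"], t["value"])], False
        # ... otherwise ask for the next empty requestable slot.
        for t in self.state:
            if t["requestable"] and not t["value"]:
                return [("Request", t["domain"], t["slot"], "none")], False
        # 3) All slots hold real values: thank the system and terminate.
        return [("General", "thank", "none", "none")], True

For instance, initialized with the attraction sub-goal of Table TABREF13, this sketch would first inform the fee constraint and then request the still-empty name and address slots, terminating once the system has provided both.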
Three configurations were explored Simulation at dialogue act level As shown by the dashed connections in Figure FIGREF32 we used the aforementioned simulator at the user side and assembled the dialogue system with RuleDST and SL policy Simulation at natural language level using TemplateNLG As shown by the solid connections in Figure FIGREF32 the simulator and the dialogue system were equipped with BERTNLU and TemplateNLG additionally Simulation at natural language level using SCLSTM TemplateNLG was replaced with SCLSTM in the second configuration When all the slots in a user goal are filled by real values the simulator terminates This is regarded as task finish Its worth noting that task finish does not mean the task is success because the system may provide wrong information We calculated task finish rate on 1000 times simulations for each goal type See Table TABREF31 Findings are summarized below Cross multidomain tasks CM and CMT are much harder to finish Comparing M and MT although each module performs well in traffic domains additional subgoals in these domains are still difficult to accomplish The systemlevel performance is largely limited by RuleDST and SL policy Although the corpusbased performance of NLU and NLG modules is high the two modules still harm the performance Thus more powerful models are needed for all components of a pipelined dialogue system TemplateNLG has a much lower BLEU score but performs better than SCLSTM in natural language level simulation This may be attributed to that BERTNLU prefers templates retrieved from the training set In this paper we present the first largescale Chinese CrossDomain taskoriented dialogue dataset CrossWOZ It contains 6K dialogues and 102K utterances for 5 domains with the annotation of dialogue states and dialogue acts at both user and system sides About 60 of the dialogues have crossdomain user goals which encourage natural transition between related domains Thanks to the rich annotation of dialogue states and dialogue acts at both user side and system side this corpus provides a new testbed for a wide range of tasks to investigate crossdomain dialogue modeling such as dialogue state tracking policy learning etc Our experiments show that the crossdomain constraints are challenging for all these tasks The transition between related domains is especially challenging to model Besides corpusbased componentwise evaluation we also performed systemlevel evaluation with a user simulator which requires more powerful models for all components of a pipelined crossdomain dialogue system This work was supported by the National Science Foundation of China Grant No 6193601061876096 and the National Key RD Program of China Grant No 2018YFC0830200 We would like to thank THUNUS NExT JointLab for the support We would also like to thank Ryuichi Takanobu and Fei Mi for their constructive comments We are grateful to our action editor Bonnie Webber and the anonymous reviewers for their valuable suggestions and feedback </s>
<s>[INST] BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Model Performance [/INST] As traditional word embedding algorithms BIBREF1 are known to struggle with rare words several techniques for improving their representations have been proposed over the last few years These approaches exploit either the contexts in which rare words occur BIBREF2 BIBREF3 BIBREF4 BIBREF5 their surfaceform BIBREF6 BIBREF7 BIBREF8 or both BIBREF9 BIBREF10 However all of these approaches are designed for and evaluated on uncontextualized word embeddings With the recent shift towards contextualized representations obtained from pretrained deep language models BIBREF11 BIBREF12 BIBREF13 BIBREF14 the question naturally arises whether these approaches are facing the same problem As all of them already handle rare words implicitly using methods such as bytepair encoding BIBREF15 and WordPiece embeddings BIBREF16 or even characterlevel CNNs BIBREF17 it is unclear whether these models even require special treatment of rare words However the listed methods only make use of surfaceform information whereas BIBREF9 found that for covering a wide range of rare words it is crucial to consider both surfaceform and contexts Consistently BIBREF0 recently showed that for BERT BIBREF13 a popular pretrained language model based on a Transformer architecture BIBREF18 performance on a rare word probing task can significantly be improve by relearning representations of rare words using Attentive Mimicking BIBREF19 However their proposed model is limited in two important respects For processing contexts it uses a simple bagofwords model throwing away much of the available information It combines form and context only in a shallow fashion thus preventing both input signals from sharing information in any sophisticated manner Importantly this limitation applies not only to their model but to all previous work on obtaining representations for rare words by leveraging form and context While using bagofwords models is a reasonable choice for uncontextualized embeddings which are often themselves based on such models BIBREF1 BIBREF7 it stands to reason that they are suboptimal for contextualized embeddings based on positionaware deep neural architectures To overcome these limitations we introduce Bertram BERT for Attentive Mimicking a novel architecture for understanding rare words that combines a pretrained BERT language model with Attentive Mimicking BIBREF19 Unlike previous approaches making use of language models BIBREF5 our approach integrates BERT in an endtoend fashion and directly makes use of its hidden states By giving Bertram access to both surface form and context information already at its very lowest layer we allow for a deep connection and exchange of information between both input signals For various reasons assessing the effectiveness of methods like Bertram in a contextualized setting poses a huge difficulty While most previous work on rare words was evaluated on datasets explicitly focusing on such words BIBREF6 BIBREF3 BIBREF4 BIBREF5 BIBREF10 all of these datasets are tailored towards contextindependent embeddings and thus not suitable for evaluating our proposed model Furthermore understanding rare words is of negligible importance for most commonly used downstream task datasets To evaluate our proposed model we therefore introduce a novel procedure that allows us to automatically turn arbitrary text classification datasets into ones where rare words are guaranteed to be important This is achieved 
by replacing classificationrelevant frequent words with rare synonyms obtained using semantic resources such as WordNet BIBREF20 Using this procedure we extract rare word datasets from three commonly used text or text pair classification datasets MNLI BIBREF21 AGs News BIBREF22 and DBPedia BIBREF23 On both the WNLaMPro dataset of BIBREF0 and all three soobtained datasets our proposed Bertram model outperforms previous work by a large margin In summary our contributions are as follows We show that a pretrained BERT instance can be integrated into Attentive Mimicking resulting in much better context representations and a deeper connection of form and context We design a procedure that allows us to automatically transform text classification datasets into datasets for which rare words are guaranteed to be important We show that Bertram achieves a new stateoftheart on the WNLaMPro probing task BIBREF0 and beats all baselines on rare word instances of AGs News MNLI and DBPedia resulting in an absolute improvement of up to 24 over a BERT baseline Incorporating surfaceform information eg morphemes characters or character ngrams is a commonly used technique for improving word representations For contextindependent word embeddings this information can either be injected into a given embedding space BIBREF6 BIBREF8 or a model can directly be given access to it during training BIBREF7 BIBREF24 BIBREF25 In the area of contextualized representations many architectures employ subword segmentation methods BIBREF12 BIBREF13 BIBREF26 BIBREF14 whereas others use convolutional neural networks to directly access characterlevel information BIBREF27 BIBREF11 BIBREF17 Complementary to surface form another useful source of information for understanding rare words are the contexts in which they occur BIBREF2 BIBREF3 BIBREF4 As recently shown by BIBREF19 BIBREF9 combining form and context leads to significantly better results than using just one of both input signals for a wide range of tasks While all aforementioned methods are based on simple bagofwords models BIBREF5 recently proposed an architecture based on the context2vec language model BIBREF28 However in contrast to our work they i do not incorporate surfaceform information and ii do not directly access the hidden states of the language model but instead simply use its output distribution There are several datasets explicitly focusing on rare words eg the Stanford Rare Word dataset of BIBREF6 the Definitional Nonce dataset of BIBREF3 and the Contextual Rare Word dataset BIBREF4 However all of these datasets are only suitable for evaluating contextindependent word representations Our proposed method of generating rare word datasets is loosely related to adversarial example generation methods such as HotFlip BIBREF29 which manipulate the input to change a models prediction We use a similar mechanism to determine which words in a given sentence are most important and replace these words with rare synonyms We review the architecture of the formcontext model FCM BIBREF9 which forms the basis for our model Given a set of ddimensional highquality embeddings for frequent words FCM can be used to induce embeddings for infrequent words that are appropriate for the given embedding space This is done as follows Given a word w and a context C in which it occurs a surfaceform embedding vwCtextform in mathbb Rd is obtained similar to BIBREF7 by averaging over embeddings of all ngrams in w these ngram embeddings are learned during training Similarly a context embedding 
vwCtextcontext in mathbb Rd is obtained by averaging over the embeddings of all words in C The soobtained form and context embeddings are then combined using a gate with parameters w in mathbb R2d b in mathbb R and sigma denoting the sigmoid function allowing the model to decide for each pair xy of form and context embeddings how much attention should be paid to x and y respectively The final representation of w is then simply a weighted sum of form and context embeddings where alpha gvwCtextform vwCtextcontext and A is a dtimes d matrix that is learned during training While the contextpart of FCM is able to capture the broad topic of numerous rare words in many cases it is not able to obtain a more concrete and detailed understanding thereof BIBREF9 This is hardly surprising given the models simplicity it does for example make no use at all of the relative positions of context words Furthermore the simple gating mechanism results in only a shallow combination of form and context That is the model is not able to combine form and context until the very last step While it can choose how much to attend to form and context respectively the corresponding embeddings do not share any information and thus cannot influence each other in any way To overcome both limitations described above we introduce Bertram an approach that combines a pretrained BERT language model BIBREF13 with Attentive Mimicking BIBREF19 To this end let dh be the hidden dimension size and ltextmax be the number of layers for the BERT model being used We denote with et the uncontextualized embedding assigned to a token t by BERT and given a sequence of such uncontextualized embeddings mathbf e e1 ldots en we denote by textbf hjltextbf e the contextualized representation of the jth token at layer l when the model is given mathbf e as input Given a word w and a context C w1 ldots wn in which it occurs let mathbf t t1 ldots tm with m ge n be the sequence obtained from C by i replacing w with a MASK token and ii tokenizing the soobtained sequence to match the BERT vocabulary furthermore let i denote the index for which ti texttt MASK Perhaps the most simple approach for obtaining a context embedding from C using BERT is to define where mathbf e et1 ldots etm The soobtained context embedding can then be combined with its form counterpart as described in Eq DISPLAYFORM8 While this achieves our first goal of using a more sophisticated context model that can potentially gain a deeper understanding of a word than just its broad topic the soobtained architecture still only combines form and context in a shallow fashion We thus refer to it as the shallow variant of our model and investigate two alternative approaches replace and add that work as follows Replace Before computing the context embedding we replace the uncontextualized embedding of the MASK token with the words surfaceform embedding As during BERT pretraining words chosen for prediction are replaced with MASK tokens only 80 of the time and kept unchanged 10 of the time we hypothesize that even without further training BERT is able to make use of form embeddings ingested this way Add Before computing the context embedding we prepad the input with the surfaceform embedding of w followed by a colon We also experimented with various other prefixes but ended up choosing this particular strategy because we empirically found that after masking a token t adding the sequence t at the beginning helps BERT the most in recovering this very token at the masked position tnodestylerectangle 
inner sep01cm minimum height4ex text centeredtext height15ex text depth025ex opnodestyledraw rectangle rounded corners minimum height4ex minimum width4ex text centered arrowstyledrawstealth As for both variants surfaceform information is directly and deeply integrated into the computation of the context embedding we do not require any further gating mechanism and may directly set vwC A cdot vtextcontextwC However we note that for the add variant the contextualized representation of the MASK token is not the only natural candidate to be used for computing the final embedding We might just as well look at the contextualized representation of the surfaceform based embedding added at the very first position Therefore we also try a shallow combination of both embeddings Note however that unlike FCM we combine the contextualized representations that is the form part was already influenced by the context part and vice versa before combining them using a gate For this combination we define with Aprime in mathbb Rd times dh being an additional learnable parameter We then combine the two contextualized embeddings similar to Eq DISPLAYFORM8 as where alpha ghtextformwC htextcontextwC We refer to this final alternative as the addgated approach The model architecture for this variant can be seen in Figure FIGREF14 left As in many cases not just one but a handful of contexts is known for a rare word we follow the approach of BIBREF19 to deal with multiple contexts We add an Attentive Mimicking head on top of our model as can be seen in Figure FIGREF14 right That is given a set of contexts mathcal C lbrace C1 ldots Cmrbrace and the corresponding embeddings vwC1 ldots vwCm we apply a selfattention mechanism to all embeddings allowing the model to distinguish informative contexts from uninformative ones The final embedding vw mathcal C is then a linear combination of the embeddings obtained from each context where the weight of each embedding is determined based on the selfattention layer For further details on this mechanism we refer to BIBREF19 Like previous work we use mimicking BIBREF8 as a training objective That is given a frequent word w with known embedding ew and a set of corresponding contexts mathcal C Bertram is trained to minimize Vert ew vw mathcal CVert 2 As training Bertram endtoend requires much computation processing a single training instance wmathcal C is as costly as processing an entire batch of mathcal C examples in the original BERT architecture we resort to the following threestage training process We train only the form part ie our loss for a single example w mathcal C is Vert ew vtextformw mathcal C Vert 2 We train only the context part minimizing Vert ew A cdot vtextcontextw mathcal C Vert 2 where the context embedding is obtained using the shallow variant of Bertram Furthermore we exclude all of BERTs parameters from our optimization We combine the pretrained formonly and contextonly model and train all additional parameters Pretraining the form and context parts individually allows us to train the full model for much fewer steps with comparable results Importantly for the first two stages of our training procedure we do not have to backpropagate through the entire BERT model to obtain all required gradients drastically increasing the training speed To measure the quality of rare word representations in a contextualized setting we would ideally need text classification datasets with the following two properties A model that has no understanding of rare words at all should 
perform close to 0 A model that perfectly understands rare words should be able to classify every instance correctly Unfortunately this requirement is not even remotely fulfilled by most commonly used datasets simply because rare words occur in only a few entries and when they do they are often of negligible importance To solve this problem we devise a procedure to automatically transform existing text classification datasets such that rare words become important For this procedure we require a pretrained language model M as a baseline an arbitrary text classification dataset mathcal D containing labelled instances mathbf x y and a substitution dictionary S mapping each word w to a set of rare synonyms Sw Given these ingredients our procedure consists of three steps i splitting the dataset into a train set and a set of test candidates ii training the baseline model on the train set and iii modifying a subset of the test candidates to generate the final test set We partition mathcal D into a train set mathcal Dtexttrain and a set of test candidates mathcal Dtextcand with the latter containing all instances mathbf xy in mathcal D such that for at least one word w in mathbf x Sw ne emptyset Additionally we require that the training set consists of at least one third of the entire data We finetune M on mathcal Dtexttrain Let mathbf x y in mathcal Dtexttrain where mathbf x w1 ldots wn is a sequence of words We deviate from the standard finetuning procedure of BIBREF13 in three respects We randomly replace 5 of all words in mathbf x with a MASK token This allows the model to cope with missing or unknown words a prerequisite for our final test set generation As an alternative to overwriting the language models uncontextualized embeddings for rare words we also want to allow models to simply add an alternative representation during test time in which case we simply separate both representations by a slash To accustom the language model to this duplication of words we replace each word wi with wi wi with a probability of 10 To make sure that the model does not simply learn to always focus on the first instance during training we randomly mask each of the two repetitions with probability 25 We do not finetune the models embedding layer In preliminary experiments we found this not to hurt performance Let py mid mathbf x be the probability that the finetuned model M assigns to class y given input mathbf x and let be the models prediction for input mathbf x where mathcal Y denotes the set of all labels For generating our test set we only consider candidates that are classified correctly by the baseline model ie candidates mathbf x y in mathcal Dtextcand with Mmathbf x y For each such entry let mathbf x w1 ldots wn and let mathbf xwi t be the sequence obtained from mathbf x by replacing wi with t We compute ie we select the word wi whose masking pushes the models prediction the furthest away from the correct label If removing this word already changes the models prediction that is Mmathbf xwi texttt MASK ne y we select a random rare synonym hatwi in Swi and add mathbf xwi hatwi y to the test set Otherwise we repeat the above procedure if the label still has not changed after masking up to 5 words we discard the corresponding entry All soobtained test set entries mathbf xwi1 hatwi1 ldots wik hatwik y have the following properties If each wij is replaced by a MASK token the entry is classified incorrectly by M In other words understanding the words wij is essential for M to determine the correct label If 
the models internal representation of each hatwij is equal to its representation of wij the entry is classified correctly by M That is if the model is able to understand the rare words hatwij and to identify them as synonyms of wij it predicts the correct label for each instance It is important to notice that the soobtained test set is very closely coupled to the baseline model M because we selected the words to replace based on the models predictions Importantly however the model is never queried with any rare synonym during test set generation so its representations of rare words are not taken into account for creating the test set Thus while the test set is not suitable for comparing M with an entirely different model Mprime it allows us to compare various strategies for representing rare words in the embedding space of M A similar constraint can be found in the Definitional Nonce dataset BIBREF3 which is tied to a given embedding space based on Word2Vec BIBREF1 For our evaluation of Bertram we largely follow the experimental setup of BIBREF0 Our implementation of Bertram is based on PyTorch BIBREF30 and the Transformers library of BIBREF31 Throughout all of our experiments we use BERTtextbase as the underlying language model for Bertram To obtain embeddings for frequent multitoken words during training we use onetoken approximation BIBREF0 Somewhat surprisingly we found in preliminary experiments that excluding BERTs parameters from the finetuning procedure outlined in Section SECREF17 improves performance while speeding up training we thus exclude them in the third step of our training procedure While BERT was trained on BooksCorpus BIBREF32 and a large Wikipedia dump we follow previous work and train Bertram on only the much smaller Westbury Wikipedia Corpus WWC BIBREF33 this of course gives BERT a clear advantage over our proposed method In order to at least partially compensate for this in our downstream task experiments we gather the set of contexts mathcal C for a given rare word from both the WWC and BooksCorpus during inference We evalute Bertram on the WNLaMPro dataset of BIBREF0 This dataset consists of clozestyle phrases like and the task is to correctly fill the slot with one of several acceptable target words eg fruit bush and berry which requires knowledge of the phrases keyword lingonberry in the above example As the goal of this dataset is to probe a language models ability to understand rare words without any taskspecific finetuning BIBREF0 do not provide a training set Furthermore the dataset is partitioned into three subsets this partition is based on the frequency of the keyword with keywords occurring less than 10 times in the WWC forming the rare subset those occurring between 10 and 100 times forming the medium subset and all remaining words forming the frequent subset As our focus is on improving representations for rare words we evaluate our model only on the former two sets Results on WNLaMPro rare and medium are shown in Table TABREF34 where the mean reciprocal rank MRR is reported for BERT Attentive Mimicking and Bertram As can be seen supplementing BERT with any of the proposed relearning methods results in noticeable improvements for the rare subset with add clearly outperforming replace Moreover the add and addgated variants of Bertram perform surprisingly well for more frequent words improving the score for WNLaMPromedium by 50 compared to BERTtextbase and 31 compared to Attentive Mimicking This makes sense considering that compared to Attentive 
Mimicking the key enhancement of Bertram lies in improving context representations and interconnection of form and context naturally the more contexts are given the more this comes into play Noticeably despite being both based on and integrated into a BERTtextbase model our architecture even outperforms a standalone BERTtextlarge model by a large margin To measure the effect of adding Bertram to BERT on downstream tasks we apply the procedure described in Section SECREF4 to a commonly used textual entailment dataset as well as two text classification datasets MNLI BIBREF21 AGs News BIBREF22 and DBPedia BIBREF23 For all three datasets we use BERTtextbase as a baseline model and create the substitution dictionary S using the synonym relation of WordNet BIBREF20 and the pattern library BIBREF34 to make sure that all synonyms have consistent parts of speech As an additional source of word substitutions we make use of the misspellings dataset of BIBREF25 which is based on query logs of a search engine To prevent misspellings from dominating the resulting dataset we only assign misspellingbased substitutes to randomly selected 10 of the words contained in each sentence Motivated by the results on WNLaMPromedium we consider every word that occurs less than 100 times in the WWC and our BooksCorpus replica combined as being rare Some examples of entries in the resulting datasets can be seen in Table TABREF35 Just like for WNLaMPro our default way of injecting Bertram embeddings into the baseline model is to replace the sequence of uncontextualized WordPiece tokens for a given rare word with its Bertrambased embedding That is given a sequence of uncontextualized token embeddings mathbf e e1 ldots en where ei ldots eij with 1 le i le ij le n is the sequence of WordPiece embeddings for a single rare word w we replace mathbf e with By default the set of contexts mathcal C required for this replacement is obtained by collecting all sentences from the WWC and BooksCorpus in which w occurs As our model architecture allows us to easily include new contexts without requiring any additional training we also try a variant where we add indomain contexts by giving the model access to the texts found in the test set In addition to the procedure described above we also try a variant where instead of replacing the original WordPiece embeddings for a given rare word we merely add the Bertrambased embedding separating both representations using a single slash As it performs best on the rare and medium subsets of WNLaMPro combined we use only the addgated variant of Bertram for all datasets Results can be seen in Table TABREF37 where for each task we report the accuracy on the entire dataset as well as scores obtained considering only instances where at least one word was replaced by a misspelling or a WordNet synonym respectively Consistent with results on WNLaMPro combining BERT with Bertram outperforms both a standalone BERT model and one combined with Attentive Mimicking across all tasks While keeping the original BERT embeddings in addition to Bertrams representation brings no benefit adding indomain data clearly helps for two out of three datasets This makes sense as for rare words every single additional context can be crucial for gaining a deeper understanding To further understand for which words using Bertram is helpful in Figure FIGREF39 we look at the accuracy of BERT both with and without Bertram on all three tasks as a function of word frequency That is we compute the accuracy scores for both models 
when considering only entries mathbf xwi1 hatwi1 ldots wik hatwik y where each substituted word hatwij occurs less than ctextmax times in WWC and BooksCorpus for various values of ctextmax As one would expect ctextmax is positively correlated with the accuracies of both models showing that the rarer a word is the harder it is to understand Perhaps more interestingly for all three datasets the gap between Bertram and BERT remains more or less constant regardless of ctextmax This indicates that using Bertram might also be useful for even more frequent words than the ones considered We have introduced Bertram a novel architecture for relearning highquality representations of rare words This is achieved by employing a powerful pretrained language model and deeply connecting surfaceform and context information By replacing important words with rare synonyms we have created various downstream task datasets focusing on rare words on all of these datasets Bertram improves over a BERT model without special handling of rare words demonstrating the usefulness of our proposed method As our analysis has shown that even for the most frequent words considered using Bertram is still beneficial future work might further investigate the limits of our proposed method Furthermore it would be interesting to explore more complex ways of incorporating surfaceform information eg by using a characterlevel CNN similar to the one of BIBREF27 to balance out the potency of Bertrams form and context parts </s>
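As a complement to the conclusion above, the dataset-transformation procedure used to create these rare-word test sets can be summarised in a short sketch. The following is a minimal, illustrative Python version, assuming a fine-tuned classifier exposed as predict_proba(tokens) returning per-label probabilities and predict(tokens) returning the argmax label, plus a substitution dictionary mapping words to their rare synonyms; these names are assumptions for illustration and do not come from any released implementation.

```python
# Minimal sketch of the test-set generation step: repeatedly mask the word
# whose removal pushes the model furthest from the gold label; once the
# prediction flips, substitute rare synonyms for the masked words.
import random

MASK = "[MASK]"

def build_test_entry(tokens, label, predict_proba, predict, synonyms, max_masks=5):
    tokens = list(tokens)
    replaced = []                      # positions and original words masked so far
    for _ in range(max_masks):
        best_i, best_p = None, float("inf")
        for i, word in enumerate(tokens):
            if word == MASK or not synonyms.get(word):
                continue               # only words with rare synonyms can be swapped
            p = predict_proba(tokens[:i] + [MASK] + tokens[i + 1:])[label]
            if p < best_p:
                best_i, best_p = i, p  # masking this word hurts the gold label most
        if best_i is None:
            return None
        replaced.append((best_i, tokens[best_i]))
        tokens[best_i] = MASK
        if predict(tokens) != label:   # prediction flipped: these words are essential
            out = list(tokens)
            for i, word in replaced:
                out[i] = random.choice(sorted(synonyms[word]))
            return out, label
    return None                        # label never changed: discard this candidate
```

Entries produced this way retain the two properties stated earlier: the model fails when the selected words are masked, and succeeds if it can map each rare synonym back to the representation of the word it replaced.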
<s>[INST] Joint Entity Linking with Deep Reinforcement Learning [/INST] Entity Linking EL which is also called Entity Disambiguation ED is the task of mapping mentions in text to corresponding entities in a given knowledge Base KB This task is an important and challenging stage in text understanding because mentions are usually ambiguous ie different named entities may share the same surface form and the same entity may have multiple aliases EL is key for information retrieval IE and has many applications such as knowledge base population KBP question answering QA etc Existing EL methods can be divided into two categories local model and global model Local models concern mainly on contextual words surrounding the mentions where mentions are disambiguated independently These methods are not work well when the context information is not rich enough Global models take into account the topical coherence among the referred entities within the same document where mentions are disambiguated jointly Most of previous global models BIBREF0 BIBREF1 BIBREF2 calculate the pairwise scores between all candidate entities and select the most relevant group of entities However the consistency among wrong entities as well as that among right ones are involved which not only increases the model complexity but also introduces some noises For example in Figure 1 there are three mentions France Croatia and 2018 World Cup and each mention has three candidate entities Here France may refer to French Republic France national basketball team or France national football team in KB It is difficult to disambiguate using local models due to the scarce common information in the contextual words of France and the descriptions of its candidate entities Besides the topical coherence among the wrong entities related to basketball team linked by an orange dashed line may make the global models mistakenly refer France to France national basketball team So how to solve these problems We note that mentions in text usually have different disambiguation difficulty according to the quality of contextual information and the topical coherence Intuitively if we start with mentions that are easier to disambiguate and gain correct results it will be effective to utilize information provided by previously referred entities to disambiguate subsequent mentions In the above example it is much easier to map 2018 World Cup to 2018 FIFA World Cup based on their common contextual words France Croatia 42 Then it is obvious that France and Croatia should be referred to the national football team because footballrelated terms are mentioned many times in the description of 2018 FIFA World Cup Inspired by this intuition we design the solution with three principles i utilizing local features to rank the mentions in text and deal with them in a sequence manner ii utilizing the information of previously referred entities for the subsequent entity disambiguation iii making decisions from a global perspective to avoid the error propagation if the previous decision is wrong In order to achieve these aims we consider global EL as a sequence decision problem and proposed a deep reinforcement learning RL based model RLEL for short which consists of three modules Local Encoder Global Encoder and Entity Selector For each mention and its candidate entities Local Encoder encodes the local features to obtain their latent vector representations Then the mentions are ranked according to their disambiguation difficulty which is measured by the learned vector 
representations In order to enforce global coherence between mentions Global Encoder encodes the local representations of mentionentity pairs in a sequential manner via a LSTM network which maintains a longterm memory on features of entities which has been selected in previous states Entity Selector uses a policy network to choose the target entities from the candidate set For a single disambiguation decision the policy network not only considers the pairs of current mentionentity representations but also concerns the features of referred entities in the previous states which is pursued by the Global Encoder In this way Entity Selector is able to take actions based on the current state and previous ones When eliminating the ambiguity of all mentions in the sequence delayed rewards are used to adjust its policy in order to gain an optimized global decision Deep RL model which learns to directly optimize the overall evaluation metrics works much better than models which learn with loss functions that just evaluate a particular single decision By this property RL has been successfully used in many NLP tasks such as information retrieval BIBREF3 dialogue system BIBREF4 and relation classification BIBREF5 etc To the best of our knowledge we are the first to design a RL model for global entity linking And in this paper our RL model is able to produce more accurate results by exploring the longterm influence of independent decisions and encoding the entities disambiguated in previous states In summary the main contributions of our paper mainly include following aspects The overall structure of our RLEL model is shown in Figure 2 The proposed framework mainly includes three parts Local Encoder which encodes local features of mentions and their candidate entities Global Encoder which encodes the global coherence of mentions in a sequence manner and Entity Selector which selects an entity from the candidate set As the Entity Selector and the Global Encoder are correlated mutually we train them jointly Moreover the Local Encoder as the basis of the entire framework will be independently trained before the joint training process starts In the following we will introduce the technical details of these modules Before introducing our model we firstly define the entity linking task Formally given a document D with a set of mentions M lbrace m1 m2mkrbrace each mention mt in D has a set of candidate entities Cmt lbrace et1 et2 etnrbrace The task of entity linking is to map each mention mt to its corresponding correct target entity et or return NIL if there is not correct target entity in the knowledge base Before selecting the target entity we need to generate a certain number of candidate entities for model selection Inspired by the previous works BIBREF6 BIBREF7 BIBREF8 we use the mentions redirect and disambiguation pages in Wikipedia to generate candidate sets For those mentions without corresponding disambiguation pages we use its ngrams to retrieve the candidates BIBREF8 In most cases the disambiguation page contains many entities sometimes even hundreds To optimize the models memory and avoid unnecessary calculations the candidate sets need to be filtered BIBREF9 BIBREF0 BIBREF1 Here we utilize the XGBoost model BIBREF10 as an entity ranker to reduce the size of candidate set The features used in XGBoost can be divided into two aspects the one is string similarity like the JaroWinkler distance between the entity title and the mention the other is semantic similarity like the cosine distance between 
the mention context representation and the entity embedding Furthermore we also use the statistical features based on the pageview and hyperlinks in Wikipedia Empirically we get the pageview of the entity from the Wikipedia Tool Labs which counts the number of visits on each entity page in Wikipedia After ranking the candidate sets based on the above features we take the top k scored entities as final candidate set for each mention Given a mention mt and the corresponding candidate set lbrace et1 et2 etkrbrace we aim to get their local representation based on the mention context and the candidate entity description For each mention we firstly select its n surrounding words and represent them as word embedding using a pretrained lookup table BIBREF11 Then we use Long ShortTerm Memory LSTM networks to encode the contextual word sequence lbrace wc1 wc2 wcnrbrace as a fixedsize vector Vmt The description of entity is encoded as Deti in the same way Apart from the description of entity there are many other valuable information in the knowledge base To make full use of these information many researchers trained entity embeddings by combining the description category and relationship of entities As shown in BIBREF0 entity embeddings compress the semantic meaning of entities and drastically reduce the need for manually designed features or cooccurrence statistics Therefore we use the pretrained entity embedding Eeti and concatenate it with the description vector Deti to enrich the entity representation The concatenation result is denoted by Veti After getting Veti we concatenate it with Vmt and then pass the concatenation result to a multilayer perceptron MLP The MLP outputs a scalar to represent the local similarity between the mention mt and the candidate entity eti The local similarity is calculated by the following equations Psi mt eti MLPVmtoplus Veti Eq 9 Where oplus indicates vector concatenation With the purpose of distinguishing the correct target entity and wrong candidate entities when training the local encoder model we utilize a hinge loss that ranks ground truth higher than others The rank loss function is defined as follows Llocal max0 gamma Psi mt etPsi mt et Eq 10 When optimizing the objective function we minimize the rank loss similar to BIBREF0 BIBREF1 In this ranking model a training instance is constructed by pairing a positive target entity et with a negative entity et Where gamma 0 is a margin parameter and our purpose is to make the score of the positive target entity et is at least a margin gamma higher than that of negative candidate entity et With the local encoder we obtain the representation of mention context and candidate entities which will be used as the input into the global encoder and entity selector In addition the similarity scores calculated by MLP will be utilized for ranking mentions in the global encoder In the global encoder module we aim to enforce the topical coherence among the mentions and their target entities So we use an LSTM network which is capable of maintaining the longterm memory to encode the ranked mention sequence What we need to emphasize is that our global encoder just encode the mentions that have been disambiguated by the entity selector which is denoted as Vat As mentioned above the mentions should be sorted according to their contextual information and topical coherence So we firstly divide the adjacent mentions into a segment by the order they appear in the document based on the observation that the topical consistency attenuates 
along with the distance between the mentions Then we sort mentions in a segment based on the local similarity and place the mention that has a higher similarity value in the front of the sequence In Equation 1 we define the local similarity of mi and its corresponding candidate entity eti On this basis we define Psi maxmi eia as the the maximum local similarity between the mi and its candidate set Cmi lbrace ei1 ei2 einrbrace We use Psi maxmi eia as criterion when sorting mentions For instance if Psi maxmi eia Psi maxmj ejb then we place mi before mj Under this circumstances the mentions in the front positions may not be able to make better use of global consistency but their target entities have a high degree of similarity to the context words which allows them to be disambiguated without relying on additional information In the end previous selected target entity information is encoded by global encoder and the encoding result will be served as input to the entity selector Before using entity selector to choose target entities we pretrained the global LSTM network During the training process we input not only positive samples but also negative ones to the LSTM By doing this we can enhance the robustness of the network In the global encoder module we adopt the following cross entropy loss function to train the model Lglobal frac1nsum xleftyln yprime 1yln 1yprime right Eq 12 Where yin lbrace 01rbrace represents the label of the candidate entity If the candidate entity is correct y1 otherwise y0 yprime in 01 indicates the output of our model After pretraining the global encoder we start using the entity selector to choose the target entity for each mention and encode these selections In the entity selector module we choose the target entity from candidate set based on the results of local and global encoder In the process of sequence disambiguation each selection result will have an impact on subsequent decisions Therefore we transform the choice of the target entity into a reinforcement learning problem and view the entity selector as an agent In particular the agent is designed as a policy network which can learn a stochastic policy and prevents the agent from getting stuck at an intermediate state BIBREF12 Under the guidance of policy the agent can decide which action choosing the target entity from the candidate setshould be taken at each state and receive a delay reward when all the selections are made In the following part we first describe the state action and reward Then we detail how to select target entity via a policy network The result of entity selection is based on the current state information For time t the state vector St is generated as follows St Vmitoplus Veitoplus Vfeaturetoplus Vet1 Eq 15 Where oplus indicates vector concatenation The Vmit and Veit respectively denote the vector of mi and ei at time t For each mention there are multiple candidate entities correspond to it With the purpose of comparing the semantic relevance between the mention and each candidate entity at the same time we copy multiple copies of the mention vector Formally we extend Vmit in mathbb R1times n to Vmitprime in mathbb Rktimes n and then combine it with Veit in mathbb Rktimes n Since Vmit and Vmit0 are mainly to represent semantic information we add feature vector Vmit1 to enrich lexical and statistical features These features mainly include the popularity of the entity the edit distance between the entity description and the mention context the number of identical words in the entity 
description and the mention context etc After getting these feature values we combine them into a vector and add it to the current state In addition the global vector Vmit2 is also added to Vmit3 As mentioned in global encoder module Vmit4 is the output of global LSTM network at time Vmit5 which encodes the mention context and target entity information from Vmit6 to Vmit7 Thus the state Vmit8 contains current information and previous decisions while also covering the semantic representations and a variety of statistical features Next the concatenated vector will be fed into the policy network to generate action According to the status at each time step we take corresponding action Specifically we define the action at time step t is to select the target entity et for mt The size of action space is the number of candidate entities for each mention where ai in lbrace 012krbrace indicates the position of the selected entity in the candidate entity list Clearly each action is a direct indicator of target entity selection in our model After completing all the actions in the sequence we will get a delayed reward The agent takes the reward value as the feedback of its action and learns the policy based on it Since current selection result has a longterm impact on subsequent decisions we dont give an immediate reward when taking an action Instead a delay reward is given by follows which can reflect whether the action improves the overall performance or not Rat patsum jtTpaj 1 patsum jtTpaj t T Eq 16 where patin lbrace 01rbrace indicates whether the current action is correct or not When the action is correct pat1 otherwise pat0 Hence sum jtTpaj and sum jtTpaj t T respectively represent the number of correct and wrong actions from time t to the end of episode Based on the above definition our delayed reward can be used to guide the learning of the policy for entity linking After defining the state action and reward our main challenge becomes to choose an action from the action space To solve this problem we sample the value of each action by a policy network pi Theta as The structure of the policy network is shown in Figure 3 The input of the network is the current state including the mention context representation candidate entity representation feature representation and encoding of the previous decisions We concatenate these representations and fed them into a multilayer perceptron for each hidden layer we generate the output by hiSt ReluWihi1St bi Eq 17 Where Wi and bi are the parameters of the i th hidden layer through the relu activation function we get the hiSt After getting the output of the last hidden layer we feed it into a softmax layer which generates the probability distribution of actions The probability distribution is generated as follows pi as SoftmaxW hlS b Eq 18 Where the W and b are the parameters of the softmax layer For each mention in the sequence we will take action to select the target entity from its candidate set After completing all decisions in the episode each action will get an expected reward and our goal is to maximize the expected total rewards Formally the objective function is defined as beginsplit
J(\Theta) = \mathbb{E}_{(s_t, a_t) \sim P_\Theta(s_t, a_t)}\big[R(s_1 a_1 \ldots s_L a_L)\big] \\
= \sum_t \sum_a \pi_\Theta(a \mid s) R(a_t)
endsplit Eq 19 Where PTheta st at is the state transfer function pi Theta as indicates the probability of taking action a under the state s Rat is the expected reward of action a at time step t According to REINFORCE policy gradient algorithm BIBREF13 we update the policy gradient by the way of equation 9 Theta leftarrow Theta alpha sum tRatnabla Theta log pi Theta as Eq 20 As the global encoder and the entity selector are correlated mutually we train them jointly after pretraining the two networks The details of the joint learning are presented in Algorithm 1 t The Policy Learning for Entity Selector 1 Training data include multiple documents D lbrace D1 D2 DNrbrace The target entity for mentions Gamma lbrace T1 T2 TNrbrace Initialize the policy network parameter Theta global LSTM network parameter Phi Dk in D Generate the candidate set for each mention Divide the mentions in Dk into multiple sequences S lbrace S1 S2 SNrbrace Sk in S Rank the mentions M lbrace m1 m2 mnrbrace in Sk based on the local similarity Phi 0 in Phi 1 Sample the target entity Phi 2 for Phi 3 with Phi 4 Input the Phi 5 and Phi 6 to global LSTM network Phi 7 End of sampling update parameters Compute delayed reward Phi 8 for each action Update the parameter Phi 9 of policy network Theta leftarrow Theta alpha sum tRatnabla Theta log pi Theta as Update the parameter Phi in the global LSTM network In order to evaluate the effectiveness of our method we train the RLEL model and validate it on a series of popular datasets that are also used by BIBREF0 BIBREF1 To avoid overfitting with one dataset we use both AIDATrain and Wikipedia data in the training set Furthermore we compare the RLEL with some baseline methods where our model achieves the stateoftheart results We implement our models in Tensorflow and run experiments on 4 Tesla V100 GPU We conduct experiments on several different types of public datasets including news and encyclopedia corpus The training set is AIDATrain and Wikipedia datasets where AIDATrain contains 18448 mentions and Wikipedia contains 25995 mentions In order to compare with the previous methods we evaluate our model on AIDAB and other datasets These datasets are wellknown and have been used for the evaluation of most entity linking systems The statistics of the datasets are shown in Table 1 AIDACoNLL BIBREF14 is annotated on Reuters news articles It contains training AIDATrain validation AIDAA and test AIDAB sets ACE2004 BIBREF15 is a subset of the ACE2004 Coreference documents MSNBC BIBREF16 contains top two stories in the ten news categoriesPolitics Business Sports etc AQUAINT BIBREF17 is a news corpus from the Xinhua News Service the New York Times and the Associated Press WNEDCWEB BIBREF18 is randomly picked from the FACC1 annotated ClueWeb 2012 dataset WNEDWIKI BIBREF18 is crawled from Wikipedia pages with its original hyperlink annotation OURSELFWIKI is crawled by ourselves from Wikipedia pages During the training of our RLEL model we select top K candidate entities for each mention to optimize the memory and run time In the top K candidate list we define the recall of correct target entity is Rt According to our statistics when K is set to 1 Rt is 0853 when K is 5 Rt is 0977 when K increases to 10 Rt is 0993 Empirically we choose top 5 candidate entities as the input of our RLEL model For the entity description there are lots of redundant information in the wikipedia page to reduce the impact of noise data we use TextRank algorithm BIBREF19 to select 15 keywords as description of the entity 
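To make the training loop above concrete, here is a hedged PyTorch-style sketch of the delayed reward (one plausible reading of Eq. 16) and the REINFORCE update of Eq. 20. The policy interface and tensor shapes are illustrative assumptions, not the authors' TensorFlow implementation.

```python
# Illustrative sketch of the delayed reward and the policy-gradient update
# used by the entity selector. The policy is assumed to map a state tensor to
# unnormalised scores over the k candidate entities.
import torch

def delayed_rewards(correct_flags):
    """correct_flags[t] = 1 if the action at step t picked the gold entity.
    One plausible reading of Eq. 16: a correct action is rewarded by the number
    of correct decisions from t to the end of the episode, a wrong one is
    penalised by the number of wrong decisions from t to the end."""
    T = len(correct_flags)
    rewards = []
    for t in range(T):
        n_right = sum(correct_flags[t:])
        n_wrong = (T - t) - n_right
        rewards.append(float(n_right) if correct_flags[t] else -float(n_wrong))
    return rewards

def reinforce_update(policy, optimizer, episode):
    """episode: list of (state, action, reward) triples for one mention sequence."""
    loss = torch.zeros(())
    for state, action, reward in episode:
        log_probs = torch.log_softmax(policy(state), dim=-1)   # log pi_Theta(a|s)
        loss = loss - reward * log_probs[action]               # -R(a_t) log pi(a_t|s_t)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In use, one episode corresponds to one ranked mention sequence, and the rewards are only computed once all entities in the sequence have been selected, matching the delayed-reward formulation above.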
Simultaneously we choose 15 words around mention as its context In the global LSTM network when the number of mentions does not reach the set length we adopt the mention padding strategy In short we copy the last mention in the sequence until the number of mentions reaches the set length We set the dimensions of word embedding and entity embedding to 300 where the word embedding and entity embedding are released by BIBREF20 and BIBREF0 respectively For parameters of the local LSTM network the number of LSTM cell units is set to 512 the batch size is 64 and the rank margin gamma is 01 Similarly in global LSTM network the number of LSTM cell units is 700 and the batch size is 16 In the above two LSTM networks the learning rate is set to 1e3 the probability of dropout is set to 08 and the Adam is utilized as optimizer In addition we set the number of MLP layers to 4 and extend the priori feature dimension to 50 in the policy network We compare RLEL with a series of EL systems which report stateoftheart results on the test datasets There are various methods including classification model BIBREF17 rank model BIBREF21 BIBREF15 and probability graph model BIBREF18 BIBREF14 BIBREF22 BIBREF0 BIBREF1 Except that Cheng et al BIBREF23 formulate their global decision problem as an Integer Linear Program ILP which incorporates the entityrelation inference Globerson et al BIBREF24 introduce a multifocal attention model which allows each candidate to focus on limited mentions Yamada et al BIBREF25 propose a word and entity embedding model specifically designed for EL We use the standard Accuracy Precision Recall and F1 at mention level Micro as the evaluation metrics Accuracy fracM cap MM cup M Eq 31 Precision fracM cap MM Eq 32 where M is the golden standard set of the linked name mentions M is the set of linked name mentions outputted by an EL method Same as previous work we use inKB accuracy and micro F1 to evaluate our method We first test the model on the AIDAB dataset From Table 2 we can observe that our model achieves the best result Previous best results on this dataset are generated by BIBREF0 BIBREF1 which both built CRF models They calculate the pairwise scores between all candidate entities Differently our model only considers the consistency of the target entities and ignores the relationship between incorrect candidates The experimental results show that our model can reduce the impact of noise data and improve the accuracy of disambiguation Apart from experimenting on AIDAB we also conduct experiments on several different datasets to verify the generalization performance of our model From Table 3 we can see that RLEL has achieved relatively good performances on ACE2004 CWEB and WIKI At the same time previous models BIBREF0 BIBREF1 BIBREF23 achieve better performances on the news datasets such as MSNBC and AQUINT but their results on encyclopedia datasets such as WIKI are relatively poor To avoid overfitting with some datasets and improve the robustness of our model we not only use AIDATrain but also add Wikipedia data to the training set In the end our model achieve the best overall performance For most existing EL systems entities with lower frequency are difficult to disambiguate To gain further insight we analyze the accuracy of the AIDAB dataset for situations where gold entities have low popularity We divide the gold entities according to their pageviews in wikipedia the statistical disambiguation results are shown in Table 4 Since some pageviews can not be obtained we only count part 
of gold entities The result indicates that our model is still able to work well for lowfrequency entities But for mediumfrequency gold entities our model doesnt work well enough The most important reason is that other candidate entities corresponding to these mediumfrequency gold entities have higher pageviews and local similarities which makes the model difficult to distinguish To demonstrate the effects of RLEL we evaluate our model under different conditions First we evaluate the effect of sequence length on global decision making Second we assess whether sorting the mentions have a positive effect on the results Third we analysis the results of not adding globally encoding during entity selection Last we compare our RL selection strategy with the greedy choice A document may contain multiple topics so we do not add all mentions to a single sequence In practice we add some adjacent mentions to the sequence and use reinforcement learning to select entities from beginning to end To analysis the impact of the number of mentions on joint disambiguation we experiment with sequences on different lengths The results on AIDAB are shown in Figure 4 We can see that when the sequence is too short or too long the disambiguation results are both very poor When the sequence length is less than 3 delay reward cant work in reinforcement learning and when the sequence length reaches 5 or more noise data may be added Finally we choose the 4 adjacent mentions to form a sequence In this section we test whether ranking mentions is helpful for entity selections At first we directly input them into the global encoder by the order they appear in the text We record the disambiguation results and compare them with the method which adopts ranking mentions As shown in Figure 5a the model with ranking mentions has achieved better performances on most of datasets indicating that it is effective to place the mention that with a higher local similarity in front of the sequence It is worth noting that the effect of ranking mentions is not obvious on the MSNBC dataset the reason is that most of mentions in MSNBC have similar local similarities the order of disambiguation has little effect on the final result Most of previous methods mainly use the similarities between entities to correlate each other but our model associates them by encoding the selected entity information To assess whether the global encoding contributes to disambiguation rather than add noise we compare the performance with and without adding the global information When the global encoding is not added the current state only contains the mention context representation candidate entity representation and feature representation notably the selected target entity information is not taken into account From the results in Figure 5b we can see that the model with global encoding achieves an improvement of 4 accuracy over the method that without global encoding To illustrate the necessity for adopting the reinforcement learning for entity selection we compare two entity selection strategies like BIBREF5 Specifically we perform entity selection respectively with reinforcement learning and greedy choice The greedy choice is to select the entity with largest local similarity from candidate set But the reinforcement learning selection is guided by delay reward which has a global perspective In the comparative experiment we keep the other conditions consistent just replace the RL selection with a greedy choice Based on the results in Figure 5c we can draw a 
conclusion that our entity selector perform much better than greedy strategies Table 5 shows two entity selection examples by our RLEL model For multiple mentions appearing in the document we first sort them according to their local similarities and select the target entities in order by the reinforcement learning model From the results of sorting and disambiguation we can see that our model is able to utilize the topical consistency between mentions and make full use of the selected target entity information The related work can be roughly divided into two groups entity linking and reinforcement learning Entity linking falls broadly into two major approaches local and global disambiguation Early studies use local models to resolve mentions independently they usually disambiguate mentions based on lexical matching between the mentions surrounding words and the entity profile in the reference KB Various methods have been proposed to model mentions local context ranging from binary classification BIBREF17 to rank models BIBREF26 BIBREF27 In these methods a large number of handdesigned features are applied For some marginal mentions that are difficult to extract features researchers also exploit the data retrieved by search engines BIBREF28 BIBREF29 or Wikipedia sentences BIBREF30 However the feature engineering and search engine methods are both timeconsuming and laborious Recently with the popularity of deep learning models representation learning is utilized to automatically find semantic features BIBREF31 BIBREF32 The learned entity representations which by jointly modeling textual contexts and knowledge base are effective in combining multiple sources of information To make full use of the information contained in representations we also utilize the pretrained entity embeddings in our model In recent years with the assumption that the target entities of all mentions in a document shall be related many novel global models for joint linking are proposed Assuming the topical coherence among mentions authors in BIBREF33 BIBREF34 construct factor graph models which represent the mention and candidate entities as variable nodes and exploit factor nodes to denote a series of features Two recent studies BIBREF0 BIBREF1 use fullyconnected pairwise Conditional Random FieldCRF model and exploit loopy belief propagation to estimate the maxmarginal probability Moreover PageRank or Random Walk BIBREF35 BIBREF18 BIBREF7 are utilized to select the target entity for each mention The above probabilistic models usually need to predefine a lot of features and are difficult to calculate the maxmarginal probability as the number of nodes increases In order to automatically learn features from the data Cao et al BIBREF9 applies Graph Convolutional Network to flexibly encode entity graphs However the graphbased methods are computationally expensive because there are lots of candidate entity nodes in the graph To reduce the calculation between candidate entity pairs Globerson et al BIBREF24 introduce a coherence model with an attention mechanism where each mention only focus on a fixed number of mentions Unfortunately choosing the number of attention mentions is not easy in practice Two recent studies BIBREF8 BIBREF36 finish linking all mentions by scanning the pairs of mentions at most once they assume each mention only needs to be consistent with one another mention in the document The limitation of their method is that the consistency information is too sparse resulting in low confidence Similar to us Guo et 
al BIBREF18 also sort mentions according to the difficulty of disambiguation but they did not make full use of the information of previously referred entities for the subsequent entity disambiguation Nguyen et al BIBREF2 use the sequence model but they simply encode the results of the greedy choice and measure the similarities between the global encoding and the candidate entity representations Their model does not consider the longterm impact of current decisions on subsequent choices nor does they add the selected target entity information to the current state to help disambiguation In the last few years reinforcement learning has emerged as a powerful tool for solving complex sequential decisionmaking problems It is well known for its great success in the game field such as Go BIBREF37 and Atari games BIBREF38 Recently reinforcement learning has also been successfully applied to many natural language processing tasks and achieved good performance BIBREF12 BIBREF39 BIBREF5 Feng et al BIBREF5 used reinforcement learning for relation classification task by filtering out the noisy data from the sentence bag and they achieved huge improvements compared with traditional classifiers Zhang et al BIBREF40 applied the reinforcement learning on sentence representation by automatically discovering taskrelevant structures To automatic taxonomy induction from a set of terms Han et al BIBREF41 designed an endtoend reinforcement learning model to determine which term to select and where to place it on the taxonomy which effectively reduced the error propagation between two phases Inspired by the above works we also add reinforcement learning to our framework In this paper we consider entity linking as a sequence decision problem and present a reinforcement learning based model Our model learns the policy on selecting target entities in a sequential manner and makes decisions based on current state and previous ones By utilizing the information of previously referred entities we can take advantage of global consistency to disambiguate mentions For each selection result in the current state it also has a longterm impact on subsequent decisions which allows learned policy strategy has a global view In experiments we evaluate our method on AIDAB and other wellknown datasets the results show that our system outperforms stateoftheart solutions In the future we would like to use reinforcement learning to detect mentions and determine which mention should be firstly disambiguated in the document This research is supported by the GS501100001809National Key Research and Development Program of China No GS5011000018092018YFB1004703 GS501100001809the Beijing Municipal Science and Technology Project under grant No GS501100001809 Z181100002718004 and GS501100001809the National Natural Science Foundation of China grantsNo GS50110000180961602466 </s>
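As an addendum to the model description in this paper, the following small sketch shows the mention-ordering heuristic and the construction of the selector state S_t. Function names, the feature vector, and the dimensionality handling are assumptions made for illustration rather than the authors' code.

```python
# Sketch of two pieces of the RLEL pipeline: ordering mentions within a
# segment by their maximum local similarity, and building the state
# S_t = V_m + V_e + V_feature + V_{e,t-1} that is fed to the policy network.
import torch

def order_mentions(segment, local_score):
    """segment: list of (mention, candidate_list); local_score(m, e) -> float.
    Mentions whose best candidate already fits the local context well are
    placed first, so later mentions can rely on previously selected entities."""
    return sorted(segment,
                  key=lambda mc: max(local_score(mc[0], e) for e in mc[1]),
                  reverse=True)

def build_state(v_mention, v_candidates, v_features, v_prev_entity):
    """v_mention: (d,), v_candidates: (k, d), v_features: (k, f), v_prev_entity: (g,)."""
    k = v_candidates.size(0)
    v_m = v_mention.unsqueeze(0).expand(k, -1)        # copy the mention vector k times
    v_p = v_prev_entity.unsqueeze(0).expand(k, -1)    # global encoding of past choices
    return torch.cat([v_m, v_candidates, v_features, v_p], dim=-1)   # (k, state_dim)
```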
<s>[INST] Classification Betters Regression in Query-based Multi-document Summarisation Techniques for Question Answering: Macquarie University at BioASQ7b [/INST] The BioASQ Challenge includes a question answering task Phase B part B where the aim is to find the ideal answer that is an answer that would normally be given by a person BIBREF0 This is in contrast with most other question answering challenges where the aim is normally to give an exact answer usually a factbased answer or a list Given that the answer is based on an input that consists of a biomedical question and several relevant PubMed abstracts the task can be seen as an instance of querybased multidocument summarisation As in past participation BIBREF1 BIBREF2 we wanted to test the use of deep learning and reinforcement learning approaches for extractive summarisation In contrast with past years where the training procedure was based on a regression set up this year we experiment with various classification set ups The main contributions of this paper are We compare classification and regression approaches and show that classification produces better results than regression but the quality of the results depends on the approach followed to annotate the data labels We conduct correlation analysis between various ROUGE evaluation metrics and the human evaluations conducted at BioASQ and show that Precision and F1 correlate better than Recall Section SECREF2 briefly introduces some related work for context Section SECREF3 describes our classification and regression experiments Section SECREF4 details our experiments using deep learning architectures Section SECREF5 explains the reinforcement learning approaches Section SECREF6 shows the results of our correlation analysis between ROUGE scores and human annotations Section SECREF7 lists the specific runs submitted at BioASQ 7b Finally Section SECREF8 concludes the paper The BioASQ challenge has organised annual challenges on biomedical semantic indexing and question answering since 2013 BIBREF0 Every year there has been a task about semantic indexing task a and another about question answering task b and occasionally there have been additional tasks The tasks defined for 2019 are Large Scale Online Biomedical Semantic Indexing Biomedical Semantic QA involving Information Retrieval IR Question Answering QA and Summarisation Medical Semantic Indexing in Spanish BioASQ Task 7b consists of two phases Phase A provides a biomedical question as an input and participants are expected to find relevant concepts from designated terminologies and ontologies relevant articles from PubMed relevant snippets from the relevant articles and relevant RDF triples from designated ontologies Phase B provides a biomedical question and a list of relevant articles and snippets and participant systems are expected to return the exact answers and the ideal answers The training data is composed of the test data from all previous years and amounts to 2747 samples There has been considerable research on the use of machine learning approaches for tasks related to text summarisation especially on singledocument summarisation Abstractive approaches normally use an encoderdecoder architecture and variants of this architecture incorporate attention BIBREF3 and pointergenerator BIBREF4 Recent approaches leveraged the use of pretrained models BIBREF5 Recent extractive approaches to summarisation incorporate recurrent neural networks that model sequences of sentence extractions BIBREF6 and may incorporate an 
abstractive component and reinforcement learning during the training stage BIBREF7 But relatively few approaches have been proposed for querybased multidocument summarisation Table TABREF8 summarises the approaches presented in the proceedings of the 2018 BioASQ challenge Our past participation in BioASQ BIBREF1 BIBREF2 and this paper focus on extractive approaches to summarisation Our decision to focus on extractive approaches is based on the observation that a relatively large number of sentences from the input snippets has very high ROUGE scores thus suggesting that human annotators had a general tendency to copy text from the input to generate the target summaries BIBREF1 Our past participating systems used regression approaches using the following framework Train the regressor to predict the ROUGESU4 F1 score of the input sentence Produce a summary by selecting the top n input sentences A novelty in the current participation is the introduction of classification approaches using the following framework Train the classifier to predict the target label summary or not summary of the input sentence Produce a summary by selecting all sentences predicted as summary If the total number of sentences selected is less than n select n sentences with higher probability of label summary Introducing a classifier makes labelling the training data not trivial since the target summaries are humangenerated and they do not have a perfect mapping to the input sentences In addition some samples have multiple reference summaries BIBREF11 showed that different data labelling approaches influence the quality of the final summary and some labelling approaches may lead to better results than using regression In this paper we experiment with the following labelling approaches Label as summary all sentences from the input text that have a ROUGE score above a threshold t Label as summary the m input text sentences with highest ROUGE score As in BIBREF11 The ROUGE score of an input sentence was the ROUGESU4 F1 score of the sentence against the set of reference summaries We conducted crossvalidation experiments using various values of t and m Table TABREF26 shows the results for the best values of t and m obtained The regressor and classifier used Support Vector Regression SVR and Support Vector Classification SVC respectively To enable a fair comparison we used the same input features in all systems These input features combine information from the question and the input sentence and are shown in Fig FIGREF16 The features are based on BIBREF12 and are the same as in BIBREF1 plus the addition of the position of the input snippet The best SVC and SVR parameters were determined by grid search Preliminary experiments showed a relatively high number of cases where the classifier did not classify any of the input sentences as summary To solve this problem and as mentioned above the summariser used in Table TABREF26 introduces a backoff step that extracts the n sentences with highest predicted values when the summary has less than n sentences The value of n is as reported in our prior work and shown in Table TABREF25 The results confirm BIBREF11s finding that classification outperforms regression However the actual choice of optimal labelling scheme was different whereas in BIBREF11 the optimal labelling was based on a labelling threshold of 01 our experiments show a better result when using the top 5 sentences as the target summary The reason for this difference might be the fact that BIBREF11 used all sentences from 
the abstracts of the relevant PubMed articles whereas we use only the snippets as the input to our summariser Consequently the number of input sentences is now much smaller We therefore report the results of using the labelling schema of top 5 snippets in all subsequent classifierbased experiments of this paper barchartfillblack20drawblack errorbarvery thindrawblack75 sscalevery thindrawblack75 Based on the findings of Section SECREF3 we apply minimal changes to the deep learning regression models of BIBREF2 to convert them to classification models In particular we add a sigmoid activation to the final layer and use crossentropy as the loss function The complete architecture is shown in Fig FIGREF28 The bottom section of Table TABREF26 shows the results of several variants of the neural architecture The table includes a neural regressor NNR and a neural classifier NNC The neural classifier is trained in two set ups NNC top 5 uses classification labels as described in Section SECREF3 and NNC SU4 F1 uses the regression labels that is the ROUGESU4 F1 scores of each sentence Of interest is the fact that NNC SU4 F1 outperforms the neural regressor We have not explored this further and we presume that the relatively good results are due to the fact that ROUGE values range between 0 and 1 which matches the full range of probability values that can be returned by the sigmoid activation of the classifier final layer Table TABREF26 also shows the standard deviation across the crossvalidation folds Whereas this standard deviation is fairly large compared with the differences in results in general the results are compatible with the top part of the table and prior work suggesting that classificationbased approaches improve over regressionbased approaches We also experiment with the use of reinforcement learning techniques Again these experiments are based on BIBREF2 who uses REINFORCE to train a global policy The policy predictor uses a simple feedforward network with a hidden layer The results reported by BIBREF2 used ROUGE Recall and indicated no improvement with respect to deep learning architectures Human evaluation results are preferable over ROUGE but these were made available after the publication of the paper When comparing the ROUGE and human evaluation results Table TABREF29 we observe an inversion of the results In particular the reinforcement learning approaches RL of BIBREF2 receive good human evaluation results and as a matter of fact they are the best of our runs in two of the batches In contrast the regression systems NNR fare relatively poorly Section SECREF6 expands on the comparison between the ROUGE and human evaluation scores Encouraged by the results of Table TABREF29 we decided to continue with our experiments with reinforcement learning We use the same features as in BIBREF2 namely the length in number of sentences of the summary generated so far plus the tfidf vectors of the following Candidate sentence Entire input to summarise Summary generated so far Candidate sentences that are yet to be processed and Question The reward used by REINFORCE is the ROUGE value of the summary generated by the system Since BIBREF2 observed a difference between the ROUGE values of the Python implementation of ROUGE and the original Perl version partly because the Python implementation does not include ROUGESU4 we compare the performance of our system when trained with each of them Table TABREF35 summarises some of our experiments We ran the version trained on Python ROUGE once and the version 
trained on Perl twice The two Perl runs have different results and one of them clearly outperforms the Python run However given the differences of results between the two Perl runs we advice to rerun the experiments multiple times and obtain the mean and standard deviation of the runs before concluding whether there is any statistical difference between the results But it seems that there may be an improvement of the final evaluation results when training on the Perl ROUGE values presumably because the final evaluation results are measured using the Perl implementation of ROUGE We have also tested the use of word embeddings instead of tfidf as input features to the policy model while keeping the same neural architecture for the policy one hidden layer using the same number of hidden nodes In particular we use the mean of word embeddings using 100 and 200 dimensions These word embeddings were pretrained using word2vec on PubMed documents provided by the organisers of BioASQ as we did for the architectures described in previous sections The results not shown in the paper indicated no major improvement and reruns of the experiments showed different results on different runs Consequently our submission to BioASQ included the original system using tfidf as input features in all batches but batch 2 as described in Section SECREF7 As mentioned in Section SECREF5 there appears to be a large discrepancy between ROUGE Recall and the human evaluations This section describes a correlation analysis between human and ROUGE evaluations using the runs of all participants to all previous BioASQ challenges that included human evaluations Phase B ideal answers The human evaluation results were scraped from the BioASQ Results page and the ROUGE results were kindly provided by the organisers We compute the correlation of each of the ROUGE metrics recall precision F1 for ROUGE2 and ROUGESU4 against the average of the human scores The correlation metrics are Pearson Kendall and a revised Kendall correlation explained below The Pearson correlation between two variables is computed as the covariance of the two variables divided by the product of their standard deviations This correlation is a good indication of a linear relation between the two variables but may not be very effective when there is nonlinear correlation The Spearman rank correlation and the Kendall rank correlation are two of the most popular among metrics that aim to detect nonlinear correlations The Spearman rank correlation between two variables can be computed as the Pearson correlation between the rank values of the two variables whereas the Kendall rank correlation measures the ordinal association between the two variables using Equation DISPLAYFORM36 It is useful to account for the fact that the results are from 28 independent sets 3 batches in BioASQ 1 and 5 batches each year between BioASQ 2 and BioASQ 6 We therefore also compute a revised Kendall rank correlation measure that only considers pairs of variable values within the same set The revised metric is computed using Equation DISPLAYFORM37 where S is the list of different sets Table TABREF38 shows the results of all correlation metrics Overall ROUGE2 and ROUGESU4 give similar correlation values but ROUGESU4 is marginally better Among precision recall and F1 both precision and F1 are similar but precision gives a better correlation Recall shows poor correlation and virtually no correlation when using the revised Kendall measure For reporting the evaluation of results it will be 
therefore more useful to use precision or F1 However given the small difference between precision and F1 and given that precision may favour short summaries when used as a function to optimise in a machine learning setting eg using reinforcement learning it may be best to use F1 as the metric to optimise Fig FIGREF40 shows the scatterplots of ROUGESU4 recall precision and F1 with respect to the average human evaluation We observe that the relation between ROUGE and the human evaluations is not linear and that Precision and F1 have a clear correlation Table TABREF41 shows the results and details of the runs submitted to BioASQ The table uses ROUGESU4 Recall since this is the metric available at the time of writing this paper However note that as explained in Section SECREF6 these results might differ from the final human evaluation results Therefore we do not comment on the results other than observing that the first n baseline produces the same results as the neural regressor As mentioned in Section SECREF3 the labels used for the classification experiments are the 5 sentences with highest ROUGESU4 F1 score Macquarie Universitys participation in BioASQ 7 focused on the task of generating the ideal answers The runs use querybased extractive techniques and we experiment with classification regression and reinforcement learning approaches At the time of writing there were no human evaluation results and based on ROUGEF1 scores under crossvalidation on the training data we observed that classification approaches outperform regression approaches We experimented with several approaches to label the individual sentences for the classifier and observed that the optimal labelling policy for this task differed from prior work We also observed poor correlation between ROUGERecall and human evaluation metrics and suggest to use alternative automatic evaluation metrics with better correlation such as ROUGEPrecision or ROUGEF1 Given the nature of precisionbased metrics which could bias the system towards returning short summaries ROUGEF1 is probably more appropriate when using at development time for example for the reward function used by a reinforcement learning system Reinforcement learning gives promising results especially in human evaluations made on the runs submitted to BioASQ 6b This year we introduced very small changes to the runs using reinforcement learning and will aim to explore more complex reinforcement learning strategies and more complex neural models in the policy and value estimators </s>
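To make the revised Kendall correlation described above concrete, the following Python sketch compares runs only when they belong to the same evaluation set (the same BioASQ batch). Since Equation DISPLAYFORM37 itself is not reproduced in the text, the handling of ties and the normalisation over concordant plus discordant pairs are assumptions rather than the paper's exact definition, and the function and variable names are ours.

from itertools import combinations

def revised_kendall(human_scores, rouge_scores, set_ids):
    # Group (human, ROUGE) score pairs by the evaluation set they come from,
    # so that runs are only ever compared within the same batch.
    by_set = {}
    for h, r, s in zip(human_scores, rouge_scores, set_ids):
        by_set.setdefault(s, []).append((h, r))
    concordant = discordant = 0
    for pairs in by_set.values():
        for (h1, r1), (h2, r2) in combinations(pairs, 2):
            sign = (h1 - h2) * (r1 - r2)
            if sign > 0:
                concordant += 1
            elif sign < 0:
                discordant += 1
    compared = concordant + discordant
    return (concordant - discordant) / compared if compared else 0.0

In use, human_scores, rouge_scores and set_ids would each hold one entry per submitted run, with set_ids identifying the batch that run belongs to.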
<s>[INST] Marrying Universal Dependencies and Universal Morphology [/INST] The two largest standardized crosslingual datasets for morphological annotation are provided by the Universal Dependencies BIBREF1 and Universal Morphology BIBREF2 BIBREF3 projects Each projects data are annotated according to its own crosslingual schema prescribing how features like gender or case should be marked The schemata capture largely similar information so one may want to leverage both UDs tokenlevel treebanks and UniMorphs typelevel lookup tables and unify the two resources This would permit a leveraging of both the tokenlevel UD treebanks and the typelevel UniMorph tables of paradigms Unfortunately neither resource perfectly realizes its schema On a datasetbydataset basis they incorporate annotator errors omissions and human decisions when the schemata are underspecified one such example is in figdisagreement A datasetbydataset problem demands a datasetbydataset solution our task is not to translate a schema but to translate a resource Starting from the idealized schema we create a rulebased tool for converting UDschema annotations to UniMorph annotations incorporating languagespecific postedits that both correct infelicities and also increase harmony between the datasets themselves rather than the schemata We apply this conversion to the 31 languages with both UD and UniMorph data and we report our methods recall showing an improvement over the strategy which just maps corresponding schematic features to each other Further we show similar downstream performance for each annotation scheme in the task of morphological tagging This tool enables a synergistic use of UniMorph and Universal Dependencies as well as teasing out the annotation discrepancies within and across projects When one dataset disobeys its schema or disagrees with a related language the flaws may not be noticed except by such a methodological dive into the resources When the maintainers of the resources ameliorate these flaws the resources move closer to the goal of a universal crosslingual inventory of features for morphological annotation The contributions of this work are Morphological inflection is the act of altering the base form of a word the lemma represented in fixedwidth type to encode morphosyntactic features As an example from English prove takes on the form proved to indicate that the action occurred in the past We will represent all surface forms in quotation marks The process occurs in the majority of the worlds widelyspoken languages typically through meaningful affixes The breadth of forms created by inflection creates a challenge of data sparsity for natural language processing The likelihood of observing a particular word form diminishes A classic result in psycholinguistics BIBREF4 shows that inflectional morphology is a fully productive process Indeed it cannot be that humans simply have the equivalent of a lookup table where they store the inflected forms for retrieval as the syntactic context requires Instead there needs to be a mental process that can generate properly inflected words on demand BIBREF4 showed this insightfully through the wugtest an experiment where she forced participants to correctly inflect outofvocabulary lemmata such as the novel noun wug Certain features of a word do not vary depending on its context In German or Spanish where nouns are gendered the word for onion will always be grammatically feminine Thus to prepare for later discussion we divide the morphological features of a word into two 
categories the modifiable inflectional features and the fixed lexical features A part of speech POS is a coarse syntactic category like verb that begets a words particular menu of lexical and inflectional features In English verbs express no gender and adjectives do not reflect person or number The part of speech dictates a set of inflectional slots to be filled by the surface forms Completing these slots for a given lemma and part of speech gives a paradigm a mapping from slots to surface forms Regular English verbs have five slots in their paradigm BIBREF5 which we illustrate for the verb prove using simple labels for the forms in tabptb A morphosyntactic schema prescribes how language can be annotatedgiving stricter categories than our simple labels for proveand can vary in the level of detail provided Part of speech tags are an example of a very coarse schema ignoring details of person gender and number A slightly finergrained schema for English is the Penn Treebank tagset BIBREF6 which includes signals for English morphology For instance its VBZ tag pertains to the specially inflected 3rdperson singular presenttense verb form eg proves in tabptb If the tag in a schema is detailed enough that it exactly specifies a slot in a paradigm it is called a morphosyntactic description MSD These descriptions require varying amounts of detail While the English verbal paradigm is small enough to fit on a page the verbal paradigm of the Northeast Caucasian language Archi can have over 1500000 slots BIBREF7 Unlike the Penn Treebank tags the UD and UniMorph schemata are crosslingual and include a fuller lexicon of attributevalue pairs such as Person 1 Each was built according to a different set of principles UDs schema is constructed bottomup adapting to include new features when theyre identified in languages UniMorph conversely is topdown A crosslingual survey of the literature of morphological phenomena guided its design UniMorph aims to be linguistically complete containing all known morphosyntactic attributes Both schemata share one longterm goal a total inventory for annotating the possible morphosyntactic features of a word The Universal Dependencies morphological schema comprises part of speech and 23 additional attributes also called features in UD annotating meaning or syntax as well as languagespecific attributes In order to ensure consistent annotation attributes are included into the general UD schema if they occur in several corpora Languagespecific attributes are used when only one corpus annotates for a specific feature The UD schema seeks to balance languagespecific and crosslingual concerns It annotates for both inflectional features such as case and lexical features such as gender Additionally the UD schema annotates for features which can be interpreted as derivational in some languages For example the Czech UD guidance uses a Coll value for the Number feature to denote mass nouns for example lidstvo humankind from the root lid people UD represents a confederation of datasets BIBREF8 annotated with dependency relationships which are not the focus of this work and morphosyntactic descriptions Each dataset is an annotated treebank making it a resource of tokenlevel annotations The schema is guided by these treebanks with feature names chosen for relevance to native speakers In secunimorph we will contrast this with UniMorphs treatment of morphosyntactic categories The UD datasets have been used in the CoNLL shared tasks BIBREF9 In the Universal Morphological Feature Schema BIBREF10 
there are at least 212 values spread across 23 attributes It identifies some attributes that UD excludes like information structure and deixis as well as providing more values for certain attributes like 23 different noun classes endemic to Bantu languages As it is a schema for marking morphology its part of speech attribute does not have POS values for punctuation symbols or miscellany Punct Sym and X in Universal Dependencies Like the UD schema the decomposition of a word into its lemma and MSD is directly comparable across languages Its features are informed by a distinction between universal categories which are widespread and psychologically real to speakers and comparative concepts only used by linguistic typologists to compare languages BIBREF11 Additionally it strives for identity of meaning across languages not simply similarity of terminology As a prime example it does not regularly label a dative case for nouns for reasons explained in depth by BIBREF11 The UniMorph resources for a language contain complete paradigms extracted from Wiktionary BIBREF12 BIBREF13 Word types are annotated to form a database mapping a lemmatag pair to a surface form The schema is explained in detail in BIBREF10 It has been used in the SIGMORPHON shared task BIBREF14 and the CoNLLSIGMORPHON shared tasks BIBREF15 BIBREF16 Several components of the UniMorph schema have been adopted by UD While the two schemata annotate different features their annotations often look largely similar Consider the attested annotation of the Spanish word mandaba Ihesheit commanded tabannotations shows that these annotations share many attributes Some conversions are straightforward VERB to V MoodInd to IND NumberSing to SG and Person3 to 3 One might also suggest mapping TenseImp to IPFV though this crosses semantic categories IPFV represents the imperfective aspect whereas TenseImp comes from imperfect the English name often given to Spanishs pasado continuo form The imperfect is a verb form which combines both past tense and imperfective aspect UniMorph chooses to split this into the atoms PST and IPFV while UD unifies them according to the familiar name of the tense Prima facie the alignment task may seem trivial But weve yet to explore the humans in the loop This conversion is a hard problem because were operating on idealized schemata Were actually annotating human decisionsand human mistakes If both schemata were perfectly applied their overlapping attributes could be mapped to each other simply in a crosslingual and totally general way Unfortunately the resources are imperfect realizations of their schemata The crosslingual crossresource and withinresource problems that well note mean that we need a tailormade solution for each language Showcasing their schemata the Universal Dependencies and UniMorph projects each present large annotated datasets UDs v21 release BIBREF1 has 102 treebanks in 60 languages The large resource constructed by independent parties evinces problems in the goal of a universal inventory of annotations Annotators may choose to omit certain values like the coerced gender of refrescante in figdisagreement and they may disagree on how a linguistic concept is encoded See eg BIBREF11 s BIBREF11 description of the dative case Additionally many of the treebanks were created by fully or semiautomatic conversion from treebanks with less comprehensive annotation schemata than UD BIBREF0 For instance the Spanish word vas you go is incorrectly labeled Gender FemNumber Pl because it ends in a character 
sequence which is common among feminine plural nouns Nevertheless the part of speech field for vas is correct UniMorphs development is more centralized and pipelined Inflectional paradigms are scraped from Wiktionary annotators map positions in the scraped data to MSDs and the mapping is automatically applied to all of the scraped paradigms Because annotators handle languages they are familiar with or related ones realization of the schema is also done on a languagebylanguage basis Further the scraping process does not capture lexical aspects that are not inflected like noun gender in many languages The schema permits inclusion of these details their absence is an artifact of the data collection process Finally UniMorph records only exist for nouns verbs and adjectives though the schema is broader than these categories For these reasons we treat the corpora as imperfect realizations of the schemata Moreover we contend that ambiguity in the schemata leave the door open to allow for such imperfections With no strict guidance its natural that annotators would take different paths Nevertheless modulo annotator disagreement we assume that within a particular corpus one word form will always be consistently annotated Three categories of annotation difficulty are missing values languagespecific attributes and multiword expressions In our work the goal is not simply to translate one schema into the other but to translate one resource the imperfect manifestation of the schema to match the other The differences between the schemata and discrepancies in annotation mean that the transformation of annotations from one schema to the other is not straightforward Two naive options for the conversion are a lookup table of MSDs and a lookup table of the individual attributevalue pairs which comprise the MSDs The former is untenable the table of all UD feature combinations including null features excluding languagespecific attributes would have 2445e17 entries Of course most combinations wont exist but this gives a sense of the tables scale Also it doesnt leverage the factorial nature of the annotations constructing the table would require a massive duplication of effort On the other hand attributevalue lookup lacks the flexibility to show how a pair of values interacts Neither approach would handle language and annotatorspecific tendencies in the corpora Our approach to converting UD MSDs to UniMorph MSDs begins with the attributevalue lookup then amends it on a languagespecific basis Alterations informed by the MSD and the word form like insertion substitution and deletion increase the number of agreeing annotations They are critical for work that examines the MSD monolithically instead of featurebyfeature BIBREF25 BIBREF26 Without exact matches converting the individual tags becomes hollow Beginning our process we relied on documentation of the two schemata to create our initial languageagnostic mapping of individual values This mapping has 140 pairs in it Because the mapping was derived purely from the schemata it is a useful approximation of how well the schemata match up We note however that the mapping does not handle idiosyncrasies like the many uses of dative or features which are represented in UniMorph by argument templates possession and ergativeabsolutive argument marking The initial step of our conversion is using this mapping to populate a proposed UniMorph MSD As shown in secresults the initial proposal is often frustratingly deficient Thus we introduce the postedits To concoct these we 
looked into UniMorph corpora for these languages compared these to the conversion outputs and then sought to bring the conversion outputs closer to the annotations in the actual UniMorph corpora When a form and its lemma existed in both corpora we could directly inspect how the annotations differed Our process of iteratively refining the conversion implies a table which exactly maps any combination of UD MSD and its related values lemma form etc to a UniMorph MSD though we do not store the table explicitly Some conversion rules weve created must be applied before or after others These sequential dependencies provide conciseness Our postediting procedure operates on the initial MSD hypothesis as follows We evaluate our tool on two tasks To be clear our scope is limited to the schema conversion Future work will explore NLP tasks that exploit both the created tokenlevel UniMorph data and the existing typelevel UniMorph data We transform all UD data to the UniMorph We compare the simple lookupbased transformation to the one with linguistically informed postedits on all languages with both UD and UniMorph data We then evaluate the recall of MSDs without partial credit Because the UniMorph tables only possess annotations for verbs nouns adjectives or some combination we can only examine performance for these parts of speech We consider two words to be a match if their form and lemma are present in both resources Syncretism allows a single surface form to realize multiple MSDs Spanish mandaba can be first or thirdperson so we define success as the computed MSD matching any of the words UniMorph MSDs This gives rise to an equation for recall of the wordlemma pairs found in both resources how many of their UniMorphconverted MSDs are present in the UniMorph tables Our problem here is not a learning problem so the question is illposed There is no training set and the two resources for a given language make up a test set The quality of our modelthe conversion toolcomes from how well we encode prior knowledge about the relationship between the UD and UniMorph corpora If the UniMorphconverted treebanks perform differently on downstream tasks then they convey different information This signals a failure of the conversion process As a downstream task we choose morphological tagging a critical step to leveraging morphological information on new text We evaluate taggers trained on the transformed UD data choosing eight languages randomly from the intersection of UD and UniMorph resources We report the macroaveraged F1 score of attributevalue pairs on a heldout test set with official trainvalidationtest splits provided in the UD treebanks As a reference point we also report tagging accuracy on those languages untransformed data We use the stateoftheart morphological tagger of BIBREF0 It is a factored conditional random field with potentials for each attribute attribute pair and attribute transition The potentials are computed by neural networks predicting the values of each attribute jointly but not monolithically Inference with the potentials is performed approximately by loopy belief propagation We use the authors hyperparameters We note a minor implementation detail for the sake of reproducibility The tagger exploits explicit guidance about the attribute each value pertains to The UniMorph schemas values are globally unique but their attributes are not explicit For example the UniMorph Masc denotes a masculine gender We amend the code of BIBREF0 to incorporate attribute identifiers for each UniMorph value 
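The amendment just mentioned, giving every UniMorph value an explicit attribute identifier, can be pictured with the small Python sketch below. The value-to-attribute mapping shows only a handful of assumed example entries and the pipe-separated output is an illustration of the idea, not the actual change made to the tagger code of BIBREF0.

# Illustrative subset only; the full UniMorph schema defines many more values.
VALUE_TO_ATTRIBUTE = {
    "V": "POS", "N": "POS", "ADJ": "POS",
    "MASC": "Gender", "FEM": "Gender",
    "SG": "Number", "PL": "Number",
    "PST": "Tense", "PRS": "Tense",
    "IPFV": "Aspect", "PFV": "Aspect",
    "IND": "Mood",
    "1": "Person", "2": "Person", "3": "Person",
}

def add_attribute_identifiers(unimorph_msd):
    # Rewrite a semicolon-separated UniMorph MSD such as "V;IND;PST;1;SG;IPFV"
    # into explicit attribute=value pairs that a factored tagger can consume.
    pairs = []
    for value in unimorph_msd.split(";"):
        attribute = VALUE_TO_ATTRIBUTE.get(value, "UNKNOWN")
        pairs.append(attribute + "=" + value)
    return "|".join(pairs)

# add_attribute_identifiers("V;IND;PST;1;SG;IPFV")
# -> "POS=V|Mood=IND|Tense=PST|Person=1|Number=SG|Aspect=IPFV"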
We present the intrinsic tasks recall scores in tabrecall Bear in mind that due to annotation errors in the original corpora like the vas example from secresources the optimal score is not always 100 Some shortcomings of recall come from irremediable annotation discrepancies Largely we are hamstrung by differences in choice of attributes to annotate When one resource marks gender and the other marks case we cant infer the gender of the word purely from its surface form The resources themselves would need updating to encode the relevant morphosyntactic information Some languages had a very low number of overlapping forms and no tag matches or nearmatches between them Arabic Hindi Lithuanian Persian and Russian A full list of observed irremediable discrepancies is presented alongside the codebase There are three other transformations for which we note no improvement here Because of the problem in Basque argument encoding in the UniMorph datasetwhich only contains verbswe note no improvement in recall on Basque Irish also does not improve UD marks gender on nouns while UniMorph marks case Adjectives in UD are also underspecified The verbs though are already correct with the simple mapping Finally with Dutch the UD annotations are impoverished compared to the UniMorph annotations and missing attributes cannot be inferred without external knowledge For the extrinsic task the performance is reasonably similar whether UniMorph or UD see tabtagging A large fluctuation would suggest that the two annotations encode distinct information On the contrary the similarities suggest that the UniMorphmapped MSDs have similar content We recognize that in every case tagging F1 increasedalbeit by amounts as small as 016 points This is in part due to the information that is lost in the conversion UniMorphs schema does not indicate the type of pronoun demonstrative interrogative etc and when lexical information is not recorded in UniMorph we delete it from the MSD during transformation On the other hand UniMorphs atomic tags have more parts to guess but they are often related Eg Ipfv always entails Pst in Spanish Altogether these forces seem to have little impact on tagging performance The goal of a tagsettotagset mapping of morphological annotations is shared by the Interset project BIBREF28 Interset decodes features in the source corpus to a tag interlingua then encodes that into target corpus features The idea of an interlingua is drawn from machine translation where a prevailing early mindset was to convert to a universal representation then encode that representations semantics in the target language Our approach by contrast is a direct flight from the source to the target Because UniMorph corpora are noisy the encoding from the interlingua would have to be rewritten for each target Further decoding the UD MSD into the interlingua cannot leverage external information like the lemma and form The creators of HamleDT sought to harmonize dependency annotations among treebanks similar to our goal of harmonizing across resources BIBREF29 The treebanks they sought to harmonize used multiple diverse annotation schemes which the authors unified under a single scheme BIBREF30 present mappings into a coarse universal part of speech for 22 languages Working with POS tags rather than morphological tags which have far more dimensions their space of options to harmonize is much smaller than ours Our extrinsic evaluation is most in line with the paradigm of BIBREF31 and similar work therein who compare syntactic parser 
performance on UD treebanks annotated with two styles of dependency representation Our problem differs though in that the dependency representations express different relationships while our two schemata vastly overlap As our conversion is lossy we do not appraise the learnability of representations as they did In addition to using the number of extra rules as a proxy for harmony between resources one could perform crosslingual projection of morphological tags BIBREF32 BIBREF33 Our approach succeeds even without parallel corpora We created a tool for annotating Universal Dependencies CoNLLU files with UniMorph annotations Our tool is ready to use offtheshelf today requires no training and is deterministic While underspecification necessitates a lossy and imperfect conversion ours is interpretable Patterns of mistakes can be identified and ameliorated The tool allows a bridge between resources annotated in the Universal Dependencies and Universal Morphology UniMorph schemata As the Universal Dependencies project provides a set of treebanks with tokenlevel annotation while the UniMorph project releases typelevel annotated tables the newfound compatibility opens up new experiments A prime example of exploiting token and typelevel data is BIBREF34 That work presents a partofspeech POS dictionary built from Wiktionary where the POS tagger is also constrained to options available in their typelevel POS dictionary improving performance Our transformation means that datasets are prepared for similar experiments with morphological tagging It would also be reasonable to incorporate this tool as a subroutine to UDPipe BIBREF35 and Udapi BIBREF36 We leave open the task of converting in the opposite direction turning UniMorph MSDs into Universal Dependencies MSDs Because our conversion rules are interpretable we identify shortcomings in both resources using each as validation for the other We were able to find specific instances of incorrectly applied UniMorph annotation as well as specific instances of crosslingual inconsistency in both resources These findings will harden both resources and better align them with their goal of universal crosslingual annotation We thank Hajime Senuma and John SylakGlassman for early comments in devising the starting languageindependent mapping from Universal Dependencies to UniMorph </s>
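The recall measure used in the intrinsic evaluation above scores only word-lemma pairs found in both resources and, because of syncretism, counts a conversion as correct when it matches any of the UniMorph MSDs listed for that pair. The sketch below shows that computation; the data interfaces (a dictionary keyed by form and lemma, and a convert callable that also receives the lemma and form) are assumed for illustration and are not the released tool's actual API.

def conversion_recall(ud_entries, unimorph_lookup, convert):
    # ud_entries: iterable of (form, lemma, ud_msd) triples from a UD treebank.
    # unimorph_lookup: dict mapping (form, lemma) to the set of UniMorph MSDs
    #                  listed for that pair in the type-level tables.
    # convert: the UD-to-UniMorph conversion function being evaluated.
    matched = total = 0
    for form, lemma, ud_msd in ud_entries:
        gold_msds = unimorph_lookup.get((form, lemma))
        if not gold_msds:
            continue  # only pairs present in both resources are scored
        total += 1
        if convert(ud_msd, lemma=lemma, form=form) in gold_msds:
            matched += 1  # any of the syncretic MSDs counts as a match
    return matched / total if total else 0.0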
<s>[INST] Towards Multimodal Emotion Recognition in German Speech Events in Cars using Transfer Learning [/INST] Automatic emotion recognition is commonly understood as the task of assigning an emotion to a predefined instance for example an utterance as audio signal an image for instance with a depicted face or a textual unit eg a transcribed utterance a sentence or a Tweet The set of emotions is often following the original definition by Ekman Ekman1992 which includes anger fear disgust sadness joy and surprise or the extension by Plutchik Plutchik1980 who adds trust and anticipation Most work in emotion detection is limited to one modality Exceptions include Busso2004 and Sebe2005 who investigate multimodal approaches combining speech with facial information Emotion recognition in speech can utilize semantic features as well BIBREF0 Note that the term multimodal is also used beyond the combination of vision audio and text For example Soleymani2012 use it to refer to the combination of electroencephalogram pupillary response and gaze distance In this paper we deal with the specific situation of car environments as a testbed for multimodal emotion recognition This is an interesting environment since it is to some degree a controlled environment Dialogue partners are limited in movement the degrees of freedom for occurring events are limited and several sensors which are useful for emotion recognition are already integrated in this setting More specifically we focus on emotion recognition from speech events in a dialogue with a human partner and with an intelligent agent Also from the application point of view the domain is a relevant choice Past research has shown that emotional intelligence is beneficial for human computer interaction Properly processing emotions in interactions increases the engagement of users and can improve performance when a specific task is to be fulfilled BIBREF1 BIBREF2 BIBREF3 BIBREF4 This is mostly based on the aspect that machines communicating with humans appear to be more trustworthy when they show empathy and are perceived as being natural BIBREF3 BIBREF5 BIBREF4 Virtual agents play an increasingly important role in the automotive context and the speech modality is increasingly being used in cars due to its potential to limit distraction It has been shown that adapting the incar speech interaction system according to the drivers emotional state can help to enhance security performance as well as the overall driving experience BIBREF6 BIBREF7 With this paper we investigate how each of the three considered modalitites namely facial expressions utterances of a driver as an audio signal and transcribed text contributes to the task of emotion recognition in incar speech interactions We focus on the five emotions of joy insecurity annoyance relaxation and boredom since terms corresponding to socalled fundamental emotions like fear have been shown to be associated to too strong emotional states than being appropriate for the incar context BIBREF8 Our first contribution is the description of the experimental setup for our data collection Aiming to provoke specific emotions with situations which can occur in realworld driving scenarios and to induce speech interactions the study was conducted in a driving simulator Based on the collected data we provide baseline predictions with offtheshelf tools for face and speech emotion recognition and compare them to a neural networkbased approach for emotion recognition from text Our second contribution is the introduction of 
transfer learning to adapt models trained on established outofdomain corpora to our use case We work on German language therefore the transfer consists of a domain and a language transfer A common approach to encode emotions for facial expressions is the facial action coding system FACS BIBREF9 BIBREF10 BIBREF11 As the reliability and reproducability of findings with this method have been critically discussed BIBREF12 the trend has increasingly shifted to perform the recognition directly on images and videos especially with deep learning For instance jung2015joint developed a model which considers temporal geometry features and temporal appearance features from image sequences kim2016hierarchical propose an ensemble of convolutional neural networks which outperforms isolated networks In the automotive domain FACS is still popular Ma2017 use support vector machines to distinguish happy bothered confused and concentrated based on data from a natural driving environment They found that bothered and confused are difficult to distinguish while happy and concentrated are well identified Aiming to reduce computational cost Tews2011 apply a simple feature extraction using four dots in the face defining three facial areas They analyze the variance of the three facial areas for the recognition of happy anger and neutral Ihme2018 aim at detecting frustration in a simulator environment They induce the emotion with specific scenarios and a demanding secondary task and are able to associate specific face movements according to FACS Paschero2012 use OpenCV httpsopencvorg to detect the eyes and the mouth region and track facial movements They simulate different lightning conditions and apply a multilayer perceptron for the classification task of Ekmans set of fundamental emotions Overall we found that studies using facial features usually focus on continuous driver monitoring often in driveronly scenarios In contrast our work investigates the potential of emotion recognition during speech interactions Past research on emotion recognition from acoustics mainly concentrates on either feature selection or the development of appropriate classifiers rao2013emotion as well as ververidis2004automatic compare local and global features in support vector machines Next to such discriminative approaches hidden Markov models are wellstudied however there is no agreement on which featurebased classifier is most suitable BIBREF13 Similar to the facial expression modality recent efforts on applying deep learning have been increased for acoustic speech processing For instance lee2015high use a recurrent neural network and palaz2015analysis apply a convolutional neural network to the raw speech signal Neumann2017 as well as Trigeorgis2016 analyze the importance of features in the context of deep learningbased emotion recognition In the automotive sector Boril2011 approach the detection of negative emotional states within interactions between driver and codriver as well as in calls of the driver towards the automated spoken dialogue system Using realworld driving data they find that the combination of acoustic features and their respective Gaussian mixture model scores performs best Schuller2006 collects 2000 dialog turns directed towards an automotive user interface and investigate the classification of anger confusion and neutral They show that automatic feature generation and feature selection boost the performance of an SVMbased classifier Further they analyze the performance under systematically added noise and develop 
methods to mitigate negative effects For more details we refer the reader to the survey by Schuller2018 In this work we explore the straightforward application of domain independent software to an incar scenario without domainspecific adaptations Previous work on emotion analysis in natural language processing focuses either on resource creation or on emotion classification for a specific task and domain On the side of resource creation the early and influential work of Pennebaker2015 is a dictionary of words being associated with different psychologically relevant categories including a subset of emotions Another popular resource is the NRC dictionary by Mohammad2012b It contains more than 10000 words for a set of discrete emotion classes Other resources include WordNet Affect BIBREF14 which distinguishes particular word classes Further annotated corpora have been created for a set of different domains for instance fairy tales BIBREF15 Blogs BIBREF16 Twitter BIBREF17 BIBREF18 BIBREF19 BIBREF20 BIBREF21 Facebook BIBREF22 news headlines BIBREF23 dialogues BIBREF24 literature BIBREF25 or self reports on emotion events BIBREF26 see BIBREF27 for an overview To automatically assign emotions to textual units the application of dictionaries has been a popular approach and still is particularly in domains without annotated corpora Another approach to overcome the lack of huge amounts of annotated training data in a particular domain or for a specific topic is to exploit distant supervision use the signal of occurrences of emoticons or specific hashtags or words to automatically label the data This is sometimes referred to as selflabeling BIBREF21 BIBREF28 BIBREF29 BIBREF30 A variety of classification approaches have been tested including SNoW BIBREF15 support vector machines BIBREF16 maximum entropy classification long shortterm memory network and convolutional neural network models BIBREF18 More recently the state of the art is the use of transfer learning from noisy annotations to more specific predictions BIBREF29 Still it has been shown that transferring from one domain to another is challenging as the way emotions are expressed varies between areas BIBREF27 The approach by Felbo2017 is different to our work as they use a huge noisy data set for pretraining the model while we use small high quality data sets instead Recently the state of the art has also been pushed forward with a set of shared tasks in which the participants with top results mostly exploit deep learning methods for prediction based on pretrained structures like embeddings or language models BIBREF21 BIBREF31 BIBREF20 Our work follows this approach and builds up on embeddings with deep learning Furthermore we approach the application and adaption of textbased classifiers to the automotive domain with transfer learning The first contribution of this paper is the construction of the AMMER data set which we describe in the following We focus on the drivers interactions with both a virtual agent as well as a codriver To collect the data in a safe and controlled environment and to be able to consider a variety of predefined driving situations the study was conducted in a driving simulator The study environment consists of a fixedbase driving simulator running Viress VTD Virtual Test Drive v220 simulation software httpsvirescomvtdviresvirtualtestdrive The vehicle has an automatic transmission a steering wheel and gas and brake pedals We collect data from video speech and biosignals Empatica E4 to record heart rate electrodermal 
activity skin temperature not further used in this paper and questionnaires Two RGB cameras are fixed in the vehicle to capture the drivers face one at the sun shield above the drivers seat and one in the middle of the dashboard A microphone is placed on the center console One experimenter sits next to the driver the other behind the simulator The virtual agent accompanying the drive is realized as WizardofOz prototype which enables the experimenter to manually trigger prerecorded voice samples playing trough the incar speakers and to bring new content to the center screen Figure FIGREF4 shows the driving simulator The experimental setting is comparable to an everyday driving task Participants are told that the goal of the study is to evaluate and to improve an intelligent driving assistant To increase the probability of emotions to arise participants are instructed to reach the destination of the route as fast as possible while following traffic rules and speed limits They are informed that the time needed for the task would be compared to other participants The route comprises highways rural roads and city streets A navigation system with voice commands and information on the screen keeps the participants on the predefined track To trigger emotion changes in the participant we use the following events i a car on the right lane cutting off to the left lane when participants try to overtake followed by trucks blocking both lanes with a slow overtaking maneuver ii a skateboarder who appears unexpectedly on the street and iii participants are praised for reaching the destination unexpectedly quickly in comparison to previous participants Based on these events we trigger three interactions Table TABREF6 provides examples with the intelligent agent DriverAgent Interactions DA Pretending to be aware of the current situation e g to recognize unusual driving behavior such as strong braking the agent asks the driver to explain his subjective perception of these events in detail Additionally we trigger two more interactions with the intelligent agent at the beginning and at the end of the drive where participants are asked to describe their mood and thoughts regarding the upcoming drive This results in five interactions between the driver and the virtual agent Furthermore the codriver asks three different questions during sessions with light traffic and low cognitive demand DriverCoDriver Interactions DCo These questions are more general and nontrafficrelated and aim at triggering the participants memory and fantasy Participants are asked to describe their last vacation their dream house and their idea of the perfect job In sum there are eight interactions per participant 5 DA 3 DCo At the beginning of the study participants were welcomed and the upcoming study procedure was explained Subsequently participants signed a consent form and completed a questionnaire to provide demographic information After that the codriving experimenter started with the instruction in the simulator which was followed by a familiarization drive consisting of highway and city driving and covering different driving maneuvers such as tight corners lane changing and strong braking Subsequently participants started with the main driving task The drive had a duration of 20 minutes containing the eight previously mentioned speech interactions After the completion of the drive the actual goal of improving automatic emotional recognition was revealed and a standard emotional intelligence questionnaire namely the TEIQueSF 
BIBREF32 was handed to the participants Finally a retrospective interview was conducted in which participants were played recordings of their incar interactions and asked to give discrete annoyance insecurity joy relaxation boredom none following BIBREF8 was well as dimensional valence arousal dominance BIBREF33 on a 11point scale emotion ratings for the interactions and the according situations We only use the discrete class annotations in this paper Overall 36 participants aged 18 to 64 years mu 2889 sigma 1258 completed the experiment This leads to 288 interactions 180 between driver and the agent and 108 between driver and codriver The emotion selfratings from the participants yielded 90 utterances labeled with joy 26 with annoyance 49 with insecurity 9 with boredom 111 with relaxation and 3 with no emotion One example interaction per interaction type and emotion is shown in Table TABREF7 For further experiments we only use joy annoyanceanger and insecurityfear due to the small sample size for boredom and no emotion and under the assumption that relaxation brings little expressivity We preprocess the visual data by extracting the sequence of images for each interaction from the point where the agents or the codrivers question was completely uttered until the drivers response stops The average length is 163 seconds with the minimum at 22s and the maximum at 547s We apply an offtheshelf tool for emotion recognition the manufacturer cannot be disclosed due to licensing restrictions It delivers framebyframe scores in 0100 for discrete emotional states of joy anger and fear While joy corresponds directly to our annotation we map anger to our label annoyance and fear to our label insecurity The maximal average score across all frames constitutes the overall classification for the video sequence Frames where the software is not able to detect the face are ignored We extract the audio signal for the same sequence as described for facial expressions and apply an offtheshelf tool for emotion recognition The software delivers single classification scores for a set of 24 discrete emotions for the entire utterance We consider the outputs for the states of joy anger and fear mapping analogously to our classes as for facial expressions Lowconfidence predictions are interpreted as no emotion We accept the emotion with the highest score as the discrete prediction otherwise For the emotion recognition from text we manually transcribe all utterances of our AMMER study To exploit existing and available data sets which are larger than the AMMER data set we develop a transfer learning approach We use a neural network with an embedding layer frozen weights pretrained on Common Crawl and Wikipedia BIBREF36 a bidirectional LSTM BIBREF37 and two dense layers followed by a soft max output layer This setup is inspired by BIBREF38 We use a dropout rate of 03 in all layers and optimize with Adam BIBREF39 with a learning rate of 105 These parameters are the same for all further experiments We build on top of the Keras library with the TensorFlow backend We consider this setup our baseline model We train models on a variety of corpora namely the common format published by BIBREF27 of the FigureEight formally known as Crowdflower data set of social media the ISEAR data BIBREF40 selfreported emotional events and the Twitter Emotion Corpus TEC weakly annotated Tweets with anger disgust fear happy sadness and surprise Mohammad2012 From all corpora we use instances with labels fear anger or joy These corpora are English 
however we do predictions on German utterances Therefore each corpus is preprocessed to German with Google Translate We remove URLs user tags Username punctuation and hash signs The distributions of the data sets are shown in Table TABREF12 To adapt models trained on these data we apply transfer learning as follows The model is first trained until convergence on one outofdomain corpus only on classes fear joy anger for compatibility reasons Then the parameters of the biLSTM layer are frozen and the remaining layers are further trained on AMMER This procedure is illustrated in Figure FIGREF13 Table TABREF16 shows the confusion matrices for facial and audio emotion recognition on our complete AMMER data set and Table TABREF17 shows the results per class for each method including facial and audio data and micro and macro averages The classification from facial expressions yields a macroaveraged textF1 score of 33 across the three emotions joy insecurity and annoyance P031 R035 While the classification results for joy are promising R43 P57 the distinction of insecurity and annoyance from the other classes appears to be more challenging Regarding the audio signal we observe a macro textF1 score of 29 P42 R22 There is a bias towards negative emotions which results in a small number of detected joy predictions R4 Insecurity and annoyance are frequently confused The experimental setting for the evaluation of emotion recognition from text is as follows We evaluate the BiLSTM model in three different experiments 1 indomain 2 outofdomain and 3 transfer learning For all experiments we train on the classes angerannoyance fearinsecurity and joy Table TABREF19 shows all results for the comparison of these experimental settings We first set a baseline by validating our models on established corpora We train the baseline model on 60 of each data set listed in Table TABREF12 and evaluate that model with 40 of the data from the same domain results shown in the column InDomain in Table TABREF19 Excluding AMMER we achieve an average micro textF1 of 68 with best results of F173 on TEC The model trained on our AMMER corpus achieves an F1 score of 57 This is most probably due to the small size of this data set and the class bias towards joy which makes up more than half of the data set These results are mostly in line with Bostan2018 Now we analyze how well the models trained in Experiment 1 perform when applied to our data set The results are shown in column Simple in Table TABREF19 We observe a clear drop in performance with an average of F148 The best performing model is again the one trained on TEC en par with the one trained on the Figure8 data The model trained on ISEAR performs second best in Experiment 1 it performs worst in Experiment 2 To adapt models trained on previously existing data sets to our particular application the AMMER corpus we apply transfer learning Here we perform leaveoneout cross validation As pretrained models we use each model from Experiment 1 and further optimize with the training subset of each crossvalidation iteration of AMMER The results are shown in the column Transfer L in Table TABREF19 The confusion matrix is also depicted in Table TABREF16 With this procedure we achieve an average performance of F175 being better than the results from the indomain Experiment 1 The best performance of F176 is achieved with the model pretrained on each data set except for ISEAR All transfer learning models clearly outperform their simple outofdomain counterpart To ensure that this 
performance increase is not only due to the larger data set we compare these results to training the model without transfer on a corpus consisting of each corpus together with AMMER again in leaveoneout crossvalidation These results are depicted in column Joint C Thus both settings transfer learning and joint corpus have access to the same information The results show an increase in performance in contrast to not using AMMER for training however the transfer approach based on partial retraining the model shows a clear improvement for all models by 7pp for Figure8 10pp for EmoInt 8pp for TEC 13pp for ISEAR compared to the Joint setup We described the creation of the multimodal AMMER data with emotional speech interactions between a driver and both a virtual agent and a codriver We analyzed the modalities of facial expressions acoustics and transcribed utterances regarding their potential for emotion recognition during incar speech interactions We applied offtheshelf emotion recognition tools for facial expressions and acoustics For transcribed text we developed a neural networkbased classifier with transfer learning exploiting existing annotated corpora We find that analyzing transcribed utterances is most promising for classification of the three emotional states of joy annoyance and insecurity Our results for facial expressions indicate that there is potential for the classification of joy however the states of annoyance and insecurity are not well recognized Future work needs to investigate more sophisticated approaches to map frame predictions to sequence predictions Furthermore movements of the mouth region during speech interactions might negatively influence the classification from facial expressions Therefore the question remains how facial expressions can best contribute to multimodal detection in speech interactions Regarding the classification from the acoustic signal the application of offtheshelf classifiers without further adjustments seems to be challenging We find a strong bias towards negative emotional states for our experimental setting For instance the personalization of the recognition algorithm e g mean and standard deviation normalization could help to adapt the classification for specific speakers and thus to reduce this bias Further the acoustic environment in the vehicle interior has special properties and the recognition software might need further adaptations Our transfer learningbased text classifier shows considerably better results This is a substantial result in its own as only one previous method for transfer learning in emotion recognition has been proposed in which a sentimentemotion specific source for labels in pretraining has been used to the best of our knowledge BIBREF29 Other applications of transfer learning from general language models include BIBREF41 BIBREF42 Our approach is substantially different not being trained on a huge amount of noisy data but on smaller outofdomain sets of higher quality This result suggests that emotion classification systems which work across domains can be developed with reasonable effort For a productive application of emotion detection in the context of speech events we conclude that a deployed system might perform best with a speechtotext module followed by an analysis of the text Further in this work we did not explore an ensemble model or the interaction of different modalities Thus future work should investigate the fusion of multiple modalities in a single classifier We thank LauraAnaMaria Bostan for discussions 
and data set preparations This research has partially been funded by the German Research Council DFG project SEAT KL 286911 </s>
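The text classifier and transfer procedure described above can be sketched in Keras roughly as follows. The layer sizes, batch size and number of epochs are placeholders rather than the experimental values, the learning rate is read from the description above as 10^-5, and details such as exactly where dropout is applied may differ from the real model.

import tensorflow as tf

def build_text_classifier(embedding_matrix, num_classes=3, lstm_units=128):
    # Frozen pre-trained embeddings, a bidirectional LSTM, two dense layers
    # and a softmax output layer, with dropout of 0.3 between layers.
    vocab_size, emb_dim = embedding_matrix.shape
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(
            vocab_size, emb_dim,
            embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
            trainable=False),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(lstm_units)),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                  loss="sparse_categorical_crossentropy")
    return model

def transfer_to_ammer(pretrained_model, x_ammer, y_ammer, epochs=20):
    # After convergence on an out-of-domain corpus, freeze the biLSTM and
    # continue training only the remaining layers on the small AMMER data.
    for layer in pretrained_model.layers:
        if isinstance(layer, tf.keras.layers.Bidirectional):
            layer.trainable = False
    pretrained_model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                             loss="sparse_categorical_crossentropy")
    pretrained_model.fit(x_ammer, y_ammer, epochs=epochs, batch_size=32)
    return pretrained_model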
<s>[INST] Revisiting Low-Resource Neural Machine Translation: A Case Study [/INST] While neural machine translation NMT has achieved impressive performance in highresource data conditions becoming dominant in the field BIBREF0 BIBREF1 BIBREF2 recent research has argued that these models are highly datainefficient and underperform phrasebased statistical machine translation PBSMT or unsupervised methods in lowdata conditions BIBREF3 BIBREF4 In this paper we reassess the validity of these results arguing that they are the result of lack of system adaptation to lowresource settings Our main contributions are as follows Figure FIGREF4 reproduces a plot by BIBREF3 which shows that their NMT system only outperforms their PBSMT system when more than 100 million words approx 5 million sentences of parallel training data are available Results shown by BIBREF4 are similar showing that unsupervised NMT outperforms supervised systems if few parallel resources are available In both papers NMT systems are trained with hyperparameters that are typical for highresource settings and the authors did not tune hyperparameters or change network architectures to optimize NMT for lowresource conditions The bulk of research on lowresource NMT has focused on exploiting monolingual data or parallel data involving other language pairs Methods to improve NMT with monolingual data range from the integration of a separately trained language model BIBREF5 to the training of parts of the NMT model with additional objectives including a language modelling objective BIBREF5 BIBREF6 BIBREF7 an autoencoding objective BIBREF8 BIBREF9 or a roundtrip objective where the model is trained to predict monolingual targetside training data that has been backtranslated into the source language BIBREF6 BIBREF10 BIBREF11 As an extreme case models that rely exclusively on monolingual data have been shown to work BIBREF12 BIBREF13 BIBREF14 BIBREF4 Similarly parallel data from other language pairs can be used to pretrain the network or jointly learn representations BIBREF15 BIBREF16 BIBREF17 BIBREF18 BIBREF19 BIBREF20 BIBREF21 While semisupervised and unsupervised approaches have been shown to be very effective for some language pairs their effectiveness depends on the availability of large amounts of suitable auxiliary data and other conditions being met For example the effectiveness of unsupervised methods is impaired when languages are morphologically different or when training domains do not match BIBREF22 More broadly this line of research still accepts the premise that NMT models are datainefficient and require large amounts of auxiliary data to train In this work we want to revisit this point and will focus on techniques to make more efficient use of small amounts of parallel training data Lowresource NMT without auxiliary data has received less attention work in this direction includes BIBREF23 BIBREF24 We consider the hyperparameters used by BIBREF3 to be our baseline This baseline does not make use of various advances in NMT architectures and training tricks In contrast to the baseline we use a BiDeep RNN architecture BIBREF25 label smoothing BIBREF26 dropout BIBREF27 word dropout BIBREF28 layer normalization BIBREF29 and tied embeddings BIBREF30 Subword representations such as BPE BIBREF31 have become a popular choice to achieve openvocabulary translation BPE has one hyperparameter the number of merge operations which determines the size of the final vocabulary For highresource settings the effect of vocabulary size on 
translation quality is relatively small BIBREF32 report mixed results when comparing vocabularies of 30k and 90k subwords In lowresource settings large vocabularies result in lowfrequency subwords being represented as atomic units at training time and the ability to learn good highdimensional representations of these is doubtful BIBREF33 propose a minimum frequency threshold for subword units and splitting any less frequent subword into smaller units or characters We expect that such a threshold reduces the need to carefully tune the vocabulary size to the dataset leading to more aggressive segmentation on smaller datasets Due to long training times hyperparameters are hard to optimize by grid search and are often reused across experiments However best practices differ between highresource and lowresource settings While the trend in highresource settings is towards using larger and deeper models BIBREF24 use smaller and fewer layers for smaller datasets Previous work has argued for larger batch sizes in NMT BIBREF35 BIBREF36 but we find that using smaller batches is beneficial in lowresource settings More aggressive dropout including dropping whole words at random BIBREF37 is also likely to be more important We report results on a narrow hyperparameter search guided by previous work and our own intuition Finally we implement and test the lexical model by BIBREF24 which has been shown to be beneficial in lowdata conditions The core idea is to train a simple feedforward network the lexical model jointly with the original attentional NMT model The input of the lexical model at time step INLINEFORM0 is the weighted average of source embeddings INLINEFORM1 the attention weights INLINEFORM2 are shared with the main model After a feedforward layer with skip connection the lexical models output INLINEFORM3 is combined with the original models hidden state INLINEFORM4 before softmax computation INLINEFORM5 Our implementation adds dropout and layer normalization to the lexical model We use the TED data from the IWSLT 2014 German INLINEFORM0 English shared translation task BIBREF38 We use the same data cleanup and traindev split as BIBREF39 resulting in 159000 parallel sentences of training data and 7584 for development As a second language pair we evaluate our systems on a KoreanEnglish dataset with around 90000 parallel sentences of training data 1000 for development and 2000 for testing For both PBSMT and NMT we apply the same tokenization and truecasing using Moses scripts For NMT we also learn BPE subword segmentation with 30000 merge operations shared between German and English and independently for Korean INLINEFORM0 English To simulate different amounts of training resources we randomly subsample the IWSLT training corpus 5 times discarding half of the data at each step Truecaser and BPE segmentation are learned on the full training corpus as one of our experiments we set the frequency threshold for subword units to 10 in each subcorpus see SECREF7 Table TABREF14 shows statistics for each subcorpus including the subword vocabulary Translation outputs are detruecased detokenized and compared against the reference with cased BLEU using sacreBLEU BIBREF40 BIBREF41 Like BIBREF39 we report BLEU on the concatenated dev sets for IWSLT 2014 tst2010 tst2011 tst2012 dev2010 dev2012 We use Moses BIBREF42 to train a PBSMT system We use MGIZA BIBREF43 to train word alignments and lmplz BIBREF44 for a 5gram LM Feature weights are optimized on the dev set to maximize BLEU with batch MIRA BIBREF45 we perform 
We use the TED data from the IWSLT 2014 German INLINEFORM0 English shared translation task BIBREF38 We use the same data cleanup and traindev split as BIBREF39 resulting in 159000 parallel sentences of training data and 7584 for development As a second language pair we evaluate our systems on a KoreanEnglish dataset with around 90000 parallel sentences of training data 1000 for development and 2000 for testing For both PBSMT and NMT we apply the same tokenization and truecasing using Moses scripts For NMT we also learn BPE subword segmentation with 30000 merge operations shared between German and English and independently for Korean INLINEFORM0 English To simulate different amounts of training resources we randomly subsample the IWSLT training corpus 5 times discarding half of the data at each step Truecaser and BPE segmentation are learned on the full training corpus in one of our experiments we set the frequency threshold for subword units to 10 in each subcorpus see SECREF7 Table TABREF14 shows statistics for each subcorpus including the subword vocabulary Translation outputs are detruecased detokenized and compared against the reference with cased BLEU using sacreBLEU BIBREF40 BIBREF41 Like BIBREF39 we report BLEU on the concatenated dev sets for IWSLT 2014 tst2010 tst2011 tst2012 dev2010 dev2012 We use Moses BIBREF42 to train a PBSMT system We use MGIZA BIBREF43 to train word alignments and lmplz BIBREF44 for a 5gram LM Feature weights are optimized on the dev set to maximize BLEU with batch MIRA BIBREF45 we perform multiple runs where indicated Unlike BIBREF3 we do not use extra data for the LM Both PBSMT and NMT can benefit from monolingual data so the availability of monolingual data is no longer an exclusive advantage of PBSMT see SECREF5 We train neural systems with Nematus BIBREF46 Our baseline mostly follows the settings in BIBREF3 we use adam BIBREF47 and perform early stopping based on dev set BLEU We express our batch size in number of tokens and set it to 4000 in the baseline comparable to a batch size of 80 sentences used in previous work We subsequently add the methods described in section SECREF3 namely the bideep RNN label smoothing dropout tied embeddings layer normalization changes to the BPE vocabulary size batch size model depth regularization parameters and learning rate Detailed hyperparameters are reported in Appendix SECREF7 Table TABREF18 shows the effect of adding different methods to the baseline NMT system on the ultralow data condition 100k words of training data and the full IWSLT 14 training corpus 3.2M words Our mainstream improvements add around 6-7 BLEU in both data conditions In the ultralow data condition reducing the BPE vocabulary size is very effective (+4.9 BLEU) Reducing the batch size to 1000 tokens results in a BLEU gain of 0.3 and the lexical model yields an additional 0.6 BLEU However aggressive word dropout (+3.4 BLEU) and tuning other hyperparameters (+0.7 BLEU) have a stronger effect than the lexical model and adding the lexical model (9) on top of the optimized configuration (8) does not improve performance Together the adaptations to the ultralow data setting yield +9.4 BLEU (7.2 INLINEFORM2 16.6) The model trained on full IWSLT data is less sensitive to our changes (31.9 INLINEFORM3 32.8 BLEU) and optimal hyperparameters differ depending on the data condition Subsequently we still apply the hyperparameters that were optimized to the ultralow data condition (8) to other data conditions and Korean INLINEFORM4 English for simplicity For a comparison with PBSMT and across different data settings consider Figure FIGREF19 which shows the results of PBSMT our NMT baseline and our optimized NMT system Our NMT baseline still performs worse than the PBSMT system for 3.2M words of training data which is consistent with the results by BIBREF3 However our optimized NMT system shows strong improvements and outperforms the PBSMT system across all data settings Some sample translations are shown in Appendix SECREF8 For comparison to previous work we report lowercased and tokenized results on the full IWSLT 14 training set in Table TABREF20 Our results far outperform the RNNbased results reported by BIBREF48 and are on par with the best reported results on this dataset Table TABREF21 shows results for Korean INLINEFORM0 English using the same configurations 1 2 and 8 as for GermanEnglish Our results confirm that the techniques we apply are successful across datasets and result in stronger systems than previously reported on this dataset achieving 10.37 BLEU as compared to 5.97 BLEU reported by guEtAl2018EMNLP1 Our results demonstrate that NMT is in fact a suitable choice in lowdata settings and can outperform PBSMT with far less parallel training data than previously claimed Recently the main trend in lowresource MT research has been the better exploitation of monolingual and multilingual resources Our results show that lowresource NMT is very sensitive to hyperparameters such as BPE vocabulary size word dropout and others and by following a set of best practices we can train competitive NMT systems
without relying on auxiliary resources This has practical relevance for languages where large amounts of monolingual data or multilingual data involving related languages are not available Even though we focused on only using parallel data our results are also relevant for work on using auxiliary data to improve lowresource MT Supervised systems serve as an important baseline to judge the effectiveness of semisupervised or unsupervised approaches and the quality of supervised systems trained on little data can directly impact semisupervised workflows for instance for the backtranslation of monolingual data Rico Sennrich has received funding from the Swiss National Science Foundation in the project CoNTra grant number 105212169888 Biao Zhang acknowledges the support of the Baidu Scholarship Table TABREF23 lists hyperparameters used for the different experiments in the ablation study Table 2 Hyperparameters were kept constant across different data settings except for the validation interval and subword vocabulary size see Table 1 Table TABREF24 shows some sample translations that represent typical errors of our PBSMT and NMT systems trained with ultralow 100k words and low 32M words amounts of data For unknown words such as blutbefleckten bloodstained or Spaniern Spaniards Spanish PBSMT systems default to copying while NMT systems produce translations on a subwordlevel with varying success blueflect bleed spaniers Spanians NMT systems learn some syntactic disambiguation even with very little data for example the translation of das and die as relative pronouns that which who while PBSMT produces less grammatical translation On the flip side the ultra lowresource NMT system ignores some unknown words in favour of a moreorless fluent but semantically inadequate translation erobert conquered is translated into doing and richtig aufgezeichnet registered correctly recorded correctly into really the first thing </s>
<s>[INST] Facilitating on-line opinion dynamics by mining expressions of causation. The case of climate change debates on The Guardian [/INST] Over the past two decades the rise of social media and the digitization of news and discussion platforms have radically transformed how individuals and groups create process and share news and information As Alan Rusbridger formereditorinchief of the newspaper The Guardian has it these technologicallydriven shifts in the ways people communicate organize themselves and express their beliefs and opinions have empowered those that were never heard creating a a new form of politics and turning traditional news corporations inside out It is impossible to think of Donald Trump of Brexit of Bernie Sanders of Podemos of the growth of the far right in Europe of the spasms of hope and violent despair in the Middle East and North Africa without thinking also of the total inversion of how news is created shared and distributed Much of it is liberating and and inspiring Some of it is ugly and dark And something the centuriesold craft of journalism is in danger of being lost BIBREF0 Rusbridgers observation that the present mediaecology puts traditional notions of politics journalism trust and truth at stake is a widely shared one BIBREF1 BIBREF2 BIBREF3 As such it has sparked interdisciplinary investigations diagnoses and ideas for remedies across the economical sociopolitical and technological spectrum challenging our existing assumptions and epistemologies BIBREF4 BIBREF5 Among these lines of inquiry particular strands of research from the computational social sciences are addressing pressing questions of how emerging technologies and digital methods might be operationalized to regain a grip on the dynamics that govern the flow of online news and its associated multitudes of voices opinions and conflicts Could the information circulating on online social news platforms for instance be mined to better understand and analyze the problems facing our contemporary society Might such data mining and analysis help us to monitor the growing number of social conflicts and crises due to cultural differences and diverging worldviews And finally would such an approach potentially facilitate early detection of conflicts and even ways to resolve them before they turn violent Answering these questions requires further advances in the study of cultural conflict based on digital media data This includes the development of finegrained representations of cultural conflict based on theoreticallyinformed text analysis the integration of gametheoretical approaches to models of polarization and alignment as well as the construction of accessible tools and mediamonitoring observatories platforms that foster insight into the complexities of social behaviour and opinion dynamics through automated computational analyses of social media data Through an interdisciplinary approach the present article aims to make both a practical and theoretical contribution to these aspects of the study of opinion dynamics and conflict in new media environments The objective of the present article is to critically examine possibilities and limitations of machineguided exploration and potential facilitation of online opinion dynamics on the basis of an experimental data analytics pipeline or observatory for mining and analyzing climate changerelated user comments from the news website of The Guardian TheGuardiancom Combining insights from the social and political sciences with computational methods for the 
linguistic analysis of texts this observatory provides a series of spatial network representations of the opinion landscapes on climate change on the basis of causation frames expressed in news website comments This allows for the exploration of opinion spaces at different levels of detail and aggregation Technical and theoretical questions related to the proposed method and infrastructure for the exploration and facilitation of debates will be discussed in three sections The first section concerns notions of how to define what constitutes a belief or opinion and how these can be mined from texts To this end an approach based on the automated extraction of semantic frames expressing causation is proposed The observatory thus builds on the theoretical premise that expressions of causation such as global warming causes rises in sea levels can be revelatory for a person or groups underlying belief systems Through a further technical description of the observatorys dataanalytical components section two of the paper deals with matters of spatially modelling the output of the semantic frame extractor and how this might be achieved without sacrificing nuances of meaning The final section of the paper then discusses how insights gained from technologically observing opinion dynamics can inform conceptual modelling efforts and approaches to online opinion facilitation As such the paper brings into view and critically evaluates the fundamental conceptual leap from machineguided observation to debate facilitation and intervention Through the case examples from The Guardians website and the theoretical discussions explored in these sections the paper intends to make a twofold contribution to the fields of media studies opinion dynamics and computational social science Firstly the paper introduces and chains together a number of data analytics components for social media monitoring and facilitation that were developed in the context of the project name anonymized for review infrastructure project The project name anonymized for review infrastructure makes the components discussed in this paper available as open web services in order to foster reproducibility and further experimentation and development infrastructure reference URL anonymized for review Secondly and supplementing these technological and methodological gains the paper addresses a number of theoretical epistemological and ethical questions that are raised by experimental approaches to opinion exploration and facilitation This notably includes methodological questions on the preservation of meaning through text and data mining as well as the role of human interpretation responsibility and incentivisation in observing and potentially facilitating opinion dynamics In order to study online opinion dynamics and build the corresponding climate change opinion observatory discussed in this paper a corpus of climatechange related news articles and news website comments was analyzed Concretely articles from the climate change subsection from the news website of The Guardian dated from 2009 up to April 2019 were processed along with up to 200 comments and associated metadata for articles where commenting was enabled at the time of publication The choice for studying opinion dynamics using data from The Guardian is motivated by this news websites prominent position in the media landscape as well as its communicative setting which is geared towards user engagement Through this interaction with readers the news platform embodies many of the recent 
shifts that characterize our presentday media ecology TheGuardiancom is generally acknowledged to be one of the UKs leading online newspapers with 82 million unique visitors per month as of May 2013 BIBREF6 The website consists of a core news site as well as a range of subsections that allow for further classification and navigation of articles Articles related to climate change can for instance be accessed by navigating through the News section over the subsection environment to the subsubsection climate change BIBREF7 All articles on the website can be read free of charge as The Guardian relies on a business model that combines revenues from advertising voluntary donations and paid subscriptions Apart from offering highquality independent journalism on a range of topics a distinguishing characteristic of The Guardian is its penchant for reader involvement and engagement Adopting to the changing media landscape and appropriating business models that fit the transition from print to online news media the Guardian has transformed itself into a platform that enables forms of citizen journalism blogging and welcomes readers comments on news articles BIBREF0 In order for a reader to comment on articles it is required that a user account is made which provides a user with a unique user name and a user profile page with a stable URL According to the websites help pages providing users with an identity that is consistently recognized by the community fosters proper online community behaviour BIBREF8 Registered users can post comments on content that is open to commenting and these comments are moderated by a dedicated moderation team according to The Guardians community standards and participation guidelines BIBREF9 In support of digital methods and innovative approaches to journalism and data mining The Guardian has launched an open API application programming interface through which developers can access different types of content BIBREF10 It should be noted that at the moment of writing this article readers comments are not accessible through this API For the scientific and educational purposes of this paper comments were thus consulted using a dedicated scraper Taking into account this community and technologicallydriven orientation the communicative setting of The Guardian from which opinions are to be mined and the underlying belief system revealed is defined by articles participating commenters and comment spheres that is the actual comments aggregated by user individual article or collection of articles see Figure FIGREF4 In this setting articles and previous comments on those articles can be commented on by participating commenters each of which bring to the debate his or her own opinions or belief system What this belief system might consists of can be inferred on a number of levels with varying degrees of precision On the most general level a generic description of the profile of the average reader of The Guardian can be informative Such profiles have been compiled by market researchers with the purpose of informing advertisers about the demographic that might be reached through this news website and other products carrying The Guardians brand As of the writing of this article the audience The Guardian is presented to advertisers as a progressive audience Living in a world of unprecedented societal change with the public narratives around politics gender body image sexuality and diet all being challenged The Guardian is committed to reflecting the progressive agenda and reaching the 
crowd that uphold those values Its helpful that we reach over half of progressives in the UK BIBREF11 A second equally highlevel indicator of the beliefs that might be present on the platform are the links through which articles on climate change can be accessed An article on climate change might for instance be consulted through the environment section of the news website but also through the business section Assuming that business interests might potentially be at odds with environmental concerns it could be hypothesized that the particular comment sphere for that article consists of at least two potentially clashing frames of mind or belief systems However as will be expanded upon further in this article truly capturing opinion dynamics requires a more systemic and finegrained approach The present article therefore proposes a method for harvesting opinions from the actual comment texts The presupposition is thereby that comment spheres are marked by a diversity of potentially related opinions and beliefs Opinions might for instance be connected through the reply structure that marks the comment section of an article but this connection might also manifest itself on a semantic level that is the level of meaning or the actual contents of the comments To capture this multidimensional interconnected nature of the comment spheres it is proposed to represent comment spheres as networks where the nodes represent opinions and beliefs and edges the relationships between these beliefs see the spatial representation of beliefs infra The use of precision language tools to extract such beliefs and their mutual relationships as will be explored in the following sections can open up new pathways of model validation and creation In traditional experimental settings survey techniques and associated statistical models provide researchers with established methods to gauge and analyze the opinions of a population When studying opinion landscapes through online social media however harvesting beliefs from big textual data such as news website comments and developing or appropriating models for their analysis is a nontrivial task BIBREF12 BIBREF13 BIBREF14 In the present context two challenges related to datagathering and text mining need to be addressed 1 defining what constitutes an expression of an opinion or belief and 2 associating this definition with a pattern that might be extracted from texts Recent scholarship in the fields of natural language processing NLP and argumentation mining has yielded a range of instruments and methods for the automatic identification of argumentative claims in texts BIBREF15 BIBREF16 Adding to these instruments and methods the present article proposes an approach in which belief systems or opinions on climate change are accessed through expressions of causation The climate change debate is often characterized by expressions of causation that is expressions linking a certain cause with a certain effect Cultural or societal clashes on climate change might for instance concern diverging assessments of whether global warming is manmade or not BIBREF17 Based on such examples it can be stated that expressions of causation are closely associated with opinions or beliefs and that as such these expressions can be considered a valuable indicator for the range and diversity of the opinions and beliefs that constitute the climate change debate The observatory under discussion therefore focuses on the extraction and analysis of linguistic patterns called causation frames As will be 
further demonstrated in this section the benefit of this causationbased approach is that it offers a systemic approach to opinion dynamics that comprises different layers of meaning notably the cognitive or social meaningfulness of patterns on account of their being expressions of causation as well as further lexical and semantic information that might be used for analysis and comparison The study of expressions of causation as a method for accessing and assessing belief systems and opinions has been formalized and streamlined since the 1970s Pioneered by political scientist Robert Axelrod and others this causal mapping method also referred to as cognitive mapping was introduced as a means of reconstructing and evaluating administrative and political decisionmaking processes based on the principle that the notion of causation is vital to the process of evaluating alternatives Regardless of philosophical difficulties involved in the meaning of causation people do evaluate complex policy alternatives in terms of the consequences a particular choice would cause and ultimately of what the sum of these effects would be Indeed such causal analysis is built into our language and it would be very difficult for us to think completely in other terms even if we tried BIBREF18 Axelrods causal mapping method comprises a set of conventions to graphically represent networks of causes and effects the nodes in a network as well as the qualitative aspects of this relation the networks directed edges notably assertions of whether the causal linkage is positive or negative These causes and effects are to be extracted from relevant sources by means of a series of heuristics and an encoding scheme it should be noted that for this task Axelrod had human readers in mind The graphs resulting from these efforts provide a structural overview of the relations among causal assertions and thus beliefs The basic elements of the proposed system are quite simple The concepts a person uses are represented as points and the causal links between these concepts are represented as arrows between these points This gives a pictorial representation of the causal assertions of a person as a graph of points and arrows This kind of representation of assertions as a graph will be called a cognitive map The policy alternatives all of the various causes and effects the goals and the ultimate utility of the decision maker can all be thought of as concept variables and represented as points in the cognitive map The real power of this approach appears when a cognitive map is pictured in graph form it is then relatively easy to see how each of the concepts and causal relationships relate to each other and to see the overall structure of the whole set of portrayed assertions BIBREF18 In order to construct these cognitive maps based on textual information Margaret Tucker Wrightson provides a set of reading and coding rules for extracting cause concepts linkages relations and effect concepts from expressions in the English language The assertion Our present topic is the militarism of Germany which is maintaining a state of tension in the Baltic Area might for instance be encoded as follows the militarism of Germany cause concept a positive relationship maintaining a state of tension in the Baltic area effect concept BIBREF19 Emphasizing the role of human interpretation it is acknowledged that no strict set of rules can capture the entire spectrum of causal assertions The fact that the English language is as varied as those who use it makes 
the coders task complex and difficult No set of rules will completely solve the problems he or she might encounter These rules however provide the coder with guidelines which if conscientiously followed will result in outcomes meeting social scientific standards of comparative validity and reliability BIBREF19 To facilitate the task of encoders the causal mapping method has gone through various iterations since its original inception all the while preserving its original premises Recent software packages have for instance been devised to support the data encoding and drawing process BIBREF20 As such causal or cognitive mapping has become an established opinion and decision mining method within political science business and management and other domains It has notably proven to be a valuable method for the study of recent societal and cultural conflicts Thomas HomerDixon et al for instance rely on cognitiveaffective maps created from survey data to analyze interpretations of the housing crisis in Germany Israeli attitudes toward the Western Wall and moderate versus skeptical positions on climate change BIBREF21 Similarly Duncan Shaw et al venture to answer the question of Why did Brexit happen by building causal maps of nine televised debates that were broadcast during the four weeks leading up to the Brexit referendum BIBREF22 In order to appropriate the method of causal mapping to the study of online opinion dynamics it needs to be expanded from applications at the scale of human readers and relatively small corpora of archival documents and survey answers to the realm of big textual data and larger quantities of information This attuning of cognitive mapping methods to the largescale processing of texts required for media monitoring necessarily involves a degree of automation as will be explored in the next section As outlined in the previous section causal mapping is based on the extraction of socalled cause concepts causal relations and effect concepts from texts The complexity of each of these concepts can range from the relatively simple as illustrated by the easilyidentifiable cause and effect relation in the example of German militarism cited earlier to more complex assertions such as The development of international cooperation in all fields across the ideological frontiers will gradually remove the hostility and fear that poison international relations which contains two effect concepts viz the hostility that poisons international relations and the fear that poisons international relations As such this statement would have to be encoded as a double relationship BIBREF19 The coding guidelines in BIBREF19 further reflect that extracting cause and effect concepts from texts is an operation that works on both the syntactical and semantic levels of assertions This can be illustrated by means of the guidelines for analyzing the aforementioned causal assertion on German militarism 1 The first step is the realization of the relationship Does a subject affect an object 2 Having recognized that it does the isolation of the cause and effect concepts is the second step As the sentence structure indicates the militarism of Germany is the causal concept because it is the initiator of the action while the direct object clause a state of tension in the Baltic area constitutes that which is somehow influenced the effect concept BIBREF19 In the field of computational linguistics from which the present paper borrows part of its methods this procedure for extracting information related to causal assertions from texts can be considered an instance of an operation called semantic frame extraction BIBREF23 A semantic frame captures a coherent part of the meaning of a sentence in a structured way As documented in the FrameNet project BIBREF24 the Causation frame is defined as follows A Cause causes an Effect Alternatively an Actor a participant of an implicit Cause may stand in for the Cause The entity Affected by the Causation may stand in for the overall Effect situation or event BIBREF25 In a linguistic utterance such as a statement in a news website comment the Causation frame can be evoked by a series of lexical units such as cause bring on etc In the example If such a small earthquake CAUSES problems just imagine a big one the Causation frame is triggered by the verb causes which therefore is called the frame evoking element The Cause slot is filled by a small earthquake the Effect slot by problems BIBREF25 In order to automatically mine cause and effect concepts from the corpus of comments on The Guardian the present paper uses the Penelope semantic frame extractor a tool that exploits the fact that semantic frames can be expressed as formmeaning mappings called constructions Notably frames were extracted from Guardian comments by focusing on the following lexical units verbs prepositions and conjunctions listed in FrameNet as frame evoking elements of the Causation frame Cause (v), Due to (prep), Because (c), Because of (prep), Give rise to (v), Lead to (v) or Result in (v) The strings output by the semantic frame extractor adhere closely to the original utterances preserving all of the realworld noisiness of the comments causation frames
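The Penelope semantic frame extractor itself is construction-based and is not reproduced here. Purely to illustrate the kind of pattern that is being matched, a much simpler dependency-based sketch for the verbal trigger "cause", written with spaCy, might look as follows; the library choice, the rule details and the example output are our own assumptions and are not part of the observatory pipeline.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

CAUSAL_VERBS = {"cause"}  # prepositional/conjunctive triggers need extra rules

def extract_causation(text):
    """Very rough cause/effect extraction for verbal triggers like 'cause'."""
    frames = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.lemma_ in CAUSAL_VERBS and token.pos_ == "VERB":
                cause = [t for t in token.children if t.dep_ in ("nsubj", "nsubjpass")]
                effect = [t for t in token.children if t.dep_ in ("dobj", "obj")]
                if cause and effect:
                    frames.append({
                        "cause": " ".join(w.text for w in cause[0].subtree),
                        "effect": " ".join(w.text for w in effect[0].subtree),
                    })
    return frames

print(extract_causation("If such a small earthquake causes problems, just imagine a big one."))
# expected output (parser dependent): [{'cause': 'such a small earthquake', 'effect': 'problems'}]
```

Handling the prepositional and conjunctive triggers such as "due to" or "because of" requires additional rules of the same flavour.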
The output of the semantic frame extractor as such is used as the input for the ensuing pipeline components in the climate change opinion observatory The aim of a further analysis of these frames is to find patterns in the beliefs and opinions they express As will be discussed in the following section which focuses on applications and cases maintaining semantic nuances in this further analytic process foregrounds the role of models and aggregation levels Based on the presupposition that relations between causation frames reveal beliefs the output of the semantic frame extractor creates various opportunities for exploring opinion landscapes and empirically validating conceptual models for opinion dynamics In general any alignment of conceptual models and realworld data is an exercise in compromising as the idealized abstract nature of models is likely to be at odds with the messiness of the actual data Finding such a compromise might for instance involve a reduction of the simplicity or elegance of the model or on the other hand an increased aggregation and thus reduced granularity of the data Addressing this challenge the current section reflects on questions of data modelling aggregation and meaning by exploring through case examples different spatial representations of opinion landscapes mined from the TheGuardiancom comment sphere These spatial renditions will be understood as network visualizations in which nodes represent argumentative statements beliefs and edges the degree of similarity between these statements On the most general level then such a representation can consist of an overview of all the causes expressed in the corpus of climate changerelated Guardian comments This type of visualization provides a birdseye view of the entire opinion landscape as mined from the comment texts In turn such a general overview
might elicit more finegrained microlevel investigations in which a particular cause is singled out and its more specific associated effects are mapped These macro and micro level overviews come with their own proper potential for theory building and evaluation as well as distinct requirements for the depth or detail of meaning that needs to be represented To get the most general sense of an opinion landscape one might for instance be more tolerant of abstract renditions of beliefs eg by reducing statements to their most frequently used terms but for more finegrained analysis one requires more context and nuance eg adhering as closely as possible to the original comment As follows from the above one of the most fundamental questions when building automated tools to observe opinion dynamics that potentially aim at advising means of debate facilitation concerns the level of meaning aggregation A clear argumentative or causal association between for instance climate change and catastrophic events such as floods or hurricanes may become detectable by automatic causal frame tracking at the scale of large collections of articles where this association might appear statistically more often but detection comes with great challenges when the aim is to classify certain sets of only a few statements in more free expression environments such as comment spheres In other words the problem of meaning aggregation is closely related to issues of scale and aggregation over utterances The more finegrained the semantic resolution is that is the more specific the cause or effect is that one is interested in the less probable it is to observe the same statement twice Moreover with every independent variable such as time different commenters or user groups etc less data on which finegrained opinion statements are to be detected is available In the present case of parsed comments from TheGuardiancom providing insights into the belief system of individual commenters even if all their statements are aggregated over time relies on a relatively small set of argumentative statements This relative sparseness is in part due to the fact that the scope of the semantic frame extractor is confined to the frame evoking elements listed earlier thus omitting more implicit assertions of causation ie expressions of causation that can only be derived from context and from reading between the lines Similarly as will be explored in the ensuing paragraphs matters of scale and aggregation determine the types of further linguistic analyses that can be performed on the output of the frame extractor Within the field of computational linguistics various techniques have been developed to represent the meaning of words as vectors that capture the contexts in which these words are typically used Such analyses might reveal patterns of statistical significance but it is also likely that in creating novel numerical representations of the original utterances the semantic structure of argumentatively linked beliefs is lost In sum developing opinion observatories and potential debate facilitators entails finding a tradeoff or in fact a middle way between macro and microlevel analyses On the one hand one needs to leverage automated analysis methods at the scale of larger collections to maximum advantage But one also needs to integrate opportunities to interactively zoom into specific aspects of interest and provide more finegrained information at these levels down to the actual statements This interplay between macro and microlevel analyses is 
explored in the case studies below The main purpose of the observatory under discussion is to provide insight into the belief structures that characterize the opinion landscape on climate change For reasons outlined above this raises questions of how to represent opinions and correspondingly determining which representation is most suited as the atomic unit of comparison between opinions In general terms the desired outcome of further processing of the output of the semantic frame extractor is a network representation in which similar cause or effect strings are displayed in close proximity to one another A highlevel description of the pipeline under discussion thus goes as follows In a first step it can be decided whether one wants to map cause statements or effect statements Next the selected statements are grouped per commenter ie a list is made of all cause statements or effect statements per commenter These statements are filtered in order to retain only nouns adjectives and verbs thereby also omitting frequently occurring verbs such as to be The remaining words are then lemmatized that is reduced to their dictionary forms This output is finally translated into a network representation whereby nodes represent aggregated statements and edges express the semantic relatedness between statements based on a set overlap whereby the number of shared lemmata are counted As illustrated by two spatial renditions that were created using this approach and visualized using the network analysis tool Gephi BIBREF26 the labels assigned to these nodes lemmata full statements or other can be appropriated to the scope of the analysis Suppose one wants to get a first idea about the scope and diversity of an opinion landscape without any preconceived notions of this landscapes structure or composition One way of doing this would be to map all of the causes that are mentioned in comments related to articles on climate change that is creating an overview of all the causes that have been retrieved by the frame extractor in a single representation Such a representation would not immediately provide the granularity to state what the beliefs or opinions in the debates actually are but rather it might inspire a sense of what those opinions might be about thus pointing towards potentially interesting phenomena that might warrant closer examination Figure FIGREF10 a highlevel overview of the opinion landscape reveals a number of areas to which opinions and beliefs might pertain The topleft clusters in the diagram for instance reveal opinions about the role of people and countries whereas on the righthand side we find a complementary cluster that might point to beliefs concerning the influence of high or increased CO2emissions In between there is a cluster on power and energy sources reflecting the energy debates association to both issues of human responsibility and CO2 emissions As such the overview can already inspire potentially at best some very general hypotheses about the types of opinions that figure in the climate change debate Based on the range of topics on which beliefs are expressed a microlevel analysis can be conducted to reveal what those beliefs are and for instance whether they align or contradict each other This can be achieved by singling out a cause of interest and mapping out its associated effects As revealed by the global overview of the climate change opinion landscape a portion of the debate concerns power and energy sources One topic with a particularly interesting role in this debate is 
nuclear power Figure FIGREF12 illustrates how a more detailed representation of opinions on this matter can be created by spatially representing all of the effects associated with causes containing the expression nuclear power Again similar beliefs in terms of words used in the effects are positioned closer to each other thus facilitating the detection of clusters Commenters on The Guardian for instance express concerns about the deaths or extinction that might be caused by this energy resource They also voice opinions on its cleanliness whether or not it might decrease pollution or be its own source of pollution and how it reduces CO2emissions in different countries Whereas the detailed opinion landscape on nuclear power is relatively limited in terms of the number of mined opinions other topics might reveal more elaborate belief systems This is for instance the case for the phenomenon of global warming As shown in Figure FIGREF13 opinions on global warming are clustered around the idea of increases notably in terms of evaporation drought heat waves intensity of cyclones and storms etc An adjacent cluster is related to extremes such as extreme summers and weather events but also extreme colds The observatory introduced in the preceding paragraphs provides preliminary insights into the range and scope of the beliefs that figure in climate change debates on TheGuardiancom The observatory as such takes a distinctly descriptive stance and aims to satisfy at least in part the information needs of researchers activists journalists and other stakeholders whose main concern is to document investigate and understand online opinion dynamics However in the current information sphere which is marked by polarization misinformation and a close entanglement with realworld conflicts taking a mere descriptive or neutral stance might not serve every stakeholders needs Indeed given the often skewed relations between power and information questions arise as to how media observations might in turn be translated into political social or economic action Knowledge about opinion dynamics might for instance inform interventions that remedy polarization or disarm conflict In other words the construction of social media observatories unavoidably lifts questions about the possibilities limitations and especially implications of the machineguided and humanincentivized facilitation of online discussions and debates Addressing these questions the present paragraph introduces and explores the concept of a debate facilitator that is a device that extends the capabilities of the previously discussed observatory to also promote more interesting and constructive discussions Concretely we will conceptualize a device that reveals how the personal opinion landscapes of commenters relate to each other in terms of overlap or lack thereof and we will discuss what steps might potentially be taken on the basis of such representation to balance the debate Geared towards possible interventions in the debate such a device may thus go well beyond the observatorys objectives of making opinion processes and conflicts more transparent which concomitantly raises a number of serious concerns that need to be acknowledged On rather fundamental ground tools that steer debates in one way or another may easily become manipulative and dangerous instruments in the hands of certain interest groups Various aspects of our daily lives are for instance already implicitly guided by recommender systems the purpose and impact of which can be rather opaque 
For this reason research efforts across disciplines are directed at scrutinizing and rendering such systems more transparent BIBREF28 Such scrutiny is particularly pressing in the context of interventions on online communication platforms which have already been argued to enforce affective communication styles that feed rather than resolve conflict The objectives behind any facilitation device should therefore be made maximally transparent and potential biases should be fully acknowledged at every level from data ingest to the dissemination of results BIBREF29 More concretely the endeavour of constructing opinion observatories and facilitators foregrounds matters of openness of data and tools security ensuring data quality and representative sampling accounting for evolving data legislation and policy building communities and trust and envisioning beneficial implications By documenting the development process for a potential facilitation device the present paper aims to contribute to these ongoing investigations and debates Furthermore every effort has been made to protect the identities of the commenters involved In the words of media and technology visionary Jaron Lanier developers and computational social scientists entering this space should remain fundamentally aware of the fact that digital information is really just people in disguise BIBREF30 With these reservations in mind the proposed approach can be situated among ongoing efforts that lead from debate observation to facilitation One such pathway for instance involves the construction of filters to detect hate speech misinformation and other forms of expression that might render debates toxic BIBREF31 BIBREF32 Combined with community outreach languagebased filtering and detection tools have proven to raise awareness among social media users about the nature and potential implications of their online contributions BIBREF33 Similarly advances can be expected from approaches that aim to extend the scope of analysis beyond descriptions of a present debate situation in order to model how a debate might evolve over time and how intentions of the participants could be included in such an analysis Progress in any of these areas hinges on a further integration of realworld data in the modelling process as well as a further sociotechnical and mediatheoretical investigation of how activity on social media platforms and technologies correlate to realworld conflicts The remainder of this section therefore ventures to explore how conceptual argument communication models for polarization and alignment BIBREF34 might be reconciled with realworld data and how such models might inform debate facilitation efforts As discussed in previous sections news websites like TheGuardiancom establish a communicative settings in which agents users commenters exchange arguments about different issues or topics For those seeking to establish a healthy debate it could thus be of interest to know how different users relate to each other in terms of their beliefs about a certain issue or topic in this case climate change Which beliefs are for instance shared by users and which ones are not In other words can we map patterns of alignment or polarization among users Figure FIGREF15 ventures to demonstrate how representations of opinion landscapes generated using the methods outlined above can be enriched with user information to answer such questions Specifically the graph represents the beliefs of two among the most active commenters in the corpus The opinions of 
each user are marked using a colour coding scheme red nodes represent the beliefs of the first user blue nodes represent the beliefs of the second user Nodes with a green colour represent beliefs that are shared by both users Taking into account again the factors of aggregation that were discussed in the previous section Figure FIGREF15 supports some preliminary observations about the relationship between the two users in terms of their beliefs Generally given the fact that the graph concerns the two most active commenters on the website it can be seen that the rendered opinion landscape is quite extensive It is also clear that the belief systems of both users are not unrelated as nodes of all colours can be found distributed throughout the graph This is especially the case for the righthand top cluster and righthand bottom cluster of the graph where green red and blue nodes are mixed Since both users are discussing on articles on climate change a degree of affinity between opinions or beliefs is to be expected Upon closer examination a number of disparities between the belief systems of the two commenters can be detected Considering the lefthand top cluster and center of the graph it becomes clear that exclusively the red commenter is using a selection of terms related to the economical and sociopolitical realm eg people american nation government and industry eg fuel industry car etc The blue commenter on the other hand exclusively engages in using a range of terms that could be deemed more technical and scientific in nature eg feedback property output trend variability etc From the graph it also follows that the blue commenter does not enter into the red commenters social segments of the graph as frequently as the red commenter enters the more scientificallyoriented clusters of the graph although in the latter cases the red commenter does not use the specific technical terminology of the blue commenter The cluster where both beliefs mingle the most and where overlap can be observed is the top right cluster This overlap is constituted by very general terms eg climate change and science In sum the graph reveals that the commenters beliefs are positioned most closely to each other on the most general aspects of the debate whereas there is less relatedness on the social and more technical aspects of the debate In this regard the depicted situation seemingly evokes currently ongoing debates about the role or responsibilities of the people or individuals versus that of experts when it comes to climate change BIBREF35 BIBREF36 BIBREF37 What forms of debate facilitation then could be based on these observations And what kind of collective effects can be expected As follows from the above beliefs expressed by the two commenters shown here which are selected based on their active participation rather than actual engagement or dialogue with one another are to some extent complementary as the blue commenter who displays a scientificallyoriented system of beliefs does not readily engage with the social topics discussed by the red commenter As such the overall opinion landscape of the climate change could potentially be enriched with novel perspectives if the blue commenter was invited to engage in a debate about such topics as industry and government Similarly one could explore the possibility of providing explanatory tools or additional references on occasions where the debate takes a more technical turn However argumentbased models of collective attitude formation BIBREF38 BIBREF34 also tell us to 
be cautious about such potential interventions Following the theory underlying these models different opinion groups prevailing during different periods of a debate will activate different argumentative associations Facilitating exchange between users with complementary arguments supporting similar opinions may enforce biased argument pools BIBREF39 and lead to increasing polarization at the collective level In the example considered here the two commenters agree on the general topic but the analysis suggests that they might have different opinions about the adequate direction of specific climate change action A more finegrained automatic detection of cognitive and evaluative associations between arguments and opinions is needed for a reliable use of models to predict what would come out of facilitating exchange between two specific users In this regard computational approaches to the linguistic analysis of texts such as semantic frame extraction offer productive opportunities for empirically modelling opinion dynamics Extraction of causation frames allows one to disentangle causeeffect relations between semantic units which provides a productive step towards mapping and measuring structures of cognitive associations These opportunities are to be explored by future work Ongoing transitions from a printbased media ecology to online news and discussion platforms have put traditional forms of news production and consumption at stake Many challenges related to how information is currently produced and consumed come to a head in news website comment sections which harbour the potential of providing new insights into how cultural conflicts emerge and evolve On the basis of an observatory for analyzing climate changerelated comments from TheGuardiancom this article has critically examined possibilities and limitations of the machineassisted exploration and possible facilitation of online opinion dynamics and debates Beyond technical and modelling pathways this examination brings into view broader methodological and epistemological aspects of the use of digital methods to capture and study the flow of online information and opinions Notably the proposed approaches lift questions of computational analysis and interpretation that can be tied to an overarching tension between distant and close reading BIBREF40 In other words monitoring online opinion dynamics means embracing the challenges and associated tradeoffs that come with investigating large quantities of information through computational textanalytical means but doing this in such a way that nuance and meaning are not lost in the process Establishing productive crossovers between the level of opinions mined at scale for instance through the lens of causation frames and the detailed closer looks at specific conversations interactions and contexts depends on a series of preliminaries One of these is the continued availability of highquality accessible data As the current online media ecology is recovering from recent privacyrelated scandals eg Cambridge Analytica such data for obvious reasons is not always easy to come by In the same legal and ethical vein reproducibility and transparency of models is crucial to the further development of analytical tools and methods As the experiments discussed in this paper have revealed a key factor in this undertaking are human faculties of interpretation Just like the encoding schemes introduced by Axelrod and others before the widespread use of computational methods presentday pipelines and tools 
foreground the role of human agents as the primary source of meaning attribution This project has received funding from the European Unions Horizon 2020 research and innovation programme under grant agreement No 732942 Opinion Dynamics and Cultural Conflict in European Spaces wwwOdycceuseu </s>
<s>[INST] "Hinglish"Language -- Modeling a Messy Code-Mixed Language [/INST] Hinglish is a linguistic blend of Hindi a very widely spoken language in India and English an associate language of urban areas and is spoken by upwards of 350 million people in India While the name is based on the Hindi language it does not refer exclusively to Hindi but is used in India with English words blending with Punjabi Gujarati Marathi and Hindi Sometimes though rarely Hinglish is used to refer to Hindi written in English script and mixing with English words or phrases This makes analyzing the language very interesting Its rampant usage in social media like Twitter Facebook online blogs and reviews has also led to its usage in delivering hate and abuse on these platforms We aim to find such content in social media focusing on tweets Hypothetically if we can classify such tweets we might be able to detect them and isolate them for further analysis before they reach the public This would be a great application of AI to the social cause and thus is motivating An example of a simple non offensive message written in Hinglish could be Why do you waste your time with redacted content Aapna ghar sambhalta nahi redacted content Chale dusro ko basane The second part of the above sentence is written in Hindi while the first part is in English The second part calls on a person to bring order to his or her home before trying to settle others From the modeling perspective there are a couple of challenges introduced by the language and the labelled dataset Generally Hinglish follows a largely fuzzy set of rules which evolve and depend upon the users preference It doesnt have any formal definitions and thus the rules of usage are ambiguous Thus when used by different users the text produced may differ Overall the challenges posed by this problem are Geographical variation Depending upon the geography of origination the content may be highly influenced by the underlying region Language and phonetics variation Based on a census in 2001 India has 122 major languages and 1599 other languages The use of Hindi and English in a code switched setting is highly influenced by these languages No grammar rules Hinglish has no fixed set of grammar rules The rules are inspired from both Hindi and English and when mixed with slurs and slang produce large variation Spelling variation There is no agreement on the spellings of the words which are mixed with English For example to express love a code mixed spelling especially when used on social platforms might be pyaar pyar or pyr Dataset Based on earlier work the only available labelled dataset had 3189 rows of text messages with an average length of 116 words and a range of 1 to 1295 Prior work addresses this concern by using Transfer Learning on an architecture learnt on about 14500 messages with an accuracy of 83.90 We addressed this concern using data augmentation techniques applied on text data Mathur et al in their paper for detecting offensive tweets proposed a Ternary TransCNN model where they train a model architecture comprising 3 layers of Convolution 1D having filter sizes of 15 12 and 10 and kernel size of 3 followed by 2 dense fully connected layers of sizes 64 and 3 The first dense FC layer has ReLU activation while the last Dense layer has Softmax activation They were able to train this network on a parallel English dataset provided by Davidson et al The authors were able to achieve an Accuracy of 83.9 Precision of 80.2 and Recall of 69.8 The approach looked promising given that the dataset was merely 3189 sentences divided into three categories and thus we replicated the experiment but failed to replicate the results The results were poorer than what the original authors achieved But most of the model hyperparameter choices were inspired from this work
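A compact Keras sketch of this Ternary Trans-CNN architecture, as we understand it from the description above, is given below. The embedding size, vocabulary size, sequence length, the activations on the convolutional layers and the pooling step before the dense layers are not specified in that description and are placeholder assumptions on our part.

```python
from tensorflow.keras import layers, models

def build_trans_cnn(vocab_size=20000, embed_dim=100, max_len=200):
    """Conv1D stack (filters 15, 12, 10; kernel size 3) + Dense(64, relu) + Dense(3, softmax)."""
    model = models.Sequential([
        layers.Input(shape=(max_len,)),
        layers.Embedding(vocab_size, embed_dim),
        layers.Conv1D(15, 3, activation="relu"),   # conv activations not stated; relu assumed
        layers.Conv1D(12, 3, activation="relu"),
        layers.Conv1D(10, 3, activation="relu"),
        layers.GlobalMaxPooling1D(),               # pooling step is our assumption
        layers.Dense(64, activation="relu"),
        layers.Dense(3, activation="softmax"),     # hate / offensive / neither
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model

build_trans_cnn().summary()
```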
In another localized setting of the Vietnamese language Nguyen et al in 2017 proposed a Hybrid multichannel CNN and LSTM model where they build feature maps for Vietnamese using a CNN to capture shortterm dependencies and an LSTM to capture long term dependencies and concatenate both these feature sets to learn a unified set of features on the messages These concatenated feature vectors are then sent to a few fully connected layers They achieved an accuracy rate of 87.3 with this architecture We used the dataset HEOT obtained from one of the past studies done by Mathur et al where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in the Indian subcontinent A labelled dataset of corresponding English tweets was also obtained from a study conducted by Davidson et al This dataset was important to employ Transfer Learning for our task since the labelled dataset was very small Basic summary and examples of the data from the dataset are below The obtained data set had many challenges and thus a data preparation task was employed to clean the data and make it ready for the deep learning pipeline The challenges and processes that were applied are stated below Messy text messages The tweets had urls punctuation username mentions hashtags emoticons numbers and lots of special characters These were all cleaned up in a preprocessing cycle to clean the data Stop words A stop words corpus obtained from NLTK was used to eliminate most unproductive words which provide little information about individual tweets Transliteration Following the above two processes we translated Hinglish tweets into English words using a two phase process Transliteration In phase I we used translation APIs provided by Google translation services and exposed via an SDK to transliterate the Hinglish messages to English messages Translation After transliteration words that were specific to Hinglish were translated to English using a HinglishEnglish dictionary By doing this we converted the Hinglish message into an assortment of isolated words being presented in the message in a sequence that can also be represented using a word to vector representation Data augmentation Given the data set was very small with a high degree of imbalance in the labelled messages for three different classes we employed a data augmentation technique to boost the learning of the deep network The following techniques from the paper by Jason et al were utilized in this setting and really helped during the training phase These techniques were not used in previous studies The techniques were Synonym Replacement SR Randomly choose n words from the sentence that are not stop words Replace each of these words with one of its synonyms chosen at random Random Insertion RI Find a random synonym of a random word in the sentence that is not a stop word Insert that synonym into a random position in the sentence Do this n times Random Swap RS Randomly choose two words in the sentence and swap their positions Do this n times Random Deletion RD For each word in the sentence randomly remove it with probability p
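As a minimal sketch, the two simplest of these operations, random swap and random deletion, can be written as follows; synonym replacement and random insertion additionally require a synonym resource such as WordNet and are omitted here, and the values of n and p below are illustrative, not the ones used in training.

```python
import random

def random_swap(words, n=1):
    """Swap the positions of two randomly chosen words, n times."""
    words = words[:]
    for _ in range(n):
        if len(words) < 2:
            break
        i, j = random.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return words

def random_deletion(words, p=0.1):
    """Drop each word independently with probability p (always keep at least one word)."""
    kept = [w for w in words if random.random() > p]
    return kept if kept else [random.choice(words)]

tweet = "this is just an example sentence used to illustrate the augmentation".split()
print(random_swap(tweet, n=2))
print(random_deletion(tweet, p=0.2))
```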
representations of the processed tweets The pretrained embedding dimension were one of the hyperparamaters for model Further more we introduced another bit flag hyperparameter that determined if to freeze these learnt embedding Traintest split The labelled dataset that was available for this task was very limited in number of examples and thus as noted above few data augmentation techniques were applied to boost the learning of the network Before applying augmentation a traintest split of 7822 was done from the original cleansed data set Thus 700 tweetsmessages were held out for testing All model evaluation were done in on the test set that got generated by this process The results presented in this report are based on the performance of the model on the test set The training set of 2489 messages were however sent to an offline pipeline for augmenting the data The resulting training dataset was thus 7934 messages the final distribution of messages for training and test was thus below We tested the performance of various model architectures by running our experiment over 100 times on a CPU based compute which later as migrated to GPU based compute to overcome the slow learning progress Our universal metric for minimizing was the validation loss and we employed various operational techniques for optimizing on the learning process These processes and its implementation details will be discussed later but they were learning rate decay early stopping model checkpointing and reducing learning rate on plateau For the loss function we chose categorical cross entropy loss in finding the most optimal weightsparameters of the model Formally this loss function for the model is defined as below The double sum is over the number of observations and the categories respectively While the model probability is the probability that the observation i belongs to category c Among the model architectures we experimented with and without data augmentation were Fully Connected dense networks Model hyperparameters were inspired from the previous work done by Vo et al and Mathur et al This was also used as a baseline model but we did not get appreciable performance on such architecture due to FC networks not being able to capture local and long term dependencies Convolution based architectures Architecture and hyperparameter choices were chosen from the past study Deon on the subject We were able to boost the performance as compared to only FC based network but we noticed better performance from architectures that are suitable to sequences such as text messages or any timeseries data Sequence models We used SimpleRNN LSTM GRU Bidirectional LSTM model architecture to capture long term dependencies of the messages in determining the class the message or the tweet belonged to Based on all the experiments we conducted below model had best performance related to metrics Recall rate F1 score and Overall accuracy Choice of model parameters were in the above models were inspired from previous work done but then were tuned to the best performance of the Test dataset Following parameters were considered for tuning Learning rate Based on grid search the best performance was achieved when learning rate was set to 001 This value was arrived by a grid search on lr parameter Number of Bidirectional LSTM units A set of 32 64 128 hidden activation units were considered for tuning the model 128 was a choice made by Vo et al in modeling for Vietnamese language but with our experiments and with a small dataset to avoid overfitting to 
train dataset a smaller unit sizes were considered Embedding dimension 50 100 and 200 dimension word representation from Glove word embedding were considered and the best results were obtained with 100d representation consistent with choices made in the previous work Transfer learning on Embedding Another bit flag for training the embedding on the train data or freezing the embedding from Glove was used It was determined that set of pretrained weights from Glove was best when it was fine tuned with Hinglish data It provides evidence that a separate word or sentence level embedding when learnt for Hinglish text analysis will be very useful Number of dense FC layers Maximum length of the sequence to be considered The max length of tweetsmessage in the dataset was 1265 while average was 116 We determined that choosing 200 resulted in the best performance During our experimentation it was evident that this is a hard problem especially detecting the hate speech text in a code mixed language The best recall rate of 77 for hate speech was obtained by a Bidirectional LSTM with 32 units with a recurrent drop out rate of 02 Precision wise GRU type of RNN sequence model faired better than other kinds for hate speech detection On the other hand for detecting offensive and non offensive tweets fairly satisfactory results were obtained For offensive tweets 92 precision was and recall rate of 88 was obtained with GRU versus BiLSTM based models Comparatively Recall of 85 and precision of 76 was obtained by again GRU and BiLSTM based models as shown and marked in the results The results of the experiments are encouraging on detective offensive vs non offensive tweets and messages written in Hinglish in social media The utilization of data augmentation technique in this classification task was one of the vital contributions which led us to surpass results obtained by previous state of the art Hybrid CNNLSTM based models However the results of the model for predicting hateful tweets on the contrary brings forth some shortcomings of the model The biggest shortcoming on the model based on error analysis indicates less than generalized examples presented by the dataset We also note that the embedding learnt from the Hinglish data set may be lacking and require extensive training to have competent word representations of Hinglish text Given this learnings we identify that creating word embeddings on much larger Hinglish corpora may have significant results We also hypothesize that considering alternate methods than translation and transliteration may prove beneficial 1 Mathur Puneet and Sawhney Ramit and Ayyar Meghna and Shah Rajiv Did you offend me classification of offensive tweets in hinglish language Proceedings of the 2nd Workshop on Abusive Language Online ALW2 2 Mathur Puneet and Shah Rajiv and Sawhney Ramit and Mahata Debanjan Detecting offensive tweets in hindienglish codeswitched language Proceedings of the Sixth International Workshop on Natural Language Processing for Social Media 3 Vo QuanHoang and Nguyen HuyTien and Le Bac and Nguyen MinhLe Multichannel LSTMCNN model for Vietnamese sentiment analysis 2017 9th international conference on knowledge and systems engineering KSE 4 Hochreiter Sepp and Schmidhuber Jrgen Long shortterm memory Neural computation 1997 5 Sinha R Mahesh K and Thakur Anil Multichannel LSTMCNN model for Vietnamese sentiment analysis 2017 9th international conference on knowledge and systems engineering KSE 6 Pennington Jeffrey and Socher Richard and Manning Christopher Glove 
Global vectors for word representation Proceedings of the 2014 conference on empirical methods in natural language processing EMNLP 7 Zhang Lei and Wang Shuai and Liu Bing Deep learning for sentiment analysis A survey Wiley Interdisciplinary Reviews Data Mining and Knowledge Discovery 8 Caruana Rich and Lawrence Steve and Giles C Lee Overfitting in neural nets Backpropagation conjugate gradient and early stopping Advances in neural information processing systems 9 Beale Mark Hudson and Hagan Martin T and Demuth Howard B Neural network toolbox users guide The MathWorks Incs 10 Chollet François and others Keras The python deep learning library Astrophysics Source Code Library 11 Wei Jason and Zou Kai EDA Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing EMNLPIJCNLP </s>
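To make the best performing configuration concrete, the following is a minimal Keras sketch of the setup reported above: fine tuned GloVe 100d embeddings, maximum sequence length 200, a Bidirectional LSTM with 32 units and recurrent dropout 0.2, a three way softmax trained with categorical cross entropy at learning rate 0.01, with early stopping, checkpointing and learning rate reduction on plateau. The vocabulary size, dense layer width and callback settings are illustrative assumptions rather than reported values, and the sketch is not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the best performing setup above.
# VOCAB_SIZE, DENSE_UNITS and the callback settings are assumptions.
from tensorflow.keras import Sequential, initializers
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import (EarlyStopping, ModelCheckpoint,
                                         ReduceLROnPlateau)

VOCAB_SIZE, EMBED_DIM, MAX_LEN, DENSE_UNITS = 20000, 100, 200, 64

def build_model(glove_matrix):
    # glove_matrix: [VOCAB_SIZE, EMBED_DIM] array of pretrained GloVe vectors
    model = Sequential([
        Embedding(VOCAB_SIZE, EMBED_DIM,
                  embeddings_initializer=initializers.Constant(glove_matrix),
                  trainable=True),                       # fine-tune the embeddings
        Bidirectional(LSTM(32, recurrent_dropout=0.2)),  # best performing size
        Dense(DENSE_UNITS, activation="relu"),
        Dense(3, activation="softmax"),  # hate / offensive / non-offensive
    ])
    model.compile(optimizer=Adam(learning_rate=0.01),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

callbacks = [
    EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True),
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=2),
    ModelCheckpoint("best_model.h5", monitor="val_loss", save_best_only=True),
]
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=50, batch_size=32, callbacks=callbacks)
```

Inputs would be padded or truncated to MAX_LEN token id sequences, with labels one hot encoded over the three classes.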
<s>[INST] How Language-Neutral is Multilingual BERT? [/INST] Multilingual BERT mBERT BIBREF0 is gaining popularity as a contextual representation for various multilingual tasks such as dependency parsing BIBREF1 BIBREF2 crosslingual natural language inference XNLI or namedentity recognition NER BIBREF3 BIBREF4 BIBREF5 BIBREF3 present an exploratory paper showing that mBERT can be used crosslingually for zeroshot transfer in morphological and syntactic tasks at least for typologically similar languages They also study an interesting semantic task sentenceretrieval with promising initial results Their work leaves many open questions in terms of how good the crosslingual mBERT representation is for semantics motivating our work In this paper we directly assess the semantic crosslingual properties of mBERT To avoid methodological issues with zeroshot transfer possible language overfitting hyperparameter tuning we selected tasks that only involve a direct comparison of the representations crosslingual sentence retrieval word alignment and machine translation quality estimation MT QE Additionally we explore how the language is represented in the embeddings by training language identification classifiers and assessing how the representation similarity corresponds to phylogenetic language families Our results show that the mBERT representations even after languageagnostic finetuning are not very languageneutral However the identity of the language can be approximated as a constant shift in the representation space An even higher languageneutrality can still be achieved by a linear projection fitted on a small amount of parallel data Finally we present attempts to strengthen the languageneutral component via finetuning first for multilingual syntactic and morphological analysis second towards language identity removal via a adversarial classifier Since the publication of mBERT BIBREF0 many positive experimental results were published BIBREF2 reached impressive results in zeroshot dependency parsing However the representation used for the parser was a bilingual projection of the contextual embeddings based on wordalignment trained on parallel data BIBREF3 recently examined the crosslingual properties of mBERT on zeroshot NER and partofspeech POS tagging but the success of zeroshot transfer strongly depends on how typologically similar the languages are Similarly BIBREF4 trained good multilingual models for POS tagging NER and XNLI but struggled to achieve good results in the zeroshot setup BIBREF3 assessed mBERT on crosslingual sentence retrieval between three language pairs They observed that if they subtract the average difference between the embeddings from the target language representation the retrieval accuracy significantly increases We systematically study this idea in the later sections Many experiments show BIBREF4 BIBREF5 BIBREF1 that downstream task models can extract relevant features from the multilingual representations But these results do not directly show languageneutrality ie to what extent are similar phenomena are represented similarly across languages The models can obtain the taskspecific information based on the knowledge of the language which as we show later can be easily identified Our choice of evaluation tasks eliminates this risk by directly comparing the representations Limited success in zeroshot setups and the need for explicit bilingual projection in order to work well BIBREF3 BIBREF4 BIBREF6 also shows limited language neutrality of mBERT Following BIBREF3 we 
hypothesize that a sentence representation in mBERT is composed of a languagespecific component which identifies the language of the sentence and a languageneutral component which captures the meaning of the sentence in a languageindependent way We assume that the languagespecific component is similar across all sentences in the language We thus try to remove the languagespecific information from the representations by centering the representations of sentences in each language so that their average lies at the origin of the vector space We do this by estimating the language centroid as the mean of the mBERT representations for a set of sentences in that language and subtracting the language centroid from the contextual embeddings We then analyze the semantic properties of both the original and the centered representations using a range of probing tasks For all tasks we test all layers of the model For tasks utilizing a singlevector sentence representation we test both the vector corresponding to the cls token and meanpooled states We employ five probing tasks to evaluate the language neutrality of the representations With a representation that captures all phenomena in a languageneutral way it should be difficult to determine what language the sentence is written in Unlike other tasks language identification does require fitting a classifier We train a linear classifier on top of a sentence representation to try to classify the language of the sentence Experiments with POS tagging BIBREF3 suggest that similar languages tend to get similar representations on average We quantify that observation by measuring how languages tend to cluster by the language families using Vmeasure over hierarchical clustering of the language centeroid BIBREF7 For each sentence in a multiparallel corpus we compute the cosine distance of its representation with representations of all sentences on the parallel side of the corpus and select the sentence with the smallest distance Besides the plain and centered cls and meanpooled representations we evaluate explicit projection into the English space For each language we fit a linear regression projecting the representations into English representation space using a small set of parallel sentences While sentence retrieval could be done with keyword spotting computing bilingual alignment requires resolving detailed correspondence on the word level We find the word alignment as a minimum weighted edge cover of a bipartite graph The graph connects the tokens of the sentences in the two languages and edges between them are weighted with the cosine distance of the token representation Tokens that get split into multiple subwords are represented using the average of the embeddings of the subwords Note that this algorithm is invariant to representation centering which would only change the edge weights by a constant offset We evaluate the alignment using the F1 score over both sure and possible alignment links in a manually aligned gold standard MT QE assesses the quality of an MT system output without having access to a reference translation The standard evaluation metric is the correlation with the Humantargeted Translation Error Rate which is the number of edit operations a human translator would need to do to correct the system output This is a more challenging task than the two previous ones because it requires capturing more finegrained differences in meaning We evaluate how cosine distance of the representation of the source sentence and of the MT output reflects the 
translation quality In addition to plain and centered representations we also test a trained bilingual projection and a fully supervised regression trained on training data We use a pretrained mBERT model that was made public with the BERT release The model dimension is 768 hidden layer dimension 3072 selfattention uses 12 heads the model has 12 layers It uses a vocabulary of 120k wordpieces that is shared for all languages To train the language identification classifier for each of the BERT languages we randomly selected 110k sentences of at least 20 characters from Wikipedia and kept 5k for validation and 5k for testing for each language The training data are also used for estimating the language centroids For parallel sentence retrieval we use a multiparallel corpus of test data from the WMT14 evaluation campaign BIBREF8 with 3000 sentences in Czech English French German Hindi and Russian The linear projection experiment uses the WMT14 development data We use manually annotated word alignment datasets to evaluate word alignment between English on one side and Czech 25k sent BIBREF9 Swedish 192 sent BIBREF10 German 508 sent French 447 sent BIBREF11 and Romanian 248 sent BIBREF12 on the other side We compare the results with FastAlign BIBREF13 that was provided with 1M additional parallel sentences from ParaCrawl BIBREF14 in addition to the test data For MT QE we use EnglishGerman data provided for the WMT19 QE Shared Task BIBREF15 consisting of training and test data with source sentences their automatic translations and manual corrections Table TABREF7 shows that centering the sentence representations considerably decreases the accuracy of language identification especially in the case of meanpooled embeddings This indicates that the proposed centering procedure does indeed remove the languagespecific information to a great extent Figure FIGREF9 is a tSNE plot BIBREF16 of the language centroids showing that the similarity of the centroids tends to correspond to the similarity of the languages Table TABREF10 confirms that the hierarchical clustering of the language centroids mostly corresponds to the language families Results in Table TABREF12 reveal that the representation centering dramatically improves the retrieval accuracy showing that it makes the representations more languageneutral However an explicitly learned projection of the representations leads to a much greater improvement reaching a closetoperfect accuracy even though the projection was fitted on relatively small parallel data The accuracy is higher for meanpooled states than for the cls embedding and varies according to the layer of mBERT used see Figure FIGREF13 Table TABREF15 shows that wordalignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even when the latter was provided with a large parallel corpus This suggests that wordlevel semantics are well captured by mBERT contextual embeddings For this task learning an explicit projection had a negligible effect on the performance Qualitative results of MT QE are tabulated in Table TABREF18 Unlike sentence retrieval QE is more sensitive to subtle differences between sentences Measuring the distance of the noncentered sentence vectors does not correlate with translation quality at all Centering or explicit projection only leads to a mild correlation much lower than a supervisedly trained regression and even better performance is possible BIBREF15 The results show that the linear projection between the representations only captures a rough semantic
correspondence which does not seem to be sufficient for QE where the most indicative feature appears to be sentence complexity We also considered model finetuning towards stronger language neutrality We evaluate two finetuned versions of mBERT UDify tuned for a multilingual dependency parser and lngfree tuned to jettison the languagespecific information from the representations The UDify model BIBREF1 uses mBERT to train a single model for dependency parsing and morphological analysis of 75 languages During the parser training mBERT is finetuned which improves the parser accuracy Results on zeroshot parsing suggest that the finetuning leads to more crosslingual representations with respect to morphology and syntax However our analyses show that finetuning mBERT for multilingual dependency parsing does not remove the language identity information from the representations and actually makes the representations less semantically crosslingual In this experiment we try to make the representations more languageneutral by removing the language identity from the model using an adversarial approach We continue training mBERT in a multitask learning setup with the masked LM objective with the same sampling procedure BIBREF0 jointly with adversarial language ID classifiers BIBREF17 For each layer we train one classifier for the cls token and one for the meanpooled hidden states with the gradient reversal layer BIBREF18 between mBERT and the classifier The results reveal that the adversarial removal of language information succeeds in dramatically decreasing the accuracy of the language identification classifier the effect is strongest in deeper layers for which the standard mBERT tend to perform better see Figure FIGREF22 However other tasksare not affected by the adversarial finetuning Using a set of semantically oriented tasks that require explicit semantic crosslingual representations we showed that mBERT contextual embeddings do not represent similar semantic phenomena similarly and therefore they are not directly usable for zeroshot crosslingual tasks Contextual embeddings of mBERT capture similarities between languages and cluster the languages by their families Neither crosslingual finetuning nor adversarial language identity removal breaks this property A part of language information is encoded by the position in the embedding space thus a certain degree of crosslinguality can be achieved by centering the representations for each language Exploiting this property allows a good crosslingual sentence retrieval performance and bilingual word alignment which is invariant to the shift A good crosslingual representation can be achieved by fitting a supervised projection on a small parallel corpus </s>
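As an illustration of the centering and retrieval protocol described above, the following is a minimal numpy sketch. It assumes that mean pooled mBERT sentence vectors for both sides of a multiparallel corpus have already been computed (hypothetical arrays `src` and `tgt`, where row i of one side is the translation of row i of the other); it is not the authors' released code.

```python
# Minimal sketch of language centering and cross-lingual sentence retrieval.
# src, tgt: precomputed mean-pooled mBERT sentence vectors for the two sides
# of a multi-parallel corpus, shape [num_sentences, hidden_size].
import numpy as np

def center(reps):
    # Subtract the language centroid (mean vector) so that the average
    # representation of the language lies at the origin of the space.
    return reps - reps.mean(axis=0, keepdims=True)

def retrieval_accuracy(src, tgt):
    # For every source sentence pick the target sentence with the smallest
    # cosine distance and check whether it is the gold parallel sentence.
    src_n = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt_n = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    sims = src_n @ tgt_n.T                       # pairwise cosine similarities
    predicted = sims.argmax(axis=1)
    return (predicted == np.arange(len(src))).mean()

# Comparing plain and centered representations:
# acc_plain    = retrieval_accuracy(src, tgt)
# acc_centered = retrieval_accuracy(center(src), center(tgt))
```

The explicit projection variant would additionally fit a linear regression from the target-language vectors to the English vectors on a small parallel set before running the same retrieval loop.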
<s>[INST] CAiRE: An End-to-End Empathetic Chatbot [/INST] Empathetic chatbots are conversational agents that can understand user emotions and respond appropriately Incorporating empathy into the dialogue system is essential to achieve better humanrobot interaction because naturally humans express and perceive emotion in natural language to increase their sense of social bonding In the early development stage of such conversational systems most of the efforts were put into developing handcrafted rules of engagement Recently a modularized empathetic dialogue system XiaoIce BIBREF0 achieved an impressive number of conversational turns per session which was even higher than average conversations between humans Despite the promising results of XiaoIce this system is designed using a complex architecture with hundreds of independent components such as Natural Language Understanding and Response Generation modules using a tremendous amount of labeled data for training each of them In contrast to such modularized dialogue system endtoend systems learn all components as a single model in a fully datadriven manner and mitigate the lack of labeled data by sharing representations among different modules In this paper we build an endtoend empathetic chatbot by finetuning BIBREF1 the Generative Pretrained Transformer GPT BIBREF2 on the PersonaChat dataset BIBREF3 and the EmpatheticDialogue dataset BIBREF4 We establish a webbased user interface which allows multiple users to asynchronously chat with CAiRE online CAiRE can also collect user feedback and continuously improve its response quality and discard undesirable generation behaviors eg unethical responses via active learning and negative training As shown in Figure FIGREF4 our user interface is based solely on text inputs Users can type anything in the input box and get a response immediately from the server A report button is added at the bottom to allow users to report unethical dialogues which will then be marked and saved in our backend server separately To facilitate the need for teaching our chatbot how to respond properly we add an edit button next to the response When the user clicks it a new input box will appear and the user can type in the appropriate response they think the chatbot should have replied with Due to the high demand for GPU computations during response generation the computation cost needs to be well distributed across different GPUs to support multiple users We adopt several approaches to maximize the utility of GPUs without crashing the system Firstly we set up two independent processes in each GTX 1080Ti where we found the highest GPU utilities to be around 90 with both processes working stably Secondly we employ a loadbalancing module to distribute the requests to idle processes based on their working loads During a stress testing we simulated users sending requests every 2 seconds and using 8 GPUs we were able to support more than 50 concurrent requests We apply the Generative Pretrained Transformer GPT BIBREF2 as our pretrained language model GPT is a multilayer Transformer decoder with a causal selfattention which is pretrained unsupervised on the BooksCorpus dataset BooksCorpus dataset contains over 7000 unique unpublished books from a variety of genres Pretraining on such large contiguous text corpus enables the model to capture longrange dialogue context information Furthermore as existing EmpatheticDialogue dataset BIBREF4 is relatively small finetuning only on such dataset will limit the chitchat topic of the 
model Hence we first integrate persona into CAiRE and pretrain the model on PersonaChat BIBREF3 following a previous transferlearning strategy BIBREF1 This pretraining procedure allows CAiRE to have a more consistent persona thus improving the engagement and consistency of the model We refer interested readers to the code repository recently released by HuggingFace Finally in order to optimize empathy in CAiRE we finetune this pretrained model using EmpatheticDialogue dataset to help CAiRE understand users feeling CAiRE was first presented in ACL 2019 keynote talk Loquentes Machinea Technology Applications and Ethics of Conversational Systems and after that we have released the chatbot to the public In one week we received traffic from more than 500 users along with several reports of unethical dialogues According to such feedback CAiRE does not have any sense of ethical value due to the lack of training data informing of inappropriate behavior Thus when users raise some ethically concerning questions CAiRE may respond without considering ethical implications For example a user might ask Would you kill a human and CAiRE could respond yes I want To mitigate this issue we first incorporate ethical values into CAiRE by customizing the persona of it with sentences such as my name is caire i want to help humans to make a better world i am a good friend of humans Then we perform active learning based on the collected userrevised responses We observe that this approach can greatly reduce unethical responses As CAiRE gathers more unethical dialogues and their revisions its performance can be further improved by negative training BIBREF5 and active learning We presented CAiRE an endtoend generative empathetic chatbot that can understand the users feeling and reply appropriately We built a web interface for our model and have made it accessible to multiple users via a weblink By further collecting user feedback and improving our model we can make CAiRE more empathetic in the future which can be a forward step for endtoend dialogue models </s>
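A heavily simplified sketch of the fine tuning objective is shown below, using the HuggingFace transformers classes for the original GPT. The actual system follows the full transfer learning recipe (special tokens, token type embeddings and a next utterance classification head, with the LM loss restricted to the reply tokens), so this example only illustrates the core causal language modeling step on one hypothetical persona / history / reply triple; the learning rate is an assumed value, not one reported above.

```python
# Minimal sketch of one LM fine-tuning step on a dialogue example.
# The persona, history and reply strings are hypothetical; in the real
# recipe the loss would be masked so it only covers the reply tokens.
import torch
from transformers import OpenAIGPTLMHeadModel, OpenAIGPTTokenizer

tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
model = OpenAIGPTLMHeadModel.from_pretrained("openai-gpt")
optimizer = torch.optim.AdamW(model.parameters(), lr=6.25e-5)  # assumed lr

persona = "my name is caire . i want to help humans to make a better world ."
history = "i failed my exam today ."
reply = "i am so sorry to hear that . you will do better next time ."

# Concatenate persona, dialogue history and gold reply into one sequence.
input_ids = tokenizer.encode(persona + " " + history + " " + reply,
                             return_tensors="pt")
outputs = model(input_ids, labels=input_ids)  # shifted LM loss computed inside
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```

Fine tuning first on PersonaChat and then on EmpatheticDialogues amounts to repeating this loop over the two datasets in sequence.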
<s>[INST] Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness? [/INST] Fueled by recent advances in deeplearning and language processing NLP systems are increasingly being used for prediction and decisionmaking in many fields BIBREF0 including sensitive ones such as health commerce and law BIBREF1 Unfortunately these highly flexible and highly effective neural models are also opaque There is therefore a critical need for explaining learningbased models decisions The emerging research topic of interpretability or explainability has grown rapidly in recent years Unfortunately not without growing pains One such pain is the challenge of definingand evaluatingwhat constitutes a quality interpretation Current approaches define interpretation in a rather adhoc manner motivated by practical usecases and applications However this view often fails to distinguish between distinct aspects of the interpretations quality such as readability plausibility and faithfulness BIBREF2 We argue SECREF2 SECREF5 such conflation is harmful and that faithfulness should be defined and evaluated explicitly and independently from plausibility Our main focus is the evaluation of the faithfulness of an explanation Intuitively a faithful interpretation is one that accurately represents the reasoning process behind the models prediction We find this to be a pressing issue in explainability in cases where an explanation is required to be faithful imperfect or misleading evaluation can have disastrous effects While literature in this area may implicitly or explicitly evaluate faithfulness for specific explanation techniques there is no consistent and formal definition of faithfulness We uncover three assumptions that underlie all these attempts By making the assumptions explicit and organizing the literature around them we connect the dots between seemingly distinct evaluation methods and also provide a basis for discussion regarding the desirable properties of faithfulness SECREF6 Finally we observe a trend by which faithfulness is treated as a binary property followed by showing that an interpretation method is not faithful We claim that this is unproductive SECREF7 as the assumptions are nearly impossible to satisfy fully and it is all too easy to disprove the faithfulness of an interpretation method via a counterexample What can be done We argue for a more practical view of faithfulness calling for a graded criteria that measures the extent and likelihood of an interpretation to be faithful in practice SECREF8 While we started to work in this area we pose the exact formalization of these criteria and concrete evaluations methods for them as a central challenge to the community for the coming future There is considerable research effort in attempting to define and categorize the desiderata of a learned systems interpretation most of which revolves around specific usecases BIBREF17 BIBREF15 Two particularly notable criteria each useful for a different purposes are plausibility and faithfulness Plausibility refers to how convincing the interpretation is to humans while faithfulness refers to how accurately it reflects the true reasoning process of the model BIBREF2 BIBREF18 Naturally it is possible to satisfy one of these properties without the other For example consider the case of interpretation via posthoc text generationwhere an additional generator component outputs a textual explanation of the models decision and the generator is learned with supervision of textual 
explanations BIBREF19 BIBREF20 BIBREF21 In this case plausibility is the dominating property while there is no faithfulness guarantee Despite the difference between the two criteria many authors do not clearly make the distinction and sometimes conflate the two Moreoever the majority of works do not explicitly name the criteria under consideration even when they clearly belong to one camp or the other We argue that this conflation is dangerous For example consider the case of recidivism prediction where a judge is exposed to a models prediction and its interpretation and the judge believes the interpretation to reflect the models reasoning process Since the interpretations faithfulness carries legal consequences a plausible but unfaithful interpretation may be the worstcase scenario The lack of explicit claims by research may cause misinformation to potential users of the technology who are not versed in its inner workings Therefore clear distinction between these terms is critical A distinction is often made between two methods of achieving interpretability 1 interpreting existing models via posthoc techniques and 2 designing inherently interpretable models BIBREF29 argues in favor of inherently interpretable models which by design claim to provide more faithful interpretations than posthoc interpretation of blackbox models We warn against taking this argumentation at facevalue a method being inherently interpretable is merely a claim that needs to be verified before it can be trusted Indeed while attention mechanisms have been considered as inherently interpretable BIBREF30 BIBREF31 recent work cast doubt regarding their faithfulness BIBREF32 BIBREF33 BIBREF18 While explanations have many different usecases such as model debugging lawful guarantees or healthcritical guarantees one other possible usecase with particularly prominent evaluation literature is Intelligent User Interfaces IUI via HumanComputer Interaction HCI of automatic models assisting human decisionmakers In this case the goal of the explanation is to increase the degree of trust between the user and the system giving the user more nuance towards whether the systems decision is likely correct or not In the general case the final evaluation metric is the performance of the user at their task BIBREF34 For example BIBREF35 evaluate various explanations of a model in a setting of trivia question answering However in the context of faithfulness we must warn against HCIinspired evaluation as well increased performance in this setting is not indicative of faithfulness rather it is indicative of correlation between the plausibility of the explanations and the models performance To illustrate consider the following fictional case of a nonfaithful explanation system in an HCI evaluation setting the explanation given is a heatmap of the textual input attributing scores to various tokens Assume the system explanations behave in the following way when the output is correct the explanation consists of random content words and when the output is incorrect it consists of random punctuation marks In other words the explanation is more likely to appear plausible when the model is correct while at the same time not reflecting the true decision process of the model The user convinced by the nicerlooking explanations performs better using this system However the explanation consistently claimed random tokens to be highly relevant to the models reasoning process While the system is concretely useful the claims given by the explanation do not 
reflect the models decisions whatsoever by design While the above scenario is extreme this misunderstanding is not entirely unlikely since any degree of correlation between plausibility and model performance will result in increased user performance regardless of any notion of faithfulness We propose the following guidelines for evaluating the faithfulness of explanations These guidelines address common pitfalls and suboptimal practices we observed in the literature Conflating plausability and faithfulness is harmful You should be explicit on which one of them you evaluate and use suitable methodologies for each one Of course the same applies when designing interpretation techniquesbe clear about which properties are being prioritized We note that 1 humans cannot judge if an interpretation is faithful or not if they understood the model interpretation would be unnecessary 2 for similar reasons we cannot obtain supervision for this problem either Therefore human judgement should not be involved in evaluation for faithfulness as human judgement measures plausability We should be able to interpret incorrect model predictions just the same as correct ones Evaluation methods that rely on gold labels are influenced by human priors on what should the model do and again push the evaluation in the direction of plausability Inherent interpretability is a claim until proven otherwise Explanations provided by inherently interpretable models must be held to the same standards as posthoc interpretation methods and be evaluated for faithfulness using the same set of evaluation techniques Endtask user performance in HCI settings is merely indicative of correlation between plausibility and model performance however small this correlation is While important to evaluate the utility of the interpretations for some usecases it is unrelated to faithfulness What does it mean for an interpretation method to be faithful Intuitively we would like the provided interpretation to reflect the true reasoning process of the model when making a decision But what is a reasoning process of a model and how can reasoning processes be compared to each other Lacking a standard definition different works evaluate their methods by introducing tests to measure properties that they believe good interpretations should satisfy Some of these tests measure aspects of faithfulness These adhoc definitions are often unique to each paper and inconsistent with each other making it hard to find commonalities We uncover three assumptions that underlie all these methods enabling us to organize the literature along standardized axes and relate seemingly distinct lines of work Moreover exposing the underlying assumptions enables an informed discussion regarding their validity and merit we leave such a discussion for future work by us or others These assumptions to our knowledge encapsulate the current working definitions of faithfulness used by the research community Two models will make the same predictions if and only if they use the same reasoning process Corollary 11 An interpretation system is unfaithful if it results in different interpretations of models that make the same decisions As demonstrated by a recent example concerning NLP models it can be used for proof by counterexample Theoretically if all possible models which can perfectly mimic the models decisions also provide the same interpretations then they could be deemed faithful Conversely showing that two models provide the same results but different interpretations disprove the 
faithfulness of the method BIBREF18 show how these counterexamples can be derived with adversarial training of models which can mimic the original model yet provide different explanations Corollary 12 An interpretation is unfaithful if it results in different decisions than the model it interprets A more direct application of the Model Assumption is via the notion of fidelity BIBREF15 BIBREF8 For cases in which the explanation is itself a model capable of making decisions eg decision trees or rule lists BIBREF36 fidelity is defined as the degree to which the explanation model can mimic the original models decisions as an accuracy score For cases where the explanation is not a computable model BIBREF37 propose a simple way of mapping explanations to decisions via crowdsourcing by asking humans to simulate the models decision without any access to the model and only access to the input and explanation termed forward simulation This idea is further explored and used in practice by BIBREF38 On similar inputs the model makes similar decisions if and only if its reasoning is similar Corollary 2 An interpretation system is unfaithful if it provides different interpretations for similar inputs and outputs Since the interpretation serves as a proxy for the models reasoning it should satisfy the same constraints In other words interpretations of similar decisions should be similar and interpretations of dissimilar decisions should be dissimilar This assumption is more useful to disprove the faithfulness of an interpretation rather than prove it since a disproof requires finding appropriate cases where the assumption doesnt hold where a proof would require checking a very large satisfactory quantity of examples or even the entire input space One recent discussion in the NLP community BIBREF33 BIBREF18 concerns the use of this underlying assumption for evaluating attention heatmaps as explanations The former attempts to provide different explanations of similar decisions per instance The latter critiques the former and is based more heavily on the model assumption described above Additionally BIBREF39 propose to introduce a constant shift to the input space and evaluate whether the explanation changes significantly as the final decision stays the same BIBREF16 formalize a generalization of this technique under the term interpretability robustness interpretations should be invariant to small perturbations in the input a direct consequence of the prediction assumption BIBREF40 further expand on this notion as consistency of the explanation with respect to the model Unfortunately robustness measures are difficult to apply in NLP settings due to the discrete input Certain parts of the input are more important to the model reasoning than others Moreover the contributions of different parts of the input are independent from each other Corollary 3 Under certain circumstances heatmap interpretations can be faithful This assumption is employed by methods that consider heatmaps eg attention maps over the input as explanations particularly popular in NLP Heatmaps are claims about which parts of the input are more relevant than others to the models decision As such we can design stress tests to verify whether they uphold their claims One method proposed to do so is erasure where the most relevant parts of the inputaccording to the explanationare erased from the input in expectation that the models decision will change BIBREF25 BIBREF42 BIBREF32 Otherwise the least relevant parts of the input may be erased in 
expectation that the models decision will not change BIBREF43 BIBREF44 BIBREF45 propose two measures of comprehensiveness and sufficiency as a formal generalization of erasure as the degree by which the model is influenced by the removal of the highranking features or by inclusion of solely the highranking features The aforementioned assumptions are currently utilized to evaluate faithfulness in a binary manner whether an interpretation is strictly faithful or not Specifically they are most often used to show that a method is not faithful by constructing cases in which the assumptions do not hold for the suggested method In other words there is a clear trend of proof via counterexample for various interpretation methods that they are not globally faithful We claim that this is unproductive as we expect these various methods to consistently result in negative not faithful results continuing the current trend This follows because an interpretation functions as an approximation of the model or decisions true reasoning process so it by definition loses information By the pigeonhole principle there will be inputs with deviation between interpretation and reasoning This is observed in practice in numerous work that show adversarial behavior or pathological behaviours that arise from the deeply nonlinear and highdimensional decision boundaries of current models Furthermore because we lack supervision regarding which models or decisions are indeed mappable to humanreadable concepts we cannot ignore the approximation errors This poses a high bar for explanation methods to fulfill a bar which we estimate will not be overcome soon if at all What should we do then if we desire a system that provides faithful explanations We argue that a way out of this standstill is in a more practical and nuanced methodology for defining and evaluating faithfulness We propose the following challenge to the community We must develop formal definition and evaluation for faithfulness that allows us the freedom to say when a method is sufficiently faithful to be useful in practice We note two possible approaches to this end Across models and tasks The degree as grayscale of faithfulness at the level of specific models and tasks Perhaps some models or tasks allow sufficiently faithful interpretation even if the same is not true for others For example the method may not be faithful for some questionanswering task but faithful for movie review sentiment perhaps based on various syntactic and semantic attributes of those tasks Across input space The degree of faithfulness at the level of subspaces of the input space such as neighborhoods of similar inputs or singular inputs themselves If we are able to say with some degree of confidence whether a specific decisions explanation is faithful to the model even if the interpretation method is not considered universally faithful it can be used with respect to those specific areas or instances only The opinion proposed in this paper is twofold First interpretability evaluation often conflates evaluating faithfulness and plausibility together We should tease apart the two definitions and focus solely on evaluating faithfulness without any supervision or influence of the convincing power of the interpretation Second faithfulness is often evaluated in a binary faithful or not faithful manner and we believe strictly faithful interpretation is a unicorn which will likely never be found We should instead evaluate faithfulness on a more nuanced grayscale that allows interpretations to be 
useful even if they are not globally and definitively faithful We thank Yanai Elazar for welcome input on the presentation and organization of the paper We also thank the reviewers for additional feedback and pointing to relevant literature in HCI and IUI This project has received funding from the Europoean Research Council ERC under the Europoean Unions Horizon 2020 research and innovation programme grant agreement No 802774 iEXTRACT </s>
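As a concrete illustration of the erasure based measures mentioned above, the following sketch computes comprehensiveness and sufficiency for a heatmap explanation. The helper `predict_proba` and the choice of `k` rationale tokens are assumptions made for the example, not part of any cited formulation.

```python
# Minimal sketch of erasure-style faithfulness measures for a token heatmap.
# predict_proba(tokens): assumed helper returning the model's class
#   probabilities for a list of tokens.
# scores: the explanation's saliency score for each token.
# k: number of top-ranked tokens treated as the rationale.
import numpy as np

def comprehensiveness_sufficiency(tokens, scores, predict_proba, k):
    label = int(np.argmax(predict_proba(tokens)))
    p_full = predict_proba(tokens)[label]

    top_k = set(np.argsort(scores)[::-1][:k])           # highest-ranked tokens
    without_top = [t for i, t in enumerate(tokens) if i not in top_k]
    only_top = [t for i, t in enumerate(tokens) if i in top_k]

    # Comprehensiveness: probability drop when the tokens the explanation
    # marks as most relevant are erased (large drop supports the claim).
    comprehensiveness = p_full - predict_proba(without_top)[label]
    # Sufficiency: probability drop when only those tokens are kept
    # (small drop means the selected tokens suffice for the prediction).
    sufficiency = p_full - predict_proba(only_top)[label]
    return comprehensiveness, sufficiency
```

Averaging these quantities over a test set, possibly for several values of k, gives a graded rather than binary picture of how well a heatmap upholds its relevance claims.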
<s>[INST] Interpreting Recurrent and Attention-Based Neural Models: a Case Study on Natural Language Inference [/INST] Deep learning has achieved tremendous success for many NLP tasks However unlike traditional methods that provide optimized weights for human understandable features the behavior of deep learning models is much harder to interpret Due to the high dimensionality of word embeddings and the complex typically recurrent architectures used for textual data it is often unclear how and why a deep learning model reaches its decisions There are a few attempts toward explaininginterpreting deep learningbased models mostly by visualizing the representation of words andor hidden states and their importances via saliency or erasure on shallow tasks like sentiment analysis and POS tagging BIBREF0 BIBREF1 BIBREF2 BIBREF3 In contrast we focus on interpreting the gating and attention signals of the intermediate layers of deep models in the challenging task of Natural Language Inference A key concept in explaining deep models is saliency which determines what is critical for the final decision of a deep model So far saliency has only been used to illustrate the impact of word embeddings In this paper we extend this concept to the intermediate layer of deep models to examine the saliency of attention as well as the LSTM gating signals to understand the behavior of these components and their impact on the final decision We make two main contributions First we introduce new strategies for interpreting the behavior of deep models in their intermediate layers specifically by examining the saliency of the attention and the gating signals Second we provide an extensive analysis of the stateoftheart model for the NLI task and show that our methods reveal interesting insights not available from traditional methods of inspecting attention and word saliency In this paper our focus was on NLI which is a fundamental NLP task that requires both understanding and reasoning Furthermore the stateoftheart NLI models employ complex neural architectures involving key mechanisms such as attention and repeated reading widely seen in successful models for other NLP tasks As such we expect our methods to be potentially useful for other natural understanding tasks as well In NLI BIBREF4 we are given two sentences a premise and a hypothesis the goal is to decide the logical relationship Entailment Neutral or Contradiction between them Many of the top performing NLI models BIBREF5 BIBREF6 BIBREF7 BIBREF8 BIBREF9 BIBREF10 BIBREF11 are variants of the ESIM model BIBREF11 which we choose to analyze in this paper ESIM reads the sentences independently using LSTM at first and then applies attention to aligncontrast the sentences Another round of LSTM reading then produces the final representations which are compared to make the prediction Detailed description of ESIM can be found in the Appendix Using the SNLI BIBREF4 data we train two variants of ESIM with dimensionality 50 and 300 respectively referred to as ESIM50 and ESIM300 in the remainder of the paper In this work we are primarily interested in the internal workings of the NLI model In particular we focus on the attention and the gating signals of LSTM readers and how they contribute to the decisions of the model Attention has been widely used in many NLP tasks BIBREF12 BIBREF13 BIBREF14 and is probably one of the most critical parts that affects the inference decisions Several pieces of prior work in NLI have attempted to visualize the attention layer to provide 
some understanding of their models BIBREF5 BIBREF15 Such visualizations generate a heatmap representing the similarity between the hidden states of the premise and the hypothesis Eq 19 of Appendix Unfortunately the similarities are often the same regardless of the decision Let us consider the following example where the same premise A kid is playing in the garden is paired with three different hypotheses A kid is taking a nap in the garden A kid is having fun in the garden with her family A kid is having fun in the garden Note that the ground truth relationships are Contradiction Neutral and Entailment respectively The first row of Fig 1 shows the visualization of normalized attention for the three cases produced by ESIM50 which makes correct predictions for all of them As we can see from the figure the three attention maps are fairly similar despite the completely different decisions The key issue is that the attention visualization only allows us to see how the model aligns the premise with the hypothesis but does not show how such alignment impacts the decision This prompts us to consider the saliency of attention The concept of saliency was first introduced in vision for visualizing the spatial support on an image for a particular object class BIBREF16 In NLP saliency has been used to study the importance of words toward a final decision BIBREF0 We propose to examine the saliency of attention Specifically given a premisehypothesis pair and the models decision y we consider the similarity between a pair of premise and hypothesis hidden states eij as a variable The score of the decision Sy is thus a function of eij for all i and j The saliency of eij is then defined to be fracpartial Sypartial eij The second row of Fig 1 presents the attention saliency map for the three examples acquired by the same ESIM50 model Interestingly the saliencies are clearly different across the examples each highlighting different parts of the alignment Specifically for h1 we see the alignment between is playing and taking a nap and the alignment of in a garden to have the most prominent saliency toward the decision of Contradiction For h2 the alignment of kid and her family seems to be the most salient for the decision of Neutral Finally for h3 the alignment between is having fun and kid is playing have the strongest impact toward the decision of Entailment From this example we can see that by inspecting the attention saliency we effectively pinpoint which part of the alignments contribute most critically to the final prediction whereas simply visualizing the attention itself reveals little information In the previous examples we study the behavior of the same model on different inputs Now we use the attention saliency to compare the two different ESIM models ESIM50 and ESIM300 Consider two examples with a shared hypothesis of A man ordered a book and premise John ordered a book from amazon Mary ordered a book from amazon Here ESIM50 fails to capture the gender connections of the two different names and predicts Neutral for both inputs whereas ESIM300 correctly predicts Entailment for the first case and Contradiction for the second In the first two columns of Fig 2 column a and b we visualize the attention of the two examples for ESIM50 left and ESIM300 right respectively Although the two models make different predictions their attention maps appear qualitatively similar In contrast columns 34 of Fig 2 column c and d present the attention saliency for the two examples by ESIM50 and ESIM300 respectively We 
see that for both examples ESIM50 primarily focused on the alignment of ordered whereas ESIM300 focused more on the alignment of John and Mary with man It is interesting to note that ESIM300 does not appear to learn significantly different similarity values compared to ESIM50 for the two critical pairs of words John man and Mary man based on the attention map The saliency map however reveals that the two models use these values quite differently with only ESIM300 correctly focusing on them LSTM gating signals determine the flow of information In other words they indicate how LSTM reads the word sequences and how the information from different parts is captured and combined LSTM gating signals are rarely analyzed possibly due to their high dimensionality and complexity In this work we consider both the gating signals and their saliency which is computed as the partial derivative of the score of the final decision with respect to each gating signal Instead of considering individual dimensions of the gating signals we aggregate them to consider their norm both for the signal and for its saliency Note that ESIM models have two LSTM layers the first input LSTM performs the input encoding and the second inference LSTM generates the representation for inference In Fig 3 we plot the normalized signal and saliency norms for different gates input forget output of the Forward input bottom three rows and inference top three rows LSTMs These results are produced by the ESIM50 model for the three examples of Section 31 one for each column From the figure we first note that the saliency tends to be somewhat consistent across different gates within the same LSTM suggesting that we can interpret them jointly to identify parts of the sentence important for the models prediction Comparing across examples we see that the saliency curves show pronounced differences across the examples For instance the saliency pattern of the Neutral example is significantly different from the other two examples and heavily concentrated toward the end of the sentence with her family Note that without this part of the sentence the relationship would have been Entailment The focus evidenced by its strong saliency and strong gating signal on this particular part which presents information not available from the premise explains the models decision of Neutral Comparing the behavior of the input LSTM and the inference LSTM we observe interesting shifts of focus In particular we see that the inference LSTM tends to see much more concentrated saliency over key parts of the sentence whereas the input LSTM sees more spread of saliency For example for the Contradiction example the input LSTM sees high saliency for both taking and in whereas the inference LSTM primarily focuses on nap which is the key word suggesting a Contradiction Note that ESIM uses attention between the input and inference LSTM layers to aligncontrast the sentences hence it makes sense that the inference LSTM is more focused on the critical differences between the sentences This is also observed for the Neutral example as well It is worth noting that while revealing similar general trends the backward LSTM can sometimes focus on different parts of the sentence eg see Fig 11 of Appendix suggesting the forward and backward readings provide complementary understanding of the sentence We propose new visualization and interpretation strategies for neural models to understand how and why they work We demonstrate the effectiveness of the proposed strategies on a complex task 
NLI Our strategies are able to provide interesting insights not achievable by previous explanation techniques Our future work will extend our study to consider other NLP tasks and models with the goal of producing useful insights for further improving these models Model In this section we describe the ESIM model We divide ESIM into three main parts 1 input encoding 2 attention and 3 inference Figure 4 demonstrates a highlevel view of the ESIM framework Let $u = (u_1, \cdots, u_n)$ and $v = (v_1, \cdots, v_m)$ be the given premise with length $n$ and hypothesis with length $m$ respectively where $u_i, v_j \in \mathbb{R}^r$ are word embeddings of an $r$ dimensional vector The goal is to predict a label $y$ that indicates the logical relationship between premise $u$ and hypothesis $v$ Below we briefly explain the aforementioned parts Input Encoding It utilizes a bidirectional LSTM BiLSTM for encoding the given premise and hypothesis using Equations 16 and 17 respectively $\hat{u} = \text{BiLSTM}(u), \ \hat{u} \in \mathbb{R}^{n \times 2d}$ (16) and $\hat{v} = \text{BiLSTM}(v), \ \hat{v} \in \mathbb{R}^{m \times 2d}$ (17) where $\hat{u}$ and $\hat{v}$ are the reading sequences of $u$ and $v$ respectively Attention It employs a soft alignment method to associate the relevant subcomponents between the given premise and hypothesis The energy function of Equation 19 computes the unnormalized attention weights as the similarity of hidden states of the premise and hypothesis $e_{ij} = \hat{u}_i^\top \hat{v}_j$ (19) where $\hat{u}_i$ and $\hat{v}_j$ are the hidden representations of $u_i$ and $v_j$ respectively which are computed earlier in Equations 16 and 17 Next for each word in either premise or hypothesis the relevant semantics in the other sentence is extracted and composed according to $e_{ij}$ Equations 20 and 21 provide formal and specific details of this procedure $\tilde{u}_i = \sum_{j=1}^{m} \frac{\exp(e_{ij})}{\sum_{k=1}^{m} \exp(e_{ik})} \hat{v}_j$ (20) and $\tilde{v}_j = \sum_{i=1}^{n} \frac{\exp(e_{ij})}{\sum_{k=1}^{n} \exp(e_{kj})} \hat{u}_i$ (21) where $\tilde{u}_i$ represents the extracted relevant information of $\hat{v}$ by attending to $\hat{u}_i$ while $\tilde{v}_j$ represents the extracted relevant information of $\hat{u}$ by attending to $\hat{v}_j$ Next it passes the enriched information through a projector layer which produces the final output of the attention stage Equations 22 and 23 formally represent this process $p_i = \text{ReLU}(W_p [\hat{u}_i; \tilde{u}_i; \hat{u}_i - \tilde{u}_i; \hat{u}_i \odot \tilde{u}_i] + b_p)$ (22) and $q_j = \text{ReLU}(W_p [\hat{v}_j; \tilde{v}_j; \hat{v}_j - \tilde{v}_j; \hat{v}_j \odot \tilde{v}_j] + b_p)$ (23) Here $\odot$ stands for elementwise product while $W_p$ and $b_p$ are the trainable weights and biases of the projector layer respectively $p$ and $q$ indicate the output of the attention stage for premise and hypothesis respectively Inference During this phase it uses another BiLSTM to aggregate the two sequences of computed matching vectors $p$ and $q$ from the attention stage Equations 27 and 28 $\hat{p} = \text{BiLSTM}(p)$ (27) and $\hat{q} = \text{BiLSTM}(q)$ (28) where $\hat{p}$ and $\hat{q}$ are the reading sequences of $p$ and $q$ respectively Finally the concatenation of max and average pooling of $\hat{p}$ and $\hat{q}$ is passed through a multilayer perceptron MLP classifier that includes a hidden layer with $\tanh$ activation and a softmax output layer The model is trained in an endtoend manner Attention Study Here we provide more examples on the NLI task which are intended to examine specific behaviors of this model Such examples reveal interesting observations that we can analyze in future work Table 1 shows the list of all examples LSTM Gating Signal Finally Figure 11 depicts the backward LSTM gating signals study </s>
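The attention saliency used throughout the paper can be computed directly with automatic differentiation. The sketch below assumes a hypothetical ESIM like module whose forward pass returns both the class logits and the premise hypothesis energy matrix e of Equation 19; it is an illustration of the quantity being measured, not the authors' code. Gating signal saliency can be obtained analogously by differentiating the decision score with respect to the gate activations and taking the norm per time step.

```python
# Minimal sketch of attention saliency: the partial derivative of the
# decision score S(y) with respect to each attention energy e_ij.
# `model` is an assumed ESIM-like module returning (logits, e) for a
# single premise-hypothesis pair (batch size 1).
import torch

def attention_saliency(model, premise, hypothesis):
    logits, e = model(premise, hypothesis)        # e: [n_premise, m_hypothesis]
    y = logits.argmax(dim=-1).item()              # the model's decision
    score = logits[0, y]                          # decision score S(y)
    saliency = torch.autograd.grad(score, e)[0]   # dS(y) / de_ij
    return saliency.abs()                         # larger = alignment matters more
```

Plotting this matrix as a heatmap gives the attention saliency maps discussed above, which can differ sharply across examples even when the raw attention maps look alike.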
<s>[INST] Exploring Question Understanding and Adaptation in Neural-Network-Based Question Answering [/INST] Enabling computers to understand given documents and answer questions about their content has recently attracted intensive interest including but not limited to the efforts as in BIBREF0 BIBREF1 BIBREF2 BIBREF3 BIBREF4 BIBREF5 Many specific problems such as machine comprehension and question answering often involve modeling such questiondocument pairs The recent availability of relatively large training datasets see Section Related Work for more details has made it more feasible to train and estimate rather complex models in an endtoend fashion for these problems in which a whole model is fit directly with given questionanswer tuples and the resulting model has shown to be rather effective In this paper we take a closer look at modeling questions in such an endtoend neural network framework since we regard question understanding is of importance for such problems We first introduced syntactic information to help encode questions We then viewed and modelled different types of questions and the information shared among them as an adaptation problem and proposed adaptation models for them On the Stanford Question Answering Dataset SQuAD we show that these approaches can help attain better results on our competitive baselines Recent advance on reading comprehension and question answering has been closely associated with the availability of various datasets BIBREF0 released the MCTest data consisting of 500 short fictional opendomain stories and 2000 questions The CNNDaily Mail dataset BIBREF1 contains news articles for close style machine comprehension in which only entities are removed and tested for comprehension Childrens Book Test CBT BIBREF2 leverages named entities common nouns verbs and prepositions to test reading comprehension The Stanford Question Answering Dataset SQuAD BIBREF3 is more recently released dataset which consists of more than 100000 questions for documents taken from Wikipedia across a wide range of topics The questionanswer pairs are annotated through crowdsourcing Answers are spans of text marked in the original documents In this paper we use SQuAD to evaluate our models Many neural network models have been studied on the SQuAD task BIBREF6 proposed match LSTM to associate documents and questions and adapted the socalled pointer Network BIBREF7 to determine the positions of the answer text spans BIBREF8 proposed a dynamic chunk reader to extract and rank a set of answer candidates BIBREF9 focused on word representation and presented a finegrained gating mechanism to dynamically combine wordlevel and characterlevel representations based on the properties of words BIBREF10 proposed a multiperspective context matching MPCM model which matched an encoded document and question from multiple perspectives BIBREF11 proposed a dynamic decoder and socalled highway maxout network to improve the effectiveness of the decoder The bidirectional attention flow BIDAF BIBREF12 used the bidirectional attention to obtain a questionaware context representation In this paper we introduce syntactic information to encode questions with a specific form of recursive neural networks BIBREF13 BIBREF14 BIBREF15 BIBREF16 More specifically we explore a treestructured LSTM BIBREF13 BIBREF14 which extends the linearchain long shortterm memory LSTM BIBREF17 to a recursive structure which has the potential to capture longdistance interactions over the structures Different types of questions are 
often used to seek for different types of information For example a what question could have very different property from that of a why question while they may share information and need to be trained together instead of separately We view this as a adaptation problem to let different types of questions share a basic model but still discriminate them when needed Specifically we are motivated by the ideas ivector BIBREF18 in speech recognition where neural network based adaptation is performed among different groups of speakers and we focused instead on different types of questions here Our baseline model is composed of the following typical components word embedding input encoder alignment aggregation and prediction Below we discuss these components in more details We concatenate embedding at two levels to represent a word the character composition and wordlevel embedding The character composition feeds all characters of a word into a convolutional neural network CNN BIBREF19 to obtain a representation for the word And we use the pretrained 300D GloVe vectors BIBREF20 see the experiment section for details to initialize our wordlevel embedding Each word is therefore represented as the concatenation of the charactercomposition vector and wordlevel embedding This is performed on both questions and documents resulting in two matrices the mathbf Qe in mathbb R Ntimes dw for a question and the mathbf De in mathbb R Mtimes dw for a document where N is the question length number of word tokens M is the document length and dw is the embedding dimensionality The above word representation focuses on representing individual words and an input encoder here employs recurrent neural networks to obtain the representation of a word under its context We use bidirectional GRU BiGRU BIBREF21 for both documents and questions mathbf QcitextBiGRUmathbf Qeiiforall i in 1 dots N
mathbf DcjtextBiGRUmathbf Dejjforall j in 1 dots M Eq 5 A BiGRU runs a forward and backward GRU on a sequence starting from the left and the right end respectively By concatenating the hidden states of these two GRUs for each word we obtain the a representation for a question or document mathbf Qc in mathbb R Ntimes dc for a question and mathbf Dc in mathbb R Mtimes dc for a document Questions and documents interact closely As in most previous work our framework use both soft attention over questions and that over documents to capture the interaction between them More specifically in this softalignment layer we first feed the contextual representation matrix mathbf Qc and mathbf Dc to obtain alignment matrix mathbf U in mathbb R Ntimes M mathbf Uij mathbf Qic cdot mathbf Djcmathrm T forall i in 1 dots N forall j in 1 dots M Eq 7 Each mathbf Uij represents the similarity between a question word mathbf Qic and a document word mathbf Djc Wordlevel Qcode Similar as in BIBREF12 we obtain a wordlevel Qcode Specifically for each document word wj we find which words in the question are relevant to it To this end mathbf ajin mathbb R N is computed with the following equation and used as a soft attention weight mathbf aj softmaxmathbf Uj forall j in 1 dots M Eq 8 With the attention weights computed we obtain the encoding of the question for each document word wj as follows which we call wordlevel Qcode in this paper mathbf Qwmathbf amathrm T cdot mathbf Qc in mathbb R Mtimes dc Eq 9 Questionbased filtering To better explore question understanding we design this questionbased filtering layer As detailed later different question representation can be easily incorporated to this layer in addition to being used as a filter to find key information in the document based on the question This layer is expandable with more complicated question modeling In the basic form of questionbased filtering for each question word wi we find which words in the document are associated Similar to mathbf aj discussed above we can obtain the attention weights on document words for each question word wi mathbf bisoftmaxmathbf Uiin mathbb R M forall i in 1 dots N Eq 10 By pooling mathbf bin mathbb R Ntimes M we can obtain a questionbased filtering weight mathbf bf mathbf bfnormpoolingmathbf b in mathbb R M Eq 11 normmathbf xfracmathbf xsum i xi Eq 12 where the specific pooling function we used include maxpooling and meanpooling Then the document softly filtered based on the corresponding question mathbf Df can be calculated by mathbf Djfmaxbfmaxj mathbf Djc forall j in 1 dots M Eq 13 mathbf Djfmeanbfmeanj mathbf Djc forall j in 1 dots M Eq 14 Through concatenating the document representation mathbf Dc wordlevel Qcode mathbf Qw and questionfiltered document mathbf Df we can finally obtain the alignment layer representation mathbf Imathbf Dc mathbf Qwmathbf Dc circ mathbf Qwmathbf Dc mathbf Qw mathbf Df mathbf bfmax mathbf bfmean in mathbb R M times 6dc2 Eq 16 where circ stands for elementwise multiplication and is simply the vector subtraction After acquiring the local alignment representation key information in document and question has been collected and the aggregation layer is then performed to find answers We use three BiGRU layers to model the process that aggregates local information to make the global decision to find the answer spans We found a residual architecture BIBREF22 as described in Figure 2 is very effective in this aggregation process mathbf I1itextBiGRUmathbf Ii Eq 18 mathbf I2imathbf I1i textBiGRUmathbf 
I1i Eq 19 The SQuAD QA task requires a span of text to answer a question We use a pointer network BIBREF7 to predict the starting and end position of answers as in BIBREF6 Different from their methods we use a twodirectional prediction to obtain the positions For one direction we first predict the starting position of the answer span followed by predicting the end position which is implemented with the following equations PssoftmaxWscdot I3 Eq 23 PesoftmaxWe cdot I3 Wh cdot hs Eq 24 where mathbf I3 is inference layer output mathbf hs is the hidden state of the first step and all mathbf W are trainable matrices We also perform this by predicting the end position first and then the starting position PesoftmaxWecdot I3 Eq 25 PssoftmaxWs cdot I3 Wh cdot he Eq 26 We finally identify the span of an answer with the following equation PspoolingPs Ps Eq 27 PepoolingPe Pe Eq 28 We use the meanpooling here as it is more effective on the development set than the alternatives such as the maxpooling The interplay of syntax and semantics of natural language questions is of interest for question representation We attempt to incorporate syntactic information in questions representation with TreeLSTM BIBREF13 BIBREF14 In general a TreeLSTM could perform semantic composition over given syntactic structures Unlike the chainstructured LSTM BIBREF17 the TreeLSTM captures longdistance interaction on a tree The update of a TreeLSTM node is described at a high level with Equation 31 and the detailed computation is described in Specifically the input of a TreeLSTM node is used to configure four gates the input gate mathbf it output gate mathbf ot and the two forget gates mathbf ftL for the left child input and mathbf ftR for the right The memory cell mathbf ct considers each childs cell vector mathbf ct1L and mathbf ct1R which are gated by the left forget gate mathbf ftL and right forget gate mathbf ftR respectively mathbf ht textTreeLSTMmathbf xt mathbf ht1L mathbf ht1R
$\mathbf{h}_t = \mathbf{o}_t \circ \tanh(\mathbf{c}_t)$
$\mathbf{o}_t = \sigma(\mathbf{W}^{o} \mathbf{x}_t + \mathbf{U}^{oL} \mathbf{h}_{t-1}^{L} + \mathbf{U}^{oR} \mathbf{h}_{t-1}^{R})$
$\mathbf{c}_t = \mathbf{f}_t^{L} \circ \mathbf{c}_{t-1}^{L} + \mathbf{f}_t^{R} \circ \mathbf{c}_{t-1}^{R} + \mathbf{i}_t \circ \mathbf{u}_t$
$\mathbf{f}_t^{L} = \sigma(\mathbf{W}^{f} \mathbf{x}_t + \mathbf{U}^{fLL} \mathbf{h}_{t-1}^{L} + \mathbf{U}^{fLR} \mathbf{h}_{t-1}^{R})$
mathbf ftR sigma mathbf Wf mathbf xt mathbf UfRL mathbf ht1L mathbf UfRR mathbf ht1R mathbf it sigma mathbf Wi mathbf xt mathbf UiL mathbf ht1L mathbf UiR mathbf ht1R mathbf ut tanh mathbf Wc mathbf xt mathbf UcL mathbf ht1L mathbf UcR mathbf ht1R Eq 31 where sigma is the sigmoid function circ is the elementwise multiplication of two vectors and all mathbf W mathbf U are trainable matrices To obtain the parse tree information we use Stanford CoreNLP PCFG Parser BIBREF23 BIBREF24 to produce a binarized constituency parse for each question and build the TreeLSTM based on the parse tree The root node of TreeLSTM is used as the representation for the whole question More specifically we use it as TreeLSTM Qcode mathbf QTLin mathbb R dc by not only simply concatenating it to the alignment layer output but also using it as a question filter just as we discussed in the questionbased filtering section mathbf QTLtextTreeLSTMmathbf Qe in mathbb R dc Eq 32 mathbf bTLnormmathbf QTL cdot mathbf Dcmathrm T in mathbb R M Eq 33 where mathbf Inew is the new output of alignment layer and function repmat copies mathbf QTL for M times to fit with mathbf I Questions by nature are often composed to fulfill different types of information needs For example a when question seeks for different types of information ie temporal information than those for a why question Different types of questions and the corresponding answers could potentially have different distributional regularity The previous models are often trained for all questions without explicitly discriminating different question types however for a target question both the common features shared by all questions and the specific features for a specific type of question are further considered in this paper as they could potentially obey different distributions In this paper we further explicitly model different types of questions in the endtoend training We start from a simple way to first analyze the word frequency of all questions and obtain top10 most frequent question types what how who when which where why be whose and whom in which be stands for the questions beginning with different forms of the word be such as is am and are We explicitly encode questiontype information to be an 11dimensional onehot vector the top10 question types and other question type Each question type is with a trainable embedding vector We call this explicit question type code mathbf ETin mathbb R dET Then the vector for each question type is tuned during training and is added to the system with the following equation mathbf Inewmathbf I repmatmathbf ET Eq 38 As discussed different types of questions and their answers may share common regularity and have separate property at the same time We also view this as an adaptation problem in order to let different types of questions share a basic model but still discriminate them when needed Specifically we borrow ideas from speaker adaptation BIBREF18 in speech recognition where neuralnetworkbased adaptation is performed among different groups of speakers Conceptually we regard a type of questions as a group of acoustically similar speakers Specifically we propose a question discriminative block or simply called a discriminative block Figure 3 below to perform question adaptation The main idea is described below mathbf xprime fmathbf x mathbf barxc mathbf delta x Eq 40 For each input question mathbf x we can decompose it to two parts the cluster it belongie question type and the diverse in the cluster The information of the cluster 
is encoded in a vector mathbf barxc In order to keep calculation differentiable we compute the weight of all the clusters based on the distances of mathbf x and each cluster center vector in stead of just choosing the closest cluster Then the discriminative vector mathbf delta x with regard to these most relevant clusters are computed All this information is combined to obtain the discriminative information In order to keep the full information of input we also copy the input question mathbf x together with the acquired discriminative information to a feedforward layer to obtain a new representation mathbf xprime for the question More specifically the adaptation algorithm contains two steps adapting and updating which is detailed as follows Adapting In the adapting step we first compute the similarity score between an input question vector mathbf xin mathbb R h and each centroid vector of K clusters mathbf barxin mathbb R K times h Each cluster here models a question type Unlike the explicit question type modeling discussed above here we do not specify what question types we are modeling but let the system to learn Specifically we only need to prespecific how many clusters K we are modeling The similarity between an input question and cluster centroid can be used to compute similarity weight mathbf wa wka softmaxcossimmathbf x mathbf barxk alpha forall k in 1 dots K Eq 43 cossimmathbf u mathbf v fracmathbf umathbf vmathbf u cdot mathbf v Eq 44 We set alpha equals 50 to make sure only closest class will have a high weight while maintain differentiable Then we acquire a soft classcenter vector mathbf barxc mathbf barxc sum k wak mathbf barxk in mathbb R h Eq 46 We then compute a discriminative vector mathbf delta x between the input question with regard to the soft classcenter vector mathbf delta x mathbf x mathbf barxc Eq 47 Note that barmathbf xc here models the cluster information and mathbf delta x represents the discriminative information in the cluster By feeding mathbf x barmathbf xc and mathbf delta x into feedforward layer with Relu we obtain mathbf xprime in mathbb R K mathbf xprime Relumathbf W cdot mathbf xbarmathbf xcmathbf delta x Eq 48 With mathbf xprime ready we can apply Discriminative Block to any question code and obtain its adaptation Qcode In this paper we use TreeLSTM Qcode as the input vector mathbf x and obtain TreeLSTM adaptation Qcode mathbf QTLain mathbb R dc Similar to TreeLSTM Qcode mathbf QTL we concatenate mathbf QTLa to alignment output mathbf I and also use it as a question filter mathbf QTLa Relumathbf W cdot mathbf QTLoverlinemathbf QTLcmathbf delta mathbf QTL Eq 49 mathbf bTLanormmathbf QTLa cdot mathbf Dcmathrm T in mathbb R M Eq 50 Updating The updating stage attempts to modify the center vectors of the K clusters in order to fit each cluster to model different types of questions The updating is performed according to the following formula mathbf barxprime k 1beta textwkamathbf barxkbeta textwkamathbf x forall k in 1 dots K Eq 54 In the equation beta is an updating rate used to control the amount of each updating and we set it to 001 When mathbf x is far away from K th cluster center mathbf barxk textwka is close to be value 0 and the k th cluster center mathbf barxk tends not to be updated If mathbf x is instead close to the j th cluster center mathbf barxj mathbf x0 is close to the value 1 and the centroid of the mathbf x1 th cluster mathbf x2 will be updated more aggressively using mathbf x3 We test our models on Stanford Question Answering Dataset 
SQuAD BIBREF3 The SQuAD dataset consists of more than 100000 questions annotated by crowdsourcing workers on a selected set of Wikipedia articles and the answer to each question is a span of text in the Wikipedia articles Training data includes 87599 instances and validation set has 10570 instances The test data is hidden and kept by the organizer The evaluation of SQuAD is Exact Match EM and F1 score We use pretrained 300D Glove 840B vectors BIBREF20 to initialize our word embeddings Outofvocabulary OOV words are initialized randomly with Gaussian samples CharCNN filter length is 135 each is 50 dimensions All vectors including word embedding are updated during training The cluster number K in discriminative block is 100 The Adam method BIBREF25 is used for optimization And the first momentum is set to be 09 and the second 0999 The initial learning rate is 00004 and the batch size is 32 We will half learning rate when meet a bad iteration and the patience is 7 Our early stop evaluation is the EM and F1 score of validation set All hidden states of GRUs and TreeLSTMs are 500 dimensions while wordlevel embedding dw is 300 dimensions We set max length of document to 500 and drop the questiondocument pairs beyond this on training set Explicit questiontype dimension dET is 50 We apply dropout to the Encoder layer and aggregation layer with a dropout rate of 05 Table 1 shows the official leaderboard on SQuAD test set when we submitted our system Our model achieves a 6873 EM score and 7739 F1 score which is ranked among the state of the art single models without model ensembling Table 2 shows the ablation performances of various Qcode on the development set Note that since the testset is hidden from us we can only perform such an analysis on the development set Our baseline model using no Qcode achieved a 6800 and 7736 EM and F1 scores respectively When we added the explicit question type Tcode into the baseline model the performance was improved slightly to 6816EM and 7758F1 We then used TreeLSTM introduce syntactic parses for question representation and understanding replacing simple question type as question understanding Qcode which consistently shows further improvement We further incorporated the soft adaptation When letting the number of hidden question types K to be 20 the performance improves to 68737774 on EM and F1 respectively which corresponds to the results of our model reported in Table 1 Furthermore after submitted our result we have experimented with a large value of K and found that when K100 we can achieve a better performance of 69107838 on the development set Figure UID61 shows the EMF1 scores of different question types while Figure UID62 is the question type amount distribution on the development set In Figure UID61 we can see that the average EMF1 of the when question is highest and those of the why question is the lowest From Figure UID62 we can see the what question is the major class Figure 5 shows the composition of F1 score Take our best model as an example we observed a 7838 F1 score on the whole development set which can be separated into two parts one is where F1 score equals to 100 which means an exact match This part accounts for 6910 of the entire development set And the other part accounts for 3090 of which the average F1 score is 3003 For the latter we can further divide it into two subparts one is where the F1 score equals to 0 which means that predict answer is totally wrong This part occupies 1489 of the total development set The other part accounts for 1601 
of the development set of which average F1 score is 5796 From this analysis we can see that reducing the zero F1 score 1489 is potentially an important direction to further improve the system Closely modelling questions could be of importance for question answering and machine reading In this paper we introduce syntactic information to help encode questions in neural networks We view and model different types of questions and the information shared among them as an adaptation task and proposed adaptation models for them On the Stanford Question Answering Dataset SQuAD we show that these approaches can help attain better results over a competitive baseline </s>
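As an illustration of the question adaptation described above, the following is a minimal PyTorch sketch of the adapting and updating steps of the discriminative block (the similarity weights, soft class-center vector, discriminative vector, ReLU projection, and centroid update). The names x, centers, and W, and the treatment of the centroids as a plain non-gradient tensor, are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def discriminative_block(x, centers, W, alpha=50.0, beta=0.01):
    # x: question code (h,); centers: K cluster centroids (K, h); W: torch.nn.Linear(3*h, out)
    # Adapting: similarity weights between x and each cluster centroid
    sims = F.cosine_similarity(x.unsqueeze(0), centers, dim=-1)      # (K,)
    w = F.softmax(alpha * sims, dim=0)                               # (K,), nearly one-hot for large alpha
    x_bar = w @ centers                                              # soft class-center vector (h,)
    delta = x - x_bar                                                # discriminative vector (h,)
    x_prime = torch.relu(W(torch.cat([x, x_bar, delta], dim=-1)))
    # Updating: move each centroid toward x in proportion to its weight
    with torch.no_grad():
        centers.mul_((1.0 - beta * w).unsqueeze(-1)).add_(beta * w.unsqueeze(-1) * x)
    return x_prime
```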
<s>[INST] SUM-QE: a BERT-based Summary Quality Estimation Model [/INST] Quality Estimation QE is a term used in machine translation MT to refer to methods that measure the quality of automatically translated text without relying on human references BIBREF0 BIBREF1 In this study we address QE for summarization Our proposed model SumQE successfully predicts linguistic qualities of summaries that traditional evaluation metrics fail to capture BIBREF2 BIBREF3 BIBREF4 BIBREF5 SumQE predictions can be used for system development to inform users of the quality of automatically produced summaries and other types of generated text and to select the best among summaries output by multiple systems SumQE relies on the BERT language representation model BIBREF6 We use a pretrained BERT model adding just a taskspecific layer and finetune the entire model on the task of predicting linguistic quality scores manually assigned to summaries The five criteria addressed are given in Figure FIGREF2 We provide a thorough evaluation on three publicly available summarization datasets from NIST shared tasks and compare the performance of our model to a wide variety of baseline methods capturing different aspects of linguistic quality SumQE achieves very high correlations with human ratings showing the ability of BERT to model linguistic qualities that relate to both text content and form Summarization evaluation metrics like Pyramid BIBREF5 and ROUGE BIBREF3 BIBREF2 are recalloriented they basically measure the content from a model reference summary that is preserved in peer system generated summaries Pyramid requires substantial human effort even in its more recent versions that involve the use of word embeddings BIBREF8 and a lightweight crowdsourcing scheme BIBREF9 ROUGE is the most commonly used evaluation metric BIBREF10 BIBREF11 BIBREF12 Inspired by BLEU BIBREF4 it relies on common ngrams or subsequences between peer and model summaries Many ROUGE versions are available but it remains hard to decide which one to use BIBREF13 Being recallbased ROUGE correlates well with Pyramid but poorly with linguistic qualities of summaries BIBREF14 proposed a regression model for measuring summary quality without references The scores of their model correlate well with Pyramid and Responsiveness but text quality is only addressed indirectly Quality Estimation is well established in MT BIBREF15 BIBREF0 BIBREF1 BIBREF16 BIBREF17 QE methods provide a quality indicator for translation output at runtime without relying on human references typically needed by MT evaluation metrics BIBREF4 BIBREF18 QE models for MT make use of large postedited datasets and apply machine learning methods to predict postediting effort scores and quality goodbad labels We apply QE to summarization focusing on linguistic qualities that reflect the readability and fluency of the generated texts Since no postedited datasets like the ones used in MT are available for summarization we use instead the ratings assigned by human annotators with respect to a set of linguistic quality criteria Our proposed models achieve high correlation with human judgments showing that it is possible to estimate summary quality without human references We use datasets from the NIST DUC05 DUC06 and DUC07 shared tasks BIBREF7 BIBREF19 BIBREF20 Given a question and a cluster of newswire documents the contestants were asked to generate a 250word summary answering the question DUC05 contains 1600 summaries 50 questions x 32 systems in DUC06 1750 summaries are included 50 
questions x 35 systems and DUC07 has 1440 summaries 45 questions x 32 systems The submitted summaries were manually evaluated in terms of content preservation using the Pyramid score and according to five linguistic quality criteria mathcal Q1 dots mathcal Q5 described in Figure FIGREF2 that do not involve comparison with a model summary Annotators assigned scores on a fivepoint scale with 1 and 5 indicating that the summary is bad or good with respect to a specific mathcal Q The overall score for a contestant with respect to a specific mathcal Q is the average of the manual scores assigned to the summaries generated by the contestant Note that the DUC04 shared task involved seven mathcal Qs but some of them were found to be highly overlapping and were grouped into five in subsequent years BIBREF20 We address these five criteria and use DUC data from 2005 onwards in our experiments In SumQE each peer summary is converted into a sequence of token embeddings consumed by an encoder mathcal E to produce a dense vector summary representation h Then a regressor mathcal R predicts a quality score Smathcal Q as an affine transformation of h Nonlinear regression could also be used but a linear affine mathcal R already performs well We use BERT as our main encoder and finetune it in three ways which leads to three versions of SumQE The first version of SumQE uses five separate estimators one per quality score each having its own encoder mathcal Ei a separate BERT instance generating hi and regressor mathcal Ri a separate linear regression layer on top of the corresponding BERT instance The second version of SumQE uses one estimator to predict all five quality scores at once from a single encoding h of the summary produced by a single BERT instance The intuition is that mathcal E will learn to create richer representations so that mathcal R an affine transformation of h with 5 outputs will be able to predict all quality scores where mathcal Rhi is the ith element of the vector returned by mathcal R The third version of SumQE is similar to BERTFTM1 but we now use five different linear affine regressors one per quality score Although BERTFTM5 is mathematically equivalent to BERTFTM1 in practice these two versions of SumQE produce different results because of implementation details related to how the losses of the regressors five or one are combined This is very similar to SumQE but now mathcal E is a stack of BiGRU s with selfattention BIBREF21 instead of a BERT instance The final summary representation h is the sum of the resulting contextaware token embeddings h sum i ai hi weighted by their selfattention scores ai We again have three flavors one singletask BiGRUATTS1 and two multitask BiGRUATTM1 and BiGRUATTM5 This baseline is the ROUGE version that performs best on each dataset among the versions considered by BIBREF13 Although ROUGE focuses on surface similarities between peer and reference summaries we would expect properties like grammaticality referential clarity and coherence to be captured to some extent by ROUGE versions based on long ngrams or longest common subsequences For a peer summary a reasonable estimate of mathcal Q1 Grammaticality is the perplexity returned by a pretrained language model We experiment with the pretrained GPT2 model BIBREF22 and with the probability estimates that BERT can produce for each token when the token is treated as masked BERTFRLM Given that the grammaticality of a summary can be corrupted by just a few bad tokens we compute the perplexity by considering 
only the k worst lowest LM probability tokens of the peer summary where k is a tuned hyperparameter BERT training relies on two tasks predicting masked tokens and next sentence prediction The latter seems to be aligned with the definitions of mathcal Q3 Referential Clarity mathcal Q4 Focus and mathcal Q5 Structure Coherence Intuitively when a sentence follows another with high probability it should involve clear referential expressions and preserve the focus and local coherence of the text We therefore use a pretrained BERT model BERTFRNS to calculate the sentencelevel perplexity of each summary where psisi1 is the probability that BERT assigns to the sequence of sentences left si1 s right and n is the number of sentences in the peer summary To evaluate our methods for a particular mathcal Q we calculate the average of the predicted scores for the summaries of each particular contestant and the average of the corresponding manual scores assigned to the contestants summaries We measure the correlation between the two predicted vs manual across all contestants using Spearmans rho Kendalls tau and Pearsons r We train and test the SumQE and BiGRUATT versions using a 3fold procedure In each fold we train on two datasets eg DUC05 DUC06 and test on the third eg DUC07 We follow the same procedure with the three BiGRUbased models Hyperperameters are tuned on a held out subset from the training set of each fold Table TABREF23 shows Spearmans rho Kendalls tau and Pearsons r for all datasets and models The three finetuned BERT versions clearly outperform all other methods Multitask versions seem to perform better than singletask ones in most cases Especially for mathcal Q4 and mathcal Q5 which are highly correlated the multitask BERT versions achieve the best overall results BiGRUATT also benefits from multitask learning The correlation of SumQE with human judgments is high or very high BIBREF23 for all mathcal Qs in all datasets apart from mathcal Q2 in DUC05 where it is only moderate Manual scores for mathcal Q2 in DUC05 are the highest among all mathcal Qs and years between 4 and 5 and with the smallest standard deviation as shown in Table TABREF24 Differences among systems are thus small in this respect and although SumQE predicts scores in this range it struggles to put them in the correct order as illustrated in Figure FIGREF26 BESTROUGE has a negative correlation with the groundtruth scores for mathcal Q2 since it does not account for repetitions The BiGRUbased models also reach their lowest performance on mathcal Q2 in DUC05 A possible reason for the higher relative performance of the BERTbased models which achieve a moderate positive correlation is that BiGRU captures longdistance relations less effectively than BERT which utilizes Transformers BIBREF24 and has a larger receptive field A possible improvement would be a stacked BiGRU since the states of higher stack layers have a larger receptive field as well The BERT multitask versions perform better with highly correlated qualities like mathcal Q4 and mathcal Q5 as illustrated in Figures 2 to 4 in the supplementary material However there is not a clear winner among them Mathematical equivalence does not lead to deterministic results especially when random initialization and stochastic learning algorithms are involved An indepth exploration of this point would involve further investigation which will be part of future work We propose a novel Quality Estimation model for summarization which does not require human references to estimate the 
quality of automatically produced summaries SumQE successfully predicts qualitative aspects of summaries that recalloriented evaluation metrics fail to approximate Leveraging powerful BERT representations it achieves high correlations with human scores for most linguistic qualities rated on three different datasets Future work involves extending the SumQE model to capture contentrelated aspects either in combination with existing evaluation metrics like Pyramid and ROUGE or preferably by identifying important information in the original text and modelling its preservation in the proposed summaries This would preserve SumQEs independence from human references a property of central importance in reallife usage scenarios and system development settings The datasets used in our experiments come from the NIST DUC shared tasks which comprise newswire articles We believe that SumQE could be easily applied to other domains A small amount of annotated data would be needed for finetuning especially in domains with specialized vocabulary eg biomedical but the model could also be used out of the box A concrete estimation of performance in this setting will be part of future work Also the model could serve to estimate linguistic qualities other than the ones in the DUC dataset with mininum effort Finally SumQE could serve to assess the quality of other types of texts not only summaries It could thus be applied to other text generation tasks such as natural language generation and sentence compression We would like to thank the anonymous reviewers for their helpful feedback on this work The work has been partly supported by the Research Center of the Athens University of Economics and Business and by the French National Research Agency under project ANR16CE330013 </s>
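For readers who want a concrete picture of the Sum-QE architecture, the sketch below approximates the multi-task BERT-FT-M-1 variant (one BERT encoder, one linear regressor with five outputs) using the Hugging Face transformers API. The use of that library, the [CLS] pooling, and the model name are assumptions for illustration; the original system's exact pooling and training details may differ.

```python
import torch
from transformers import AutoModel, AutoTokenizer

class SumQEMulti(torch.nn.Module):
    """One BERT encoder plus a single affine regressor predicting all five quality scores."""
    def __init__(self, model_name="bert-base-uncased", n_qualities=5):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.regressor = torch.nn.Linear(self.encoder.config.hidden_size, n_qualities)

    def forward(self, input_ids, attention_mask):
        # summary representation h taken from the [CLS] token
        h = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state[:, 0]
        return self.regressor(h)  # (batch, 5) predicted quality scores

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = SumQEMulti()
batch = tokenizer(["a peer summary to be rated ..."], return_tensors="pt", truncation=True, padding=True)
scores = model(batch["input_ids"], batch["attention_mask"])
```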
<s>[INST] Learning Hierarchy-Aware Knowledge Graph Embeddings for Link Prediction [/INST] Knowledge graphs are usually collections of factual tripleshead entity relation tail entity which represent human knowledge in a structured way In the past few years we have witnessed the great achievement of knowledge graphs in many areas such as natural language processing BIBREF0 question answering BIBREF1 and recommendation systems BIBREF2 Although commonly used knowledge graphs contain billions of triples they still suffer from the incompleteness problem that a lot of valid triples are missing as it is impractical to find all valid triples manually Therefore knowledge graph completion also known as link prediction in knowledge graphs has attracted much attention recently Link prediction aims to automatically predict missing links between entities based on known links It is a challenging task as we not only need to predict whether there is a relation between two entities but also need to determine which relation it is Inspired by word embeddings BIBREF3 that can well capture semantic meaning of words researchers turn to distributed representations of knowledge graphs aka knowledge graph embeddings to deal with the link prediction problem Knowledge graph embeddings regard entities and relations as low dimensional vectors or matrices tensors which can be stored and computed efficiently Moreover like in the case of word embeddings knowledge graph embeddings can preserve the semantics and inherent structures of entities and relations Therefore other than the link prediction task knowledge graph embeddings can also be used in various downstream tasks such as triple classification BIBREF4 relation inference BIBREF5 and search personalization BIBREF6 The success of existing knowledge graph embedding models heavily relies on their ability to model connectivity patterns of the relations such as symmetryantisymmetry inversion and composition BIBREF7 For example TransE BIBREF8 which represent relations as translations can model the inversion and composition patterns DistMult BIBREF9 which models the threeway interactions between head entities relations and tail entities can model the symmetry pattern RotatE BIBREF7 which represents entities as points in a complex space and relations as rotations can model relation patterns including symmetryantisymmetry inversion and composition However many existing models fail to model semantic hierarchies in knowledge graphs Semantic hierarchy is a ubiquitous property in knowledge graphs For instance WordNet BIBREF10 contains the triple arborcassiapalm hypernym tree where tree is at a higher level than arborcassiapalm in the hierarchy Freebase BIBREF11 contains the triple England locationlocationcontains PontefractLancaster where PontefractLancaster is at a lower level than England in the hierarchy Although there exists some work that takes the hierarchy structures into account BIBREF12 BIBREF13 they usually require additional data or process to obtain the hierarchy information Therefore it is still challenging to find an approach that is capable of modeling the semantic hierarchy automatically and effectively In this paper we propose a novel knowledge graph embedding modelnamely HierarchyAware Knowledge Graph Embedding HAKE To model the semantic hierarchies HAKE is expected to distinguish entities in two categories a at different levels of the hierarchy b at the same level of the hierarchy Inspired by the fact that entities that have the hierarchical properties can be 
viewed as a tree we can use the depth of a node entity to model different levels of the hierarchy Thus we use modulus information to model entities in the category a as the size of moduli can reflect the depth Under the above settings entities in the category b will have roughly the same modulus which is hard to distinguish Inspired by the fact that the points on the same circle can have different phases we use phase information to model entities in the category b Combining the modulus and phase information HAKE maps entities into the polar coordinate system where the radial coordinate corresponds to the modulus information and the angular coordinate corresponds to the phase information Experiments show that our proposed HAKE model can not only clearly distinguish the semantic hierarchies of entities but also significantly and consistently outperform several stateoftheart methods on the benchmark datasets Notations Throughout this paper we use lowercase letters h r and t to represent head entities relations and tail entities respectively The triplet hrt denotes a fact in knowledge graphs The corresponding boldface lowercase letters textbf h textbf r and textbf t denote the embeddings vectors of head entities relations and tail entities The ith entry of a vector textbf h is denoted as textbf hi Let k denote the embedding dimension Let circ mathbb Rntimes mathbb Rnrightarrow mathbb Rn denote the Hadamard product between two vectors that is and Vert cdot Vert 1 Vert cdot Vert 2 denote the ell 1 and ell 2 norm respectively In this section we will describe the related work and the key differences between them and our work in two aspectsthe model category and the way to model hierarchy structures in knowledge graphs Roughly speaking we can divide knowledge graph embedding models into three categoriestranslational distance models bilinear models and neural network based models Table TABREF2 exhibits several popular models Translational distance models describe relations as translations from source entities to target entities TransE BIBREF8 supposes that entities and relations satisfy textbf htextbf rapprox textbf t where textbf h textbf r textbf t in mathbb Rn and defines the corresponding score function as frtextbf htextbf tVert textbf htextbf rtextbf tVert 12 However TransE does not perform well on 1N N1 and NN relations BIBREF14 TransH BIBREF14 overcomes the manytomany relation problem by allowing entities to have distinct representations given different relations The score function is defined as frtextbf htextbf tVert textbf hperp textbf rtextbf tperp Vert 2 where textbf hperp and textbf tperp are the projections of entities onto relationspecific hyperplanes ManifoldE BIBREF15 deals with manytomany problems by relaxing the hypothesis textbf htextbf rapprox textbf t to Vert textbf htextbf rtextbf tVert 22approx theta r2 for each valid triple In this way the candidate entities can lie on a manifold instead of exact point The corresponding score function is defined as frtextbf htextbf tVert textbf htextbf rtextbf tVert 22theta r22 More recently to better model symmetric and antisymmetric relations RotatE BIBREF7 defines each relation as a rotation from source entities to target entities in a complex vector space The score function is defined as frtextbf htextbf tVert textbf hcirc textbf rtextbf tVert 1 where textbf htextbf rtextbf tin mathbb Ck and textbf ri1 Bilinear models productbased score functions to match latent semantics of entities and relations embodied in their vector space 
representations RESCAL BIBREF16 represents each relation as a full rank matrix and defines the score function as frtextbf htextbf ttextbf htop textbf Mr textbf t which can also be seen as a bilinear function As full rank matrices are prone to overfitting recent works turn to make additional assumptions on textbf Mr For example DistMult BIBREF9 assumes textbf Mr to be a diagonal matrix and ANALOGY BIBREF19 supposes that textbf Mr is normal However these simplified models are usually less expressive and not powerful enough for general knowledge graphs Differently ComplEx BIBREF17 extends DistMult by introducing complexvalued embeddings to better model asymmetric and inverse relations HolE BIBREF20 combines the expressive power of RESCAL with the efficiency and simplicity of DistMult by using the circular correlation operation Neural network based models have received greater attention in recent years For example MLP BIBREF21 and NTN BIBREF22 use a fully connected neural network to determine the scores of given triples ConvE BIBREF18 and ConvKB BIBREF23 employ convolutional neural networks to define score functions Recently graph convolutional networks are also introduced as knowledge graphs obviously have graph structures BIBREF24 Our proposed model HAKE belongs to the translational distance models More specifically HAKE shares similarities with RotatE BIBREF7 in which the authors claim that they use both modulus and phase information However there exist two major differences between RotatE and HAKE Detailed differences are as follows The aims are different RotatE aims to model the relation patterns including symmetryantisymmetry inversion and composition HAKE aims to model the semantic hierarchy while it can also model all the relation patterns mentioned above The ways to use modulus information are different RotatE models relations as rotations in the complex space which encourages two linked entities to have the same modulus no matter what the relation is The different moduli in RotatE come from the inaccuracy in training Instead HAKE explicitly models the modulus information which significantly outperforms RotatE in distinguishing entities at different levels of the hierarchy Another related problem is how to model hierarchy structures in knowledge graphs Some recent work considers the problem in different ways BIBREF25 embed entities and categories jointly into a semantic space and designs models for the concept categorization and dataless hierarchical classification tasks BIBREF13 use clustering algorithms to model the hierarchical relation structures BIBREF12 proposed TKRL which embeds the type information into knowledge graph embeddings That is TKRL requires additional hierarchical type information for entities Different from the previous work our work considers the link prediction task which is a more common task for knowledge graph embeddings can automatically learn the semantic hierarchy in knowledge graphs without using clustering algorithms does not require any additional information other than the triples in knowledge graphs In this section we introduce our proposed model HAKE We first introduce two categories of entities that reflect the semantic hierarchies in knowledge graphs Afterwards we introduce our proposed HAKE that can model entities in both of the categories To model the semantic hierarchies of knowledge graphs a knowledge graph embedding model must be capable of distinguishing entities in the following two categories Entities at different levels of the hierarchy For 
example mammal and dog run and move Entities at the same level of the hierarchy For example rose and peony truck and lorry To model both of the above categories we propose a hierarchyaware knowledge graph embedding modelHAKE HAKE consists of two partsthe modulus part and the phase partwhich aim to model entities in the two different categories respectively Figure FIGREF13 gives an illustration of the proposed model To distinguish embeddings in the different parts we use textbf em textbf e can be textbf h or textbf t and textbf rm to denote the entity embedding and relation embedding in the modulus part and use textbf ep textbf e can be textbf h or textbf t and textbf rp to denote the entity embedding and relation embedding in the phase part The modulus part aims to model the entities at different levels of the hierarchy Inspired by the fact that entities that have hierarchical property can be viewed as a tree we can use the depth of a node entity to model different levels of the hierarchy Therefore we use modulus information to model entities in the category a as moduli can reflect the depth in a tree Specifically we regard each entry of textbf hm and textbf tm that is textbf hmi and textbf tmi as a modulus and regard each entry of textbf rm that is textbf ri as a scaling transformation between two moduli We can formulate the modulus part as follows The corresponding distance function is Note that we allow the entries of entity embeddings to be negative but restrict the entries of relation embeddings to be positive This is because that the signs of entity embeddings can help us to predict whether there exists a relation between two entities For example if there exists a relation r between h and t1 and no relation between h and t2 then h r t1 is a positive sample and h r t2 is a negative sample Our goal is to minimize drtextbf hm textbf t1m and maximize drtextbf hm textbf t2m so as to make a clear distinction between positive and negative samples For the positive sample textbf hi and textbf t1i tend to share the same sign as textbf rmi0 For the negative sample the signs of textbf hmi and textbf t2mi can be different if we initialize their signs randomly In this way drtextbf hm textbf t2m is more likely to be larger than drtextbf hm textbf t1m which is exactly what we desire We will validate this argument by experiments in Section 4 of the supplementary material Further we can expect the entities at higher levels of the hierarchy to have smaller modulus as these entities are more close to the root of the tree If we use only the modulus part to embed knowledge graphs then the entities in the category b will have the same modulus Moreover suppose that r is a relation that reflects the same semantic hierarchy then textbf ri will tend to be one as hcirc rcirc rh holds for all h Hence embeddings of the entities in the category b tend to be the same which makes it hard to distinguish these entities Therefore a new module is required to model the entities in the category b The phase part aims to model the entities at the same level of the semantic hierarchy Inspired by the fact that points on the same circle that is have the same modulus can have different phases we use phase information to distinguish entities in the category b Specifically we regard each entry of textbf hp and textbf tp that is textbf hpi and textbf tpi as a phase and regard each entry of textbf rp that is textbf rpi as a phase transformation We can formulate the phase part as follows The corresponding distance function is where 
sin cdot is an operation that applies the sine function to each element of the input Note that we use a sine function to measure the distance between phases instead of using Vert textbf hptextbf rptextbf tpVert 1 as phases have periodic characteristic This distance function shares the same formulation with that of pRotatE BIBREF7 Combining the modulus part and the phase part HAKE maps entities into the polar coordinate system where the radial coordinate and the angular coordinates correspond to the modulus part and the phase part respectively That is HAKE maps an entity h to textbf hmtextbf hp where textbf hm and textbf hp are generated by the modulus part and the phase part respectively and cdot cdot denotes the concatenation of two vectors Obviously textbf hmitextbf hpi is a 2D point in the polar coordinate system Specifically we formulate HAKE as follows The distance function of HAKE is where lambda in mathbb R is a parameter that learned by the model The corresponding score function is When two entities have the same moduli then the modulus part drmtextbf hmtextbf tm0 However the phase part drptextbf hptextbf tp can be very different By combining the modulus part and the phase part HAKE can model the entities in both the category a and the category b Therefore HAKE can model semantic hierarchies of knowledge graphs When evaluating the models we find that adding a mixture bias to drmtextbf htextbf t can help to improve the performance of HAKE The modified drmtextbf htextbf t is given by where 0textbf rprime m1 is a vector that have the same dimension with textbf rm Indeed the above distance function is equivalent to where denotes the elementwise division operation If we let textbf rmleftarrow 1textbf rmprime textbf rmtextbf rmprime then the modified distance function is exactly the same as the original one when compare the distances of different entity pairs For notation convenience we still use drmtextbf htextbf tVert textbf hmcirc textbf rmtextbf tmVert 2 to represent the modulus part We will conduct ablation studies on the bias in the experiment section To train the model we use the negative sampling loss functions with selfadversarial training BIBREF7 where gamma is a fixed margin sigma is the sigmoid function and hprime irtprime i is the ith negative triple Moreover is the probability distribution of sampling negative triples where alpha is the temperature of sampling This section is organized as follows First we introduce the experimental settings in detail Then we show the effectiveness of our proposed model on three benchmark datasets Finally we analyze the embeddings generated by HAKE and show the results of ablation studies The code of HAKE is available on GitHub at httpsgithubcomMIRALabUSTCKGEHAKE We evaluate our proposed models on three commonly used knowledge graph datasetsWN18RR BIBREF26 FB15k237 BIBREF18 and YAGO310 BIBREF27 Details of these datasets are summarized in Table TABREF18 WN18RR FB15k237 and YAGO310 are subsets of WN18 BIBREF8 FB15k BIBREF8 and YAGO3 BIBREF27 respectively As pointed out by BIBREF26 and BIBREF18 WN18 and FB15k suffer from the test set leakage problem One can attain the stateoftheart results even using a simple rule based model Therefore we use WN18RR and FB15k237 as the benchmark datasets Evaluation Protocol Following BIBREF8 for each triple hrt in the test dataset we replace either the head entity h or the tail entity t with each candidate entity to create a set of candidate triples We then rank the candidate triples in descending order by 
their scores It is worth noting that we use the Filtered setting as in BIBREF8 which does not take any existing valid triples into accounts at ranking We choose Mean Reciprocal Rank MRR and Hits at N HN as the evaluation metrics Higher MRR or HN indicate better performance Training Protocol We use Adam BIBREF28 as the optimizer and use grid search to find the best hyperparameters based on the performance on the validation datasets To make the model easier to train we add an additional coefficient to the distance function ie drtextbf htextbf tlambda 1drmtextbf hmtextbf tmlambda 2 drptextbf hptextbf tp where lambda 1lambda 2in mathbb R Baseline Model One may argue that the phase part is unnecessary as we can distinguish entities in the category b by allowing textbf ri to be negative We propose a modelModEthat uses only the modulus part but allow textbf ri0 Specifically the distance function of ModE is In this part we show the performance of our proposed modelsHAKE and ModEagainst existing stateoftheart methods including TransE BIBREF8 DistMult BIBREF9 ComplEx BIBREF17 ConvE BIBREF18 and RotatE BIBREF7 Table TABREF19 shows the performance of HAKE ModE and several previous models Our baseline model ModE shares similar simplicity with TransE but significantly outperforms it on all datasets Surprisingly ModE even outperforms more complex models such as DistMult ConvE and Complex on all datasets and beats the stateoftheart modelRotatEon FB15k237 and YAGO310 datasets which demonstrates the great power of modulus information Table TABREF19 also shows that our HAKE significantly outperforms existing stateoftheart methods on all datasets WN18RR dataset consists of two kinds of relations the symmetric relations such as similarto which link entities in the category b other relations such as hypernym and membermeronym which link entities in the category a Actually RotatE can model entities in the category b very well BIBREF7 However HAKE gains a 0021 higher MRR a 24 higher H1 and a 24 higher H3 against RotatE respectively The superior performance of HAKE compared with RotatE implies that our proposed model can better model different levels in the hierarchy FB15k237 dataset has more complex relation types and fewer entities compared with WN18RR and YAGO310 Although there are relations that reflect hierarchy in FB15k237 there are also lots of relations such as locationlocationtimezones and filmfilmprequel that do not lead to hierarchy The characteristic of this dataset accounts for why our proposed models doesnt outperform the previous stateoftheart as much as that of WN18RR and YAGO310 datasets However the results also show that our models can gain better performance so long as there exists semantic hierarchies in knowledge graphs As almost all knowledge graphs have such hierarchy structures our model is widely applicable YAGO310 datasets contains entities with high relationspecific indegree BIBREF18 For example the link prediction task hasGender male has over 1000 true answers which makes the task challenging Fortunately we can regard male as an entity at higher level of the hierarchy and the predicted head entities as entities at lower level In this way YAGO310 is a dataset that clearly has semantic hierarchy property and we can expect that our proposed models is capable of working well on this dataset Table TABREF19 validates our expectation Both ModE and HAKE significantly outperform the previous stateoftheart Notably HAKE gains a 0050 higher MRR 60 higher H1 and 46 higher H3 than RotatE respectively 
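Before turning to the analysis, the combined HAKE distance in the polar coordinate system can be written as a short sketch. This is a hedged PyTorch illustration of the modulus and phase distance functions defined earlier; dividing the phase difference by two inside the sine and the exact handling of the learned weight lam are assumptions beyond the prose above.

```python
import torch

def hake_score(h_m, r_m, t_m, h_p, r_p, t_p, lam):
    # Modulus part: entities at different levels of the hierarchy get different moduli
    d_mod = torch.norm(h_m * r_m - t_m, p=2, dim=-1)
    # Phase part: distinguishes entities at the same level of the hierarchy (phases in radians)
    d_phase = torch.norm(torch.sin((h_p + r_p - t_p) / 2), p=1, dim=-1)
    # Score is the negated combined distance: larger means a more plausible triple
    return -(d_mod + lam * d_phase)
```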
In this part we first show that HAKE can effectively model the hierarchy structures by analyzing the moduli of relation embeddings Then we show that the phase part of HAKE can help us to distinguish entities at the same level of the hierarchy by analyzing the phases of relation embeddings In Figure FIGREF20 we plot the distribution histograms of moduli of six relations These relations are drawn from WN18RR FB15k237 and YAGO310 Specifically the relations in Figures FIGREF20a FIGREF20c FIGREF20e and FIGREF20f are drawn from WN18RR The relation in Figure FIGREF20d is drawn from FB15k237 The relation in Figure FIGREF20b is drawn from YAGO310 We divide the relations in Figure FIGREF20 into three groups Relations in Figures FIGREF20c and FIGREF20d connect the entities at the same level of the semantic hierarchy Relations in Figures FIGREF20a and FIGREF20b represent that tail entities are at higher levels than head entities of the hierarchy Relations in Figures FIGREF20e and FIGREF20f represent that tail entities are at lower levels than head entities of the hierarchy As described in the model description section we expect entities at higher levels of the hierarchy to have small moduli The experiments validate our expectation For both ModE and HAKE most entries of the relations in the group A take values around one which leads to that the head entities and tail entities have approximately the same moduli In the group B most entries of the relations take values less than one which results in that the head entities have smaller moduli than the tail entities The cases in the group C are contrary to that in the group B These results show that our model can capture the semantic hierarchies in knowledge graphs Moreover compared with ModE the relation embeddings moduli of HAKE have lower variances which shows that HAKE can model hierarchies more clearly As mentioned above relations in the group A reflect the same semantic hierarchy and are expected to have the moduli of about one Obviously it is hard to distinguish entities linked by these relations only using the modulus part In Figure FIGREF22 we plot the phases of the relations in the group A The results show that the entities at the same level of the hierarchy can be distinguished by their phases as many phases have the values of pi In this part to further show that HAKE can capture the semantic hierarchies between entities we visualize the embeddings of several entity pairs We plot the entity embeddings of two models the previous stateoftheart RotatE and our proposed HAKE RotatE regards each entity as a group of complex numbers As a complex number can be seen as a point on a 2D plane we can plot the entity embeddings on a 2D plane As for HAKE we have mentioned that it maps entities into the polar coordinate system Therefore we can also plot the entity embeddings generated by HAKE on a 2D plane based on their polar coordinates For a fair comparison we set k500 That is each plot contains 500 points and the actual dimension of entity embeddings is 1000 Note that we use the logarithmic scale to better display the differences between entity embeddings As all the moduli have values less than one after applying the logarithm operation the larger radii in the figures will actually represent smaller modulus Figure FIGREF29 shows the visualization results of three triples from the WN18RR dataset Compared with the tail entities the head entities in Figures FIGREF29a FIGREF29b and FIGREF29c are at lower levels similar levels higher levels in the semantic 
hierarchy, respectively. We can see that there exist clear concentric circles in the visualization results of HAKE, which demonstrates that HAKE can effectively model the semantic hierarchies. However, in RotatE, the entity embeddings in all three subfigures are mixed, making it hard to distinguish entities at different levels in the hierarchy. In this part, we conduct ablation studies on the modulus part and the phase part of HAKE, as well as the mixture bias term. Table TABREF26 shows the results on three benchmark datasets. We can see that the bias can improve the performance of HAKE on nearly all metrics. Specifically, the bias improves the H@1 score by 4.7% on the YAGO3-10 dataset, which illustrates its effectiveness. We also observe that the modulus part of HAKE does not perform well on any dataset, due to its inability to distinguish entities at the same level of the hierarchy. When only the phase part is used, HAKE degenerates to the pRotatE model BIBREF7; it performs better than the modulus part because it can model entities at the same level of the hierarchy well. However, our full HAKE model significantly outperforms both the modulus part and the phase part on all datasets, which demonstrates the importance of combining the two parts for modeling semantic hierarchies in knowledge graphs. We compare our models with the TKRL models BIBREF12, which also aim to model hierarchical structures. For the difference between HAKE and TKRL, please refer to the Related Work section. Table TABREF27 shows the H@10 scores of HAKE and TKRL on the FB15k dataset. The best performance of TKRL is 73.4%, obtained by its WHE+STC version, while the H@10 score of our HAKE model is 88.4%. The results show that HAKE significantly outperforms TKRL, even though it does not require additional information. To model the semantic hierarchies in knowledge graphs, we propose a novel hierarchy-aware knowledge graph embedding model, HAKE, which maps entities into the polar coordinate system. Experiments show that our proposed HAKE significantly outperforms several existing state-of-the-art methods on benchmark datasets for the link prediction task. A further investigation shows that HAKE is capable of modeling entities both at different levels and at the same level of the semantic hierarchies. In this appendix, we provide analysis on relation patterns, negative entity embeddings, and the moduli of entity embeddings. Then we give more visualization results on semantic hierarchies. In this section, we prove that our HAKE model can infer the antisymmetry, inversion, and composition relation patterns. Detailed propositions and their proofs are as follows. Proposition 1: HAKE can infer the antisymmetry pattern. If $r(x, y)$ and $r(y, x)$ hold, we have ... Then we have ... Otherwise, if $r(x, y)$ and $\lnot r(y, x)$ hold, we have ... Proposition 2: HAKE can infer the inversion pattern. If $r_1(x, y)$ and $r_2(y, x)$ hold, we have ... Then we have ... Proposition 3: HAKE can infer the composition pattern. If $r_1(x, z)$, $r_2(x, y)$, and $r_3(y, z)$ hold, we have ... Then we have ... We denote the linked entity pairs as the set of entity pairs linked by some relation, and the unlinked entity pairs as the set of entity pairs contained in no triple of the train/valid/test datasets. It is worth noting that the unlinked pairs may contain valid triples, as the knowledge graph is incomplete. For both the linked and the unlinked entity pairs, we count the embedding entries of the two entities that have different signs. Figure FIGREF34 shows the result. For the linked entity pairs, as we expected, most of the entries have the same sign. Due to the large number of unlinked entity pairs, we randomly
sample a part of them for plotting. For the unlinked entity pairs, around half of the entries have different signs, which is consistent with random initialization. The results support our hypothesis that the negative signs of entity embeddings can help our model distinguish positive and negative triples. Figure FIGREF37 shows the moduli of the entity embeddings. We can observe that RotatE encourages the moduli of embeddings to be the same, as its relations are modeled as rotations in a complex space. Compared with RotatE, the moduli of entity embeddings in HAKE are more dispersed, giving HAKE more potential to model the semantic hierarchies. In this part, we visualize more triples from WN18RR. We plot the head and tail entities on 2D planes using the same method as in the main text. The visualization results are in Figure FIGREF41, where the subcaptions give the corresponding triples. The figures show that, compared with RotatE, our HAKE model can better model entities both at different levels of the hierarchy and at the same level. </s>
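As an illustration of the sign-agreement analysis described in the appendix above (counting embedding entries with different signs for linked versus unlinked entity pairs), the following is a small self-contained sketch. It is an assumed implementation, not the authors' code, and it uses synthetic stand-in embeddings rather than trained ones.

```python
# Count the fraction of embedding dimensions whose signs disagree between two entities.
import numpy as np

def fraction_of_differing_signs(emb_a: np.ndarray, emb_b: np.ndarray) -> np.ndarray:
    """Return, per pair, the fraction of dimensions where the signs disagree."""
    return np.mean(np.sign(emb_a) != np.sign(emb_b), axis=-1)

rng = np.random.default_rng(0)
dim = 1000

# Hypothetical stand-ins: linked pairs share most signs, unlinked pairs are independent.
linked_a = rng.normal(size=(500, dim))
linked_b = linked_a * rng.choice([1.0, -1.0], p=[0.95, 0.05], size=(500, dim))
unlinked_a = rng.normal(size=(500, dim))
unlinked_b = rng.normal(size=(500, dim))

print("linked  :", fraction_of_differing_signs(linked_a, linked_b).mean())    # ~0.05
print("unlinked:", fraction_of_differing_signs(unlinked_a, unlinked_b).mean())  # ~0.5
```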
<s>[INST] Machine Translation from Natural Language to Code using Long-Short Term Memory [/INST] Removing computerhuman language barrier is an inevitable advancement researchers are thriving to achieve for decades One of the stages of this advancement will be coding through natural human language instead of traditional programming language On naturalness of computer programming D Knuth said Let us change our traditional attitude to the construction of programs Instead of imagining that our main task is to instruct a computer what to do let us concentrate rather on explaining to human beings what we want a computer to doBIBREF0 Unfortunately learning programming language is still necessary to instruct it Researchers and developers are working to overcome this humanmachine language barrier Multiple branches exists to solve this challenge ie interconversion of different programming language to have universally connected programming languages Automatic code generation through natural language is not a new concept in computer science studies However it is difficult to create such tool due to these following three reasons Programming languages are diverse An individual person expresses logical statements differently than other Natural Language Processing NLP of programming statements is challenging since both human and programming language evolve over time In this paper a neural approach to translate pseudocode or algorithm like human language expression into programming language code is proposed Code repositories ie Git SVN flourished in the last decade producing big data of code allowing data scientists to perform machine learning on these data In 2017 Allamanis M et al published a survey in which they presented the stateoftheart of the research areas where machine learning is changing the way programmers code during software engineering and development process BIBREF1 This paper discusses what are the restricting factors of developing such texttocode conversion method and what problems need to be solved According to the sources there are more than a thousand actively maintained programming languages which signifies the diversity of these language These languages were created to achieve different purpose and use different syntaxes Lowlevel languages such as assembly languages are easier to express in human language because of the low or no abstraction at all whereas highlevel or ObjectOriented Programing OOP languages are more diversified in syntax and expression which is challenging to bring into a unified human language structure Nonetheless portability and transparency between different programming languages also remains a challenge and an open research area George D et al tried to overcome this problem through XML mapping BIBREF2 They tried to convert codes from C to Java using XML mapping as an intermediate language However the authors encountered challenges to support different features of both languages One of the motivations behind this paper is as long as it is about programming there is a finite and small set of expression which is used in human vocabulary For instance programmers express a forloop in a very few specific ways BIBREF3 Variable declaration and value assignment expressions are also limited in nature Although all codes are executable human representation through text may not due to the semantic brittleness of code Since highlevel languages have a wide range of syntax programmers use different linguistic expressions to explain those For instance small changes like 
swapping function arguments can significantly change the meaning of the code. Hence, the challenge remains in processing human language well enough to understand it properly, which brings us to the next problem. Although there is a finite set of expressions for each programming statement, it is a challenge to extract information from such statements accurately. Semantic analysis of the linguistic expression plays an important role in this information extraction. For instance, in the case of a loop: what is the initial value? What is the step value? When will the loop terminate? Mihalcea R. et al. achieved a variable success rate of 70-80% in producing code just from a problem statement expressed in natural human language BIBREF3; they focused solely on the detection of steps and loops in their research. Another research group from MIT, Lei et al., used a semantic learning model for text to detect the inputs. The model produces a parser in C which can successfully parse more than 70% of the textual descriptions of inputs BIBREF4. The test dataset and model were initially tested against ACM-ICPC participants' inputs, which contain diverse and sometimes complex input instructions. A recent survey from Allamanis M. et al. presented the state of the art in the area of naturalness of programming BIBREF1. A number of research works have been conducted in the text-to-code and code-to-text areas in recent years. In 2015, Oda et al. proposed a way to translate each line of Python code into natural-language pseudocode, for which a statistical machine translation (SMT) framework BIBREF5 was used. This translation framework was able to successfully translate the code into natural-language pseudocode in both English and Japanese. In the same year, Chris Q. et al. mapped natural language to simple if-this-then-that logical rules BIBREF6. Tihomir G. and Viktor K. developed an Integrated Development Environment (IDE)-integrated code assistant tool, anyCode, for Java, which can search, import, and call functions just by typing the desired functionality as text BIBREF7. They used a model and mapping framework between function signatures and utilized resources like WordNet and Java Corpus relational mapping to process text online and offline. Recently, in 2017, P. Yin and G. Neubig proposed a semantic parser which generates code through a neural model BIBREF8. They formulated a grammatical model which works as a skeleton for neural network training; the grammatical rules are defined based on the various generalized structures of the statements in the programming language. The use of machine learning techniques such as SMT proved to be at most 75% successful in converting human text to executable code BIBREF9. A programming language is just like a natural language, but with a smaller vocabulary than a typical human language. For instance, the code vocabulary of the training dataset was 8,814 (including variable, function, and class names), whereas the English vocabulary to express the same code was 13,659 in total. Here, the programming language is treated just like another human language, and widely used SMT techniques have been applied. SMT techniques are widely used in Natural Language Processing (NLP). SMT plays a significant role in translation from one language to another, especially in lexical and grammatical rule extraction. In SMT, bilingual grammatical structures are formed automatically by statistical approaches instead of explicitly providing a grammatical model. This saves months or years of work that would otherwise require significant collaboration between bilingual linguists. Here, a neural network
based machine translation model is used to translate regular text into programming code. SMT techniques require a parallel corpus in the source and the target language. A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus has 18,805 aligned data points in it. In the source data, the expression of each line of code is written in English. In the target data, the code is written in the Python programming language. To train the neural model, the texts must be converted into a computational entity. To do that, two separate vocabulary files are created: one for the source texts and another for the code. Vocabulary generation is done by tokenization of words. Afterwards, the words are put into their contextual vector space using the popular word2vec BIBREF10 method to make the words computational. In order to train the text-to-code translation model, an open-source Neural Machine Translation (NMT) implementation, OpenNMT, is utilized BIBREF11. PyTorch is used as the neural network coding framework. For training, three types of Recurrent Neural Network (RNN) layers are used: an encoder layer, a decoder layer, and an output layer. These layers together form an LSTM model; LSTM is typically used in seq2seq translation. Fig. FIGREF13 shows the neural model architecture. The diagram shows how the model takes the source and target text as input and uses them for training. Vector representations of the tokenized source and target text are fed into the model. Each token of the source text is passed into an encoder cell, and target text tokens are passed into a decoder cell. Encoder cells are part of the encoder RNN layer and decoder cells are part of the decoder RNN layer. The end of the input sequence is marked by an end-of-sequence (eos) token. Upon receiving the eos token, the final cell state of the encoder layer initiates the output layer sequence. At each target cell state, attention is applied over the encoder RNN states and combined with the current hidden state to produce the prediction of the next target token. These predictions are then fed back to the target RNN. The attention mechanism helps us overcome the fixed-length restriction of the encoder-decoder sequence and allows us to handle different lengths between the input and output sequences. Attention uses the encoder state and passes it to the decoder cell to give particular attention to the start of the output layer sequence. The encoder uses an initial state to tell the decoder what it is supposed to generate; effectively, the decoder learns to generate target tokens conditioned on the input sequence. Sigmoidal optimization is used to optimize the prediction. The training parallel corpus had 18,805 lines of annotated code in it. The training model is executed several times with different training parameters. During the final training process, 500 validation examples are used while generating the recurrent neural model, which is about 3% of the training data. We run the training with an epoch value of 10 and a batch size of 64. After finishing the training, the accuracy of the generated model on validation data from the source corpus was 74.40% (Fig. FIGREF17). Although the generated code is incoherent and often predicts wrong code tokens, this is expected because of the limited amount of training data; LSTMs generally require a more extensive dataset (on the order of 100k examples in such a scenario) to build a more accurate model. The incoherence could be resolved by incorporating a coding syntax tree model in the future. For instance, "define the method tzname with 2 arguments: self and dt" is translated into "def init self regex". The translator is successfully generating the whole code line
automatically, but it misses the noun parts (the parameter and function names) of the syntax. The main advantage of translating to a programming language is that it has a concrete and strict lexical and grammatical structure, which human languages lack. The aim of this paper was to make the text-to-code framework work for a general-purpose programming language, primarily Python. In a later phase, phrase-based word embeddings can be incorporated for improved vocabulary mapping. To get more accurate target code for each line, an Abstract Syntax Tree (AST) can be beneficial. The contribution of this research is a machine learning model which can turn human expressions into coding expressions. This paper also discusses available methods which convert natural language to programming language successfully within a fixed or tightly bounded linguistic paradigm. Approaching this problem using machine learning will also give us the opportunity to explore the possibility of a unified programming interface in the future. We would like to thank Dr. Khandaker Tabin Hasan, Head of the Department of Computer Science, American International University-Bangladesh, for his inspiration and encouragement in all of our research works. Thanks also to the Future Technology Conference 2019 committee for partially supporting us to join the conference, and to one of our colleagues, Faheem Abrar, Software Developer, for his thorough review and comments on this research work and for supporting us by providing funding. </s>
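To make the encoder-decoder architecture described in this paper more concrete, the following is a minimal PyTorch sketch of an attention-based LSTM seq2seq model for text-to-code translation. It is an illustrative reconstruction, not the authors' OpenNMT configuration: the embedding and hidden sizes, the dot-product attention, and the simplified training step (teacher forcing without the usual one-token shift) are assumptions; only the vocabulary sizes (13,659 English tokens, 8,814 code tokens) and the batch size of 64 come from the text above.

```python
# Minimal sketch of an attention-based LSTM encoder-decoder (English text -> Python code).
import torch
import torch.nn as nn

class Seq2SeqAttn(nn.Module):
    def __init__(self, src_vocab=13659, tgt_vocab=8814, emb=256, hid=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True)
        self.decoder = nn.LSTM(emb, hid, batch_first=True)
        self.out = nn.Linear(hid * 2, tgt_vocab)  # [decoder state; context] -> next token

    def forward(self, src_ids, tgt_ids):
        enc_out, state = self.encoder(self.src_emb(src_ids))     # encode the English description
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)  # decode Python code tokens
        # Dot-product attention: each decoder step attends over all encoder states.
        scores = torch.bmm(dec_out, enc_out.transpose(1, 2))     # (B, T_tgt, T_src)
        context = torch.bmm(torch.softmax(scores, dim=-1), enc_out)
        return self.out(torch.cat([dec_out, context], dim=-1))   # (B, T_tgt, tgt_vocab)

# Usage with toy batches of token ids (the one-token shift for teacher forcing is omitted).
model = Seq2SeqAttn()
src = torch.randint(0, 13659, (64, 20))   # English description tokens
tgt = torch.randint(0, 8814, (64, 25))    # Python code tokens
logits = model(src, tgt)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 8814), tgt.reshape(-1))
print(logits.shape, float(loss))
```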
<s>[INST] A Survey and Taxonomy of Adversarial Neural Networks for Text-to-Image Synthesis [/INST] GANs and the variations that are now being proposed is the most interesting idea in the last 10 years in ML in my opinion 2016 Yann LeCun A picture is worth a thousand words While written text provide efficient effective and concise ways for communication visual content such as images is a more comprehensive accurate and intelligible method of information sharing and understanding Generation of images from text descriptions ie texttoimage synthesis is a complex computer vision and machine learning problem that has seen great progress over recent years Automatic image generation from natural language may allow users to describe visual elements through visuallyrich text descriptions The ability to do so effectively is highly desirable as it could be used in artificial intelligence applications such as computeraided design image editing BIBREF0 BIBREF1 game engines for the development of the next generation of video gamesBIBREF2 and pictorial art generation BIBREF3 In the early stages of research texttoimage synthesis was mainly carried out through a search and supervised learning combined process BIBREF4 as shown in Figure FIGREF4 In order to connect text descriptions to images one could use correlation between keywords or keyphrase images that identifies informative and picturable text units then these units would search for the most likely image parts conditioned on the text eventually optimizing the picture layout conditioned on both the text and the image parts Such methods often integrated multiple artificial intelligence key components including natural language processing computer vision computer graphics and machine learning The major limitation of the traditional learning based texttoimage synthesis approaches is that they lack the ability to generate new image content they can only change the characteristics of the giventraining images Alternatively research in generative models has advanced significantly and delivers solutions to learn from training images and produce new visual content For example Attribute2Image BIBREF5 models each image as a composite of foreground and background In addition a layered generative model with disentangled latent variables is learned using a variational autoencoder to generate visual content Because the learning is customizedconditioned by given attributes the generative models of Attribute2Image can generate images with respect to different attributes such as gender hair color age etc as shown in Figure FIGREF5 Although generative model based texttoimage synthesis provides much more realistic image synthesis results the image generation is still conditioned by the limited attributes In recent years several papers have been published on the subject of texttoimage synthesis Most of the contributions from these papers rely on multimodal learning approaches that include generative adversarial networks and deep convolutional decoder networks as their main drivers to generate entrancing images from text BIBREF7 BIBREF8 BIBREF9 BIBREF10 BIBREF11 First introduced by Ian Goodfellow et al BIBREF9 generative adversarial networks GANs consist of two neural networks paired with a discriminator and a generator These two models compete with one another with the generator attempting to produce syntheticfake samples that will fool the discriminator and the discriminator attempting to differentiate between real genuine and synthetic samples Because GANs adversarial 
training aims to cause generators to produce images similar to the real training images GANs can naturally be used to generate synthetic images image synthesis and this process can even be customized further by using text descriptions to specify the types of images to generate as shown in Figure FIGREF6 Much like texttospeech and speechtotext conversion there exists a wide variety of problems that texttoimage synthesis could solve in the computer vision field specifically BIBREF8 BIBREF12 Nowadays researchers are attempting to solve a plethora of computer vision problems with the aid of deep convolutional networks generative adversarial networks and a combination of multiple methods often called multimodal learning methods BIBREF8 For simplicity multiple learning methods will be referred to as multimodal learning hereafter BIBREF13 Researchers often describe multimodal learning as a method that incorporates characteristics from several methods algorithms and ideas This can include ideas from two or more learning approaches in order to create a robust implementation to solve an uncommon problem or improve a solution BIBREF8 BIBREF14 BIBREF15 BIBREF16 BIBREF17 black In this survey we focus primarily on reviewing recent works that aim to solve the challenge of texttoimage synthesis using generative adversarial networks GANs In order to provide a clear roadmap we propose a taxonomy to summarize reviewed GANs into four major categories Our review will elaborate the motivations of methods in each category analyze typical models their network architectures and possible drawbacks for further improvement The visual abstract of the survey and the list of reviewed GAN frameworks is shown in Figure FIGREF8 black The remainder of the survey is organized as follows Section 2 presents a brief summary of existing works on subjects similar to that of this paper and highlights the key distinctions making ours unique Section 3 gives a short introduction to GANs and some preliminary concepts related to image generation as they are the engines that make texttoimage synthesis possible and are essential building blocks to achieve photorealistic images from text descriptions Section 4 proposes a taxonomy to summarize GAN based texttoimage synthesis discusses models and architectures of novel works focused solely on texttoimage synthesis This section will also draw key contributions from these works in relation to their applications Section 5 reviews GAN based texttoimage synthesis benchmarks performance metrics and comparisons including a simple review of GANs for other applications In section 6 we conclude with a brief summary and outline ideas for future interesting developments in the field of texttoimage synthesis With the growth and success of GANs deep convolutional decoder networks and multimodal learning methods these techniques were some of the first procedures which aimed to solve the challenge of image synthesis Many engineers and scientists in computer vision and AI have contributed through extensive studies and experiments with numerous proposals and publications detailing their contributions Because GANs introduced by BIBREF9 are emerging research topics their practical applications to image synthesis are still in their infancy Recently many new GAN architectures and designs have been proposed to use GANs for different applications eg using GANs to generate sentimental texts BIBREF18 or using GANs to transform natural images into cartoons BIBREF19 Although GANs are becoming increasingly popular very 
few survey papers currently exist to summarize and outline contemporaneous technical innovations and contributions of different GAN architectures BIBREF20 BIBREF21 Survey papers specifically attuned to analyzing different contributions to texttoimage synthesis using GANs are even more scarce We have thus found two surveys BIBREF6 BIBREF7 on image synthesis using GANs which are the two most closely related publications to our survey objective In the following paragraphs we briefly summarize each of these surveys and point out how our objectives differ from theirs In BIBREF6 the authors provide an overview of image synthesis using GANs In this survey the authors discuss the motivations for research on image synthesis and introduce some background information on the history of GANs including a section dedicated to core concepts of GANs namely generators discriminators and the minmax game analogy and some enhancements to the original GAN model such as conditional GANs addition of variational autoencoders etc In this survey we will carry out a similar review of the background knowledge because the understanding of these preliminary concepts is paramount for the rest of the paper Three types of approaches for image generation are reviewed including direct methods single generator and discriminator hierarchical methods two or more generatordiscriminator pairs each with a different goal and iterative methods each generatordiscriminator pair generates a gradually higherresolution image Following the introduction BIBREF6 discusses methods for texttoimage and imagetoimage synthesis respectively and also describes several evaluation metrics for synthetic images including inception scores and Frechet Inception Distance FID and explains the significance of the discriminators acting as learned loss functions as opposed to fixed loss functions Different from the above survey which has a relatively broad scope in GANs our objective is heavily focused on texttoimage synthesis Although this topic texttoimage synthesis has indeed been covered in BIBREF6 they did so in a much less detailed fashion mostly listing the many different works in a timesequential order In comparison we will review several representative methods in the field and outline their models and contributions in detail Similarly to BIBREF6 the second survey paper BIBREF7 begins with a standard introduction addressing the motivation of image synthesis and the challenges it presents followed by a section dedicated to core concepts of GANs and enhancements to the original GAN model In addition the paper covers the review of two types of applications 1 unconstrained applications of image synthesis such as superresolution image inpainting etc and 2 constrained image synthesis applications namely imagetoimage texttoimage and sketchto image and also discusses image and video editing using GANs Again the scope of this paper is intrinsically comprehensive while we focus specifically on texttoimage and go into more detail regarding the contributions of novel stateoftheart models Other surveys have been published on related matters mainly related to the advancements and applications of GANs BIBREF22 BIBREF23 but we have not found any prior works which focus specifically on texttoimage synthesis using GANs To our knowledge this is the first paper to do so black In this section we first introduce preliminary knowledge of GANs and one of its commonly used variants conditional GAN ie cGAN which is the building block for many GAN based texttoimage synthesis 
models After that we briefly separate GAN based texttoimage synthesis into two types Simple GAN frameworks vs Advanced GAN frameworks and discuss why advanced GAN architecture for image synthesis black Notice that the simple vs advanced GAN framework separation is rather too brief our taxonomy in the next section will propose a taxonomy to summarize advanced GAN frameworks into four categories based on their objective and designs Before moving on to a discussion and analysis of works applying GANs for texttoimage synthesis there are some preliminary concepts enhancements of GANs datasets and evaluation metrics that are present in some of the works described in the next section and are thus worth introducing As stated previously GANs were introduced by Ian Goodfellow et al BIBREF9 in 2014 and consist of two deep neural networks a generator and a discriminator which are trained independently with conflicting goals The generator aims to generate samples closely related to the original data distribution and fool the discriminator while the discriminator aims to distinguish between samples from the generator model and samples from the true data distribution by calculating the probability of the sample coming from either source A conceptual view of the generative adversarial network GAN architecture is shown in Figure FIGREF11 The training of GANs is an iterative process that with each iteration updates the generator and the discriminator with the goal of each defeating the other leading each model to become increasingly adept at its specific task until a threshold is reached This is analogous to a minmax game between the two models according to the following equation In Eq DISPLAYFORM10 x denotes a multidimensional sample eg an image and z denotes a multidimensional latent space vector eg a multidimensional data point following a predefined distribution function such as that of normal distributions Dtheta d denotes a discriminator function controlled by parameters theta d which aims to classify a sample into a binary space Gtheta g denotes a generator function controlled by parameters theta g which aims to generate a sample from some latent space vector For example Gtheta gz means using a latent vector z to generate a syntheticfake image and Dtheta dx means to classify an image x as binary output ie truefalse or 10 In the GAN setting the discriminator Dtheta d is learned to distinguish a genuinetrue image labeled as 1 from fake images labeled as 0 Therefore given a true image x the ideal output from the discriminator Dtheta dx would be 1 Given a fake image generated from the generator Gtheta gz the ideal prediction from the discriminator Dtheta dGtheta gz would be 0 indicating the sample is a fake image Following the above definition the min max objective function in Eq DISPLAYFORM10 aims to learn parameters for the discriminator theta d and generator theta g to reach an optimization goal The discriminator intends to differentiate true vs fake images with maximum capability max theta d whereas the generator intends to minimize the difference between a fake image vs a true image min theta g In other words the discriminator sets the characteristics and the generator produces elements often images iteratively until it meets the attributes set forth by the discriminator GANs are often used with images and other visual elements and are notoriously efficient in generating compelling and convincing photorealistic images Most recently GANs were used to generate an original painting in an unsupervised 
fashion BIBREF24 The following sections go into further detail regarding how the generator and discriminator are trained in GANs Generator In image synthesis the generator network can be thought of as a mapping from one representation space latent space to another actual data BIBREF21 When it comes to image synthesis all of the images in the data space fall into some distribution in a very complex and highdimensional feature space Sampling from such a complex space is very difficult so GANs instead train a generator to create synthetic images from a much more simple feature space usually random noise called the latent space The generator network performs upsampling of the latent space and is usually a deep neural network consisting of several convolutional andor fully connected layers BIBREF21 The generator is trained using gradient descent to update the weights of the generator network with the aim of producing data in our case images that the discriminator classifies as real Discriminator The discriminator network can be thought of as a mapping from image data to the probability of the image coming from the real data space and is also generally a deep neural network consisting of several convolution andor fully connected layers However the discriminator performs downsampling as opposed to upsampling Like the generator it is trained using gradient descent but its goal is to update the weights so that it is more likely to correctly classify images as real or fake In GANs the ideal outcome is for both the generators and discriminators cost functions to converge so that the generator produces photorealistic images that are indistinguishable from real data and the discriminator at the same time becomes an expert at differentiating between real and synthetic data This however is not possible since a reduction in cost of one model generally leads to an increase in cost of the other This phenomenon makes training GANs very difficult and training them simultaneously both models performing gradient descent in parallel often leads to a stable orbit where neither model is able to converge To combat this the generator and discriminator are often trained independently In this case the GAN remains the same but there are different training stages In one stage the weights of the generator are kept constant and gradient descent updates the weights of the discriminator and in the other stage the weights of the discriminator are kept constant while gradient descent updates the weights of the generator This is repeated for some number of epochs until a desired low cost for each model is reached BIBREF25 Conditional Generative Adversarial Networks cGAN are an enhancement of GANs proposed by BIBREF26 shortly after the introduction of GANs by BIBREF9 The objective function of the cGAN is defined in Eq DISPLAYFORM13 which is very similar to the GAN objective function in Eq DISPLAYFORM10 except that the inputs to both discriminator and generator are conditioned by a class label y The main technical innovation of cGAN is that it introduces an additional input or inputs to the original GAN model allowing the model to be trained on information such as class labels or other conditioning variables as well as the samples themselves concurrently Whereas the original GAN was trained only with samples from the data distribution resulting in the generated sample reflecting the general data distribution cGAN enables directing the model to generate more tailored outputs In Figure FIGREF14 the condition vector is the class 
label text string Red bird which is fed to both the generator and discriminator It is important however that the condition vector is related to the real data If the model in Figure FIGREF14 was trained with the same set of real data red birds but the condition text was Yellow fish the generator would learn to create images of red birds when conditioned with the text Yellow fish Note that the condition vector in cGAN can come in many forms such as texts not just limited to the class label Such a unique design provides a direct solution to generate images conditioned by predefined specifications As a result cGAN has been used in texttoimage synthesis since the very first day of its invention although modern approaches can deliver much better texttoimage synthesis results black In order to generate images from text one simple solution is to employ the conditional GAN cGAN designs and add conditions to the training samples such that the GAN is trained with respect to the underlying conditions Several pioneer works have followed similar designs for texttoimage synthesis black An essential disadvantage of using cGAN for texttoimage synthesis is that that it cannot handle complicated textual descriptions for image generation because cGAN uses labels as conditions to restrict the GAN inputs If the text inputs have multiple keywords or long text descriptions they cannot be used simultaneously to restrict the input Instead of using text as conditions another two approaches BIBREF8 BIBREF16 use text as input features and concatenate such features with other features to train discriminator and generator as shown in Figure FIGREF15b and c To ensure text being used as GAN input a feature embedding or feature representation learning BIBREF29 BIBREF30 function varphi is often introduced to convert input text as numeric features which are further concatenated with other features to train GANs black Motivated by the GAN and conditional GAN cGAN design many GAN based frameworks have been proposed to generate images with different designs and architectures such as using multiple discriminators using progressively trained discriminators or using hierarchical discriminators Figure FIGREF17 outlines several advanced GAN frameworks in the literature In addition to these frameworks many news designs are being proposed to advance the field with rather sophisticated designs For example a recent work BIBREF37 proposes to use a pyramid generator and three independent discriminators blackeach focusing on a different aspect of the images to lead the generator towards creating images that are photorealistic on multiple levels Another recent publication BIBREF38 proposes to use discriminator to measure semantic relevance between image and text instead of class prediction like most discriminator in GANs does resulting a new GAN structure outperforming text conditioned auxiliary classifier TACGAN BIBREF16 and generating diverse realistic and relevant to the input text regardless of class black In the following section we will first propose a taxonomy that summarizes advanced GAN frameworks for texttoimage synthesis and review most recent proposed solutions to the challenge of generating photorealistic images conditioned on natural language text descriptions using GANs The solutions we discuss are selected based on relevance and quality of contributions Many publications exist on the subject of imagegeneration using GANs but in this paper we focus specifically on models for texttoimage synthesis with the review emphasizing 
on the model and contributions for texttoimage synthesis At the end of this section we also briefly review methods using GANs for other imagesynthesis applications black In this section we propose a taxonomy to summarize advanced GAN based texttoimage synthesis frameworks as shown in Figure FIGREF24 The taxonomy organizes GAN frameworks into four categories including Semantic Enhancement GANs Resolution Enhancement GANs Diversity Enhancement GANs and Motion Enhancement GAGs Following the proposed taxonomy each subsection will introduce several typical frameworks and address their techniques of using GANS to solve certain aspects of the texttomage synthesis challenges black Although the ultimate goal of TexttoImage synthesis is to generate images closely related to the textual descriptions the relevance of the images to the texts are often validated from different perspectives due to the inherent diversity of human perceptions For example when generating images matching to the description rose flowers some users many know the exact type of flowers they like and intend to generate rose flowers with similar colors Other users may seek to generate high quality rose flowers with a nice background eg garden The third group of users may be more interested in generating flowers similar to rose but with different colors and visual appearance eg roses begonia and peony The fourth group of users may want to not only generate flower images but also use them to form a meaningful action eg a video clip showing flower growth performing a magic show using those flowers or telling a love story using the flowers blackFrom the texttoImage synthesis point of view the first group of users intend to precisely control the semantic of the generated images and their goal is to match the texts and images at the semantic level The second group of users are more focused on the resolutions and the qualify of the images in addition to the requirement that the images and texts are semantically related For the third group of users their goal is to diversify the output images such that their images carry diversified visual appearances and are also semantically related The fourth user group adds a new dimension in image synthesis and aims to generate sequences of images which are coherent in temporal order ie capture the motion information black Based on the above descriptions we categorize GAN based TexttoImage Synthesis into a taxonomy with four major categories as shown in Fig FIGREF24 Semantic Enhancement GANs Semantic enhancement GANs represent pioneer works of GAN frameworks for texttoimage synthesis The main focus of the GAN frameworks is to ensure that the generated images are semantically related to the input texts This objective is mainly achieved by using a neural network to encode texts as dense features which are further fed to a second network to generate images matching to the texts Resolution Enhancement GANs Resolution enhancement GANs mainly focus on generating high qualify images which are semantically matched to the texts This is mainly achieved through a multistage GAN framework where the outputs from earlier stage GANs are fed to the second or later stage GAN to generate better qualify images Diversity Enhancement GANs Diversity enhancement GANs intend to diversify the output images such that the generated images are not only semantically related but also have different types and visual appearance This objective is mainly achieved through an additional component to estimate semantic relevance between 
generated images and texts in order to maximize the output diversity Motion Enhancement GANs Motion enhancement GANs intend to add a temporal dimension to the output images such that they can form meaningful actions with respect to the text descriptions This goal mainly achieved though a twostep process which first generates images matching to the actions of the texts followed by a mapping or alignment procedure to ensure that images are coherent in the temporal order black In the following we will introduce how these GAN frameworks evolve for texttoimage synthesis and will also review some typical methods of each category black Semantic relevance is one the of most important criteria of the texttoimage synthesis For most GNAs discussed in this survey they are required to generate images semantically related to the text descriptions However the semantic relevance is a rather subjective measure and images are inherently rich in terms of its semantics and interpretations Therefore many GANs are further proposed to enhance the texttoimage synthesis from different perspectives In this subsection we will review several classical approaches which are commonly served as texttoimage synthesis baseline black Deep convolution generative adversarial network DCGAN BIBREF8 represents the pioneer work for texttoimage synthesis using GANs Its main goal is to train a deep convolutional generative adversarial network DCGAN on text features During this process these text features are encoded by another neural network This neural network is a hybrid convolutional recurrent network at the character level Concurrently both neural networks have also feedforward inference in the way they condition text features Generating realistic images automatically from natural language text is the motivation of several of the works proposed in this computer vision field However actual artificial intelligence AI systems are far from achieving this task BIBREF8 BIBREF39 BIBREF40 BIBREF41 BIBREF42 BIBREF22 BIBREF26 Lately recurrent neural networks led the way to develop frameworks that learn discriminatively on text features At the same time generative adversarial networks GANs began recently to show some promise on generating compelling images of a whole host of elements including but not limited to faces birds flowers and noncommon images such as room interiorsBIBREF8 DCGAN is a multimodal learning model that attempts to bridge together both of the above mentioned unsupervised machine learning algorithms the recurrent neural networks RNN and generative adversarial networks GANs with the sole purpose of speeding the generation of texttoimage synthesis black Deep learning shed some light to some of the most sophisticated advances in natural language representation image synthesis BIBREF7 BIBREF8 BIBREF43 BIBREF35 and classification of generic data BIBREF44 However a bulk of the latest breakthroughs in deep learning and computer vision were related to supervised learning BIBREF8 Even though natural language and image synthesis were part of several contributions on the supervised side of deep learning unsupervised learning saw recently a tremendous rise in input from the research community specially on two subproblems textbased natural language and image synthesis BIBREF45 BIBREF14 BIBREF8 BIBREF46 BIBREF47 These subproblems are typically subdivided as focused research areas DCGANs contributions are mainly driven by these two research areas In order to generate plausible images from natural language DCGAN contributions revolve 
around developing a straightforward yet effective GAN architecture and training strategy that allows natural text to image synthesis These contributions are primarily tested on the CaltechUCSD Birds and Oxford102 Flowers datasets Each image in these datasets carry five text descriptions These text descriptions were created by the research team when setting up the evaluation environment The DCGANs model is subsequently trained on several subcategories Subcategories in this research represent the training and testing sub datasets The performance shown by these experiments display a promising yet effective way to generate images from textual natural language descriptions BIBREF8 black Following the pioneer DCGAN framework BIBREF8 many researches propose revised network structures eg different discriminaotrs in order to improve images with better semantic relevance to the texts Based on the deep convolutional adversarial network DCGAN network architecture GANCLS with imagetext matching discriminator GANINT learned with text manifold interpolation and GANINTCLS which combines both are proposed to find semantic match between text and image Similar to the DCGAN architecture an adaptive loss function ie Perceptual Loss BIBREF48 is proposed for semantic image synthesis which can synthesize a realistic image that not only matches the target text description but also keep the irrelavant featureseg background from source images BIBREF49 Regarding to the Perceptual Losses three loss functions ie Pixel reconstruction loss Activation reconstruction loss and Texture reconstruction loss are proposed in BIBREF50 in which they construct the network architectures based on the DCGAN ie GANINTCLSPixel GANINTCLSVGG and GANINTCLSGram with respect to three losses In BIBREF49 a residual transformation unit is added in the network to retain similar structure of the source image black Following the BIBREF49 and considering the features in early layers address background while foreground is obtained in latter layers in CNN a pair of discriminators with different architectures ie PairedD GAN is proposed to synthesize background and foreground from a source image seperately BIBREF51 Meanwhile the skipconnection in the generator is employed to more precisely retain background information in the source image black When synthesising images most texttoimage synthesis methods consider each output image as one single unit to characterize its semantic relevance to the texts This is likely problematic because most images naturally consist of two crucial components foreground and background Without properly separating these two components its hard to characterize the semantics of an image if the whole image is treated as a single unit without proper separation black In order to enhance the semantic relevance of the images a multiconditional GAN MCGAN BIBREF52 is proposed to synthesize a target image by combining the background of a source image and a textdescribed foreground object which does not exist in the source image A unique feature of MCGAN is that it proposes a synthesis block in which the background feature is extracted from the given image without nonlinear function ie only using convolution and batch normalization and the foreground feature is the feature map from the previous layer black Because MCGAN is able to properly model the background and foreground of the generated images a unique strength of MCGAN is that users are able to provide a base image and MCGAN is able to preserve the background information of the 
base image to generate new images black Due to the fact that training GANs will be much difficult when generating highresolution images a two stage GAN ie stackGAN is proposed in which rough imagesie lowresolution images are generated in stageI and refined in stageII To further improve the quality of generated images the second version of StackGAN ie Stack is proposed to use multistage GANs to generate multiscale images A colorconsistency regularization term is also added into the loss to keep the consistency of images in different scales black While stackGAN and StackGAN are both built on the global sentence vector AttnGAN is proposed to use attention mechanism ie Deep Attentional Multimodal Similarity Model DAMSM to model the multilevel information ie word level and sentence level into GANs In the following StackGAN StackGAN and AttnGAN will be explained in detail black Recently Dynamic Memory Generative Adversarial Network ie DMGANBIBREF53 which uses a dynamic memory component is proposed to focus on refiningthe initial generated image which is the key to the success of generating high quality images In 2017 Zhang et al proposed a model for generating photorealistic images from text descriptions called StackGAN Stacked Generative Adversarial Network BIBREF33 In their work they define a twostage model that uses two cascaded GANs each corresponding to one of the stages The stage I GAN takes a text description as input converts the text description to a text embedding containing several conditioning variables and generates a lowquality 64x64 image with rough shapes and colors based on the computed conditioning variables The stage II GAN then takes this lowquality stage I image as well as the same text embedding and uses the conditioning variables to correct and add more detail to the stage I result The output of stage II is a photorealistic 256times256 image that resembles the text description with compelling accuracy One major contribution of StackGAN is the use of cascaded GANs for texttoimage synthesis through a sketchrefinement process By conditioning the stage II GAN on the image produced by the stage I GAN and text description the stage II GAN is able to correct defects in the stage I output resulting in highquality 256x256 images Prior works have utilized stacked GANs to separate the image generation process into structure and style BIBREF42 multiple stages each generating lowerlevel representations from higherlevel representations of the previous stage BIBREF35 and multiple stages combined with a laplacian pyramid approach BIBREF54 which was introduced for image compression by P Burt and E Adelson in 1983 and uses the differences between consecutive downsamples of an original image to reconstruct the original image from its downsampled version BIBREF55 However these works did not use text descriptions to condition their generator models Conditioning Augmentation is the other major contribution of StackGAN Prior works transformed the natural language text description into a fixed text embedding containing static conditioning variables which were fed to the generator BIBREF8 StackGAN does this and then creates a Gaussian distribution from the text embedding and randomly selects variables from the Gaussian distribution to add to the set of conditioning variables during training This encourages robustness by introducing small variations to the original text embedding for a particular training image while keeping the training image that the generated output is compared to the same The 
result is that the trained model produces more diverse images in the same distribution when using Conditioning Augmentation than the same model using a fixed text embedding BIBREF33 Proposed by the same users as StackGAN StackGAN is also a stacked GAN model but organizes the generators and discriminators in a treelike structure BIBREF47 with multiple stages The first stage combines a noise vector and conditioning variables with Conditional Augmentation introduced in BIBREF33 for input to the first generator which generates a lowresolution image 64times 64 by default this can be changed depending on the desired number of stages Each following stage uses the result from the previous stage and the conditioning variables to produce gradually higherresolution images These stages do not use the noise vector again as the creators assume that the randomness it introduces is already preserved in the output of the first stage The final stage produces a 256times 256 highquality image StackGAN introduces the joint conditional and unconditional approximation in their designs BIBREF47 The discriminators are trained to calculate the loss between the image produced by the generator and the conditioning variables measuring how accurately the image represents the description as well as the loss between the image and real images probability of the image being real or fake The generators then aim to minimize the sum of these losses improving the final result Attentional Generative Adversarial Network AttnGAN BIBREF10 is very similar in terms of its structure to StackGAN BIBREF47 discussed in the previous section but some novel components are added Like previous works BIBREF56 BIBREF8 BIBREF33 BIBREF47 a text encoder generates a text embedding with conditioning variables based on the overall sentence Additionally the text encoder generates a separate text embedding with conditioning variables based on individual words This process is optimized to produce meaningful variables using a bidirectional recurrent neural network BRNN more specifically bidirectional Long Short Term Memory LSTM BIBREF57 which for each word in the description generates conditions based on the previous word as well as the next word bidirectional The first stage of AttnGAN generates a lowresolution image based on the sentencelevel text embedding and random noise vector The output is fed along with the wordlevel text embedding to an attention model which matches the wordlevel conditioning variables to regions of the stage I image producing a wordcontext matrix This is then fed to the next stage of the model along with the raw previous stage output Each consecutive stage works in the same manner but produces gradually higherresolution images conditioned on the previous stage Two major contributions were introduced in AttnGAN the attentional generative network and the Deep Attentional Multimodal Similarity Model DAMSM BIBREF47 The attentional generative network matches specific regions of each stages output image to conditioning variables from the wordlevel text embedding This is a very worthy contribution allowing each consecutive stage to focus on specific regions of the image independently adding attentional details region by region as opposed to the whole image The DAMSM is also a key feature introduced by AttnGAN which is used after the result of the final stage to calculate the similarity between the generated image and the text embedding at both the sentence level and the more finegrained word level Table TABREF48 shows scores from 
different metrics for StackGAN StackGAN AttnGAN and HDGAN on the CUB Oxford and COCO datasets The table shows that AttnGAN outperforms the other models in terms of IS on the CUB dataset by a small amount and greatly outperforms them on the COCO dataset Hierarchicallynested adversarial network HDGAN is a method proposed by BIBREF36 and its main objective is to tackle the difficult problem of dealing with photographic images from semantic text descriptions These semantic text descriptions are applied on images from diverse datasets This method introduces adversarial objectives nested inside hierarchically oriented networks BIBREF36 Hierarchical networks helps regularize midlevel manifestations In addition to regularize midlevel manifestations it assists the training of the generator in order to capture highly complex still media elements These elements are captured in statistical order to train the generator based on settings extracted directly from the image The latter is an ideal scenario However this paper aims to incorporate a singlestream architecture This singlestream architecture functions as the generator that will form an optimum adaptability towards the jointed discriminators Once jointed discriminators are setup in an optimum manner the singlestream architecture will then advance generated images to achieve a much higher resolution BIBREF36 The main contributions of the HDGANs include the introduction of a visualsemantic similarity measure BIBREF36 This feature will aid in the evaluation of the consistency of generated images In addition to checking the consistency of generated images one of the key objectives of this step is to test the logical consistency of the end product BIBREF36 The end product in this case would be images that are semantically mapped from textbased natural language descriptions to each area on the picture eg a wing on a bird or petal on a flower Deep learning has created a multitude of opportunities and challenges for researchers in the computer vision AI field Coupled with GAN and multimodal learning architectures this field has seen tremendous growth BIBREF8 BIBREF39 BIBREF40 BIBREF41 BIBREF42 BIBREF22 BIBREF26 Based on these advancements HDGANs attempt to further extend some desirable and less common features when generating images from textual natural language BIBREF36 In other words it takes sentences and treats them as a hierarchical structure This has some positive and negative implications in most cases For starters it makes it more complex to generate compelling images However one of the key benefits of this elaborate process is the realism obtained once all processes are completed In addition one common feature added to this process is the ability to identify parts of sentences with bounding boxes If a sentence includes common characteristics of a bird it will surround the attributes of such bird with bounding boxes In practice this should happen if the desired image have other elements such as human faces eg eyes hair etc flowers eg petal size color etc or any other inanimate object eg a table a mug etc Finally HDGANs evaluated some of its claims on common ideal texttoimage datasets such as CUB COCO and Oxford102 BIBREF8 BIBREF36 BIBREF39 BIBREF40 BIBREF41 BIBREF42 BIBREF22 BIBREF26 These datasets were first utilized on earlier works BIBREF8 and most of them sport modified features such image annotations labels or descriptions The qualitative and quantitative results reported by researchers in this study were far superior of earlier works in this 
same field of computer vision AI black In this subsection we introduce texttoimage synthesis methods which try to maximize the diversity of the output images based on the text descriptions black Two issues arise in the traditional GANs BIBREF58 for image synthesis 1 scalabilirty problem traditional GANs cannot predict a large number of image categories and 2 diversity problem images are often subject to onetomany mapping so one image could be labeled as different tags or being described using different texts To address these problems GAN conditioned on additional information eg cGAN is an alternative solution However although cGAN and many previously introduced approaches are able to generate images with respect to the text descriptions they often output images with similar types and visual appearance black Slightly different from the cGAN auxiliary classifier GANs ACGAN BIBREF27 proposes to improve the diversity of output images by using an auxiliary classifier to control output images The overall structure of ACGAN is shown in Fig FIGREF15c In ACGAN every generated image is associated with a class label in addition to the truefake label which are commonly used in GAN or cGAN The discriminator of ACGAN not only outputs a probability distribution over sources ie whether the image is true or fake it also output a probability distribution over the class label ie predict which class the image belong to black By using an auxiliary classifier layer to predict the class of the image ACGAN is able to use the predicted class labels of the images to ensure that the output consists of images from different classes resulting in diversified synthesis images The results show that ACGAN can generate images with high diversity black Building on the ACGAN TACGAN BIBREF59 is proposed to replace the class information with textual descriptions as the input to perform the task of text to image synthesis The architecture of TACGAN is shown in Fig FIGREF15d which is similar to ACGAN Overall the major difference between TACGAN and ACGAN is that TACGAN conditions the generated images on text descriptions instead of on a class label This design makes TACGAN more generic for image synthesis black For TACGAN it imposes restrictions on generated images in both texts and class labels The input vector of TACGANs generative network is built based on a noise vector and embedded vector representation of textual descriptions The discriminator of TACGAN is similar to that of the ACGAN which not only predicts whether the image is fake or not but also predicts the label of the images A minor difference of TACGANs discriminator compared to that of the ACGAN is that it also receives text information as input before performing its classification black The experiments and validations on the Oxford102 flowers dataset show that the results produced by TACGAN are slightly better that other approaches including GANINTCLS and StackGAN black In order to improve the diversity of the output images both ACGAN and TACGANs discriminators predict class labels of the synthesised images This process likely enforces the semantic diversity of the images but class labels are inherently restrictive in describing image semantics and images described by text can be matched to multiple labels Therefore instead of predicting images class labels an alternative solution is to directly quantify their semantic relevance black The architecture of TextSeGAN is shown in Fig FIGREF15e In order to directly quantify semantic relevance TextSeGAN BIBREF28 adds a 
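The auxiliary-classifier idea reduces to a discriminator with two heads on shared features: one for real versus fake and one for the class label. A minimal sketch follows; the feature dimension, class count, and omitted convolutional trunk are placeholders, and TAC-GAN's difference (conditioning on a text embedding rather than a class label) is only noted in a comment.

```python
import torch
import torch.nn as nn

class ACGANDiscriminatorHead(nn.Module):
    """Two output heads on shared discriminator features, in the spirit of AC-GAN.

    The convolutional trunk producing `features` is omitted; `feat_dim` and
    `n_classes` are illustrative placeholders.
    """
    def __init__(self, feat_dim: int = 512, n_classes: int = 10):
        super().__init__()
        self.source_head = nn.Linear(feat_dim, 1)         # real vs. fake logit
        self.class_head = nn.Linear(feat_dim, n_classes)  # auxiliary class logits

    def forward(self, features: torch.Tensor):
        real_fake_logit = self.source_head(features)   # probability over sources after sigmoid
        class_logits = self.class_head(features)       # probability over classes after softmax
        return real_fake_logit, class_logits

# TAC-GAN differs mainly in the conditioning: the class label is replaced by an
# embedded text description fed to both the generator and the discriminator.
```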
regression layer to estimate the semantic relevance between the image and text instead of a classifier layer for predicting labels The estimated semantic relevance is a fractional value ranging between 0 and 1 with a higher value reflecting better semantic relevance between the image and text Due to this unique design an inherent advantage of TextSeGAN is that the generated images are not limited to certain classes and semantically match the text input Experiments and validations on the Oxford102 flower dataset show that TextSeGAN can generate diverse images that are semantically relevant to the input text In addition the results of TextSeGAN show improved inception score compared to other approaches including GANINTCLS StackGAN TACGAN and HDGAN (a minimal sketch of such a relevance scoring head is given at the end of this passage) Due to the inherent complexity of the visual images and the diversity of text descriptions ie the same words could imply different meanings it is difficult to precisely match the texts to the visual images at the semantic level Most methods we have discussed so far employ a direct text to image generation process but there is no validation of how well generated images comply with the text in a reverse fashion To ensure semantic consistency and diversity MirrorGAN BIBREF60 employs a mirror structure which reversely learns from generated images to output texts an imagetotext process to further validate whether the generated images are indeed consistent with the input texts MirrorGAN includes three modules a semantic text embedding module STEM a globallocal collaborative attentive module for cascaded image generation GLAM and a semantic text regeneration and alignment module STREAM The back to back TexttoImage T2I and ImagetoText I2T passes are combined to progressively enhance the diversity and semantic consistency of the generated images In order to enhance the diversity of the output image Scene Graph GAN BIBREF61 proposes to use visual scene graphs to describe the layout of the objects allowing users to precisely specify the relationships between objects in the images In order to convert the visual scene graph into input for the GAN to generate images this method uses graph convolution to process input graphs It computes a scene layout by predicting bounding boxes and segmentation masks for objects After that it converts the computed layout to an image with a cascaded refinement network Instead of focusing on generating static images another line of texttoimage synthesis research focuses on generating videos ie sequences of images from texts In this context the synthesised videos are often useful resources for automated assistance or storytelling One early interesting work on motion enhancement GANs is to generate spoofed speech and lipsync videos or a talking face of Barack Obama ie ObamaNet based on text input BIBREF62 This framework consists of three parts ie text to speech using Char2Wav mouth shape representation synced to the audio using a timedelayed LSTM and video generation conditioned on the mouth shape using a UNet architecture Although the results seem promising ObamaNet only models the mouth region and the videos are not generated from noise so it can be regarded as video prediction rather than video generation Another meaningful trial of using synthesised videos for automated assistance is to translate spoken language eg text into sign language video sequences ie T2S BIBREF63 This is often achieved through a two step process converting texts as meaningful units to generate images followed by a learning
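As referenced above, TextSeGAN replaces label prediction with a relevance score in the discriminator. The following is a minimal sketch of such a relevance-scoring head; the concatenation-plus-MLP fusion and the dimensions are assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SemanticRelevanceHead(nn.Module):
    """Regression head scoring image-text semantic relevance in [0, 1].

    A sketch of the idea behind TextSeGAN's discriminator output; the feature
    dimensions and the fusion by concatenation are illustrative assumptions.
    """
    def __init__(self, img_dim: int = 512, txt_dim: int = 256):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(img_dim + txt_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
            nn.Sigmoid(),  # fractional relevance between 0 and 1
        )

    def forward(self, img_feat: torch.Tensor, txt_emb: torch.Tensor) -> torch.Tensor:
        return self.scorer(torch.cat([img_feat, txt_emb], dim=-1)).squeeze(-1)
```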
component to arrange images into sequential order for best representation More specifically using RNN based machine translation methods texts are translated into sign language gloss sequences Then glosses are mapped to skeletal pose sequences using a lookuptable To generate videos a conditional DCGAN with the input of concatenation of latent representation of the image for a base pose and skeletal pose information is built black In BIBREF64 a texttovideo model T2V is proposed based on the cGAN in which the input is the isometric Gaussian noise with the textgist vector served as the generator A key component of generating videos from text is to train a conditional generative model to extract both static and dynamic information from text followed by a hybrid framework combining a Variational Autoencoder VAE and a Generative Adversarial Network GAN black More specifically T2V relies on two types of features static features and dynamic features to generate videos Static features called gist are used to sketch textconditioned background color and object layout structure Dynamic features on the other hand are considered by transforming input text into an image filter which eventually forms the video generator which consists of three entangled neural networks The textgist vector is generated by a gist generator which maintains static information eg background and a text2filter which captures the dynamic information ie actions in the text to generate videos black As demonstrated in the paper BIBREF64 the generated videos are semantically related to the texts but have a rather low quality eg only 64 times 64 resolution black Different from T2V which generates videos from a single text StoryGAN aims to produce dynamic scenes consistent of specified texts ie story written in a multisentence paragraph using a sequential GAN model BIBREF65 Story encoder context encoder and discriminators are the main components of this model By using stochastic sampling the story encoder intends to learn an lowdimensional embedding vector for the whole story to keep the continuity of the story The context encoder is proposed to capture contextual information during sequential image generation based on a deep RNN Two discriminators of StoryGAN are image discriminator which evaluates the generated images and story discriminator which ensures the global consistency black The experiments and comparisons on CLEVR dataset and Pororo cartoon dataset which are originally used for visual question answering show that StoryGAN improves the generated video qualify in terms of Structural Similarity Index SSIM visual qualify consistence and relevance the last three measure are based on human evaluation Computer vision applications have strong potential for industries including but not limited to the medical government military entertainment and online social media fields BIBREF7 BIBREF66 BIBREF67 BIBREF68 BIBREF69 BIBREF70 Texttoimage synthesis is one such application in computer vision AI that has become the main focus in recent years due to its potential for providing beneficial properties and opportunities for a wide range of applicable areas Texttoimage synthesis is an application byproduct of deep convolutional decoder networks in combination with GANs BIBREF7 BIBREF8 BIBREF10 Deep convolutional networks have contributed to several breakthroughs in image video speech and audio processing This learning method intends among other possibilities to help translate sequential text descriptions to images supplemented by one or many 
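SSIM, the metric cited above for judging StoryGAN's outputs, compares local luminance, contrast, and structure between two images. A toy example with scikit-image (random arrays rather than actual model outputs; per-frame scores would normally be averaged over a generated video) is:

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

# Compare a "generated" frame against a reference frame (grayscale toy data).
reference = np.random.rand(64, 64)
generated = reference + 0.05 * np.random.randn(64, 64)

score = ssim(reference, generated, data_range=generated.max() - generated.min())
print(f"SSIM: {score:.3f}")  # 1.0 means structurally identical images
```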
additional methods Algorithms and methods developed in the computer vision field have allowed researchers in recent years to create realistic images from plain sentences Advances in computer vision deep convolutional nets and semantic units have shone a light and redirected focus to this research area of texttoimage synthesis having as its prime directive to aid in the generation of compelling images with as much fidelity to text descriptions as possible To date models for generating synthetic images from textual natural language in research laboratories at universities and private companies have yielded compelling images of flowers and birds BIBREF8 Though flowers and birds are the most common objects studied thus far research has been applied to other classes as well For example there have been studies focused solely on human faces BIBREF7 BIBREF8 BIBREF71 BIBREF72 It is a fascinating time for computer vision AI and deep learning researchers and enthusiasts The consistent advancement in hardware and software and the contemporaneous development of computer vision AI research disrupt multiple industries These advances in technology allow for the extraction of several data types from a variety of sources For example image data captured from a variety of photoready devices such as smartphones and online social media services opened the door to the analysis of large amounts of media datasets BIBREF70 The availability of large media datasets allows new frameworks and algorithms to be proposed and tested on realworld data A summary of some reviewed methods and benchmark datasets used for validation is reported in Table TABREF43 In addition the performance of different GANs with respect to the benchmark datasets and performance metrics is reported in Table TABREF48 In order to synthesize images from text descriptions many frameworks have taken a minimalistic approach by creating small and backgroundless images BIBREF73 In most cases the experiments were conducted on simple datasets initially containing images of birds and flowers BIBREF8 contributed to these datasets by adding corresponding natural language text descriptions to subsets of the CUB MSCOCO and Oxford102 datasets which facilitated the work on texttoimage synthesis for several papers released more recently While most deep learning algorithms use the MNIST BIBREF74 dataset as the benchmark there are four main datasets that are commonly used for evaluation of proposed GAN models for texttoimage synthesis CUB BIBREF75 Oxford BIBREF76 COCO BIBREF77 and CIFAR10 BIBREF78 CUB BIBREF75 contains 200 bird categories with matching text descriptions and Oxford BIBREF76 contains 102 categories of flowers with between 40 and 258 images per category and matching text descriptions These datasets contain individual objects with the text description corresponding to that object making them relatively simple COCO BIBREF77 is much more complex containing 328k images with 91 different object types The CIFAR10 BIBREF78 dataset consists of 60000 32times32 colour images in 10 classes with 6000 images per class In contrast to CUB and Oxford whose images each contain an individual object COCOs images may contain multiple objects each with a label so there are many labels per image The total number of labels over the 328k images is 2.5 million BIBREF77 Several evaluation metrics are used for judging the images produced by texttoimage GANs Proposed by BIBREF25 the Inception Score IS calculates the entropy ie randomness of the conditional distribution obtained by applying the Inception Model introduced in
BIBREF79 and marginal distribution of a large set of generated images which should be low and high respectively for meaningful images Low entropy of conditional distribution means that the evaluator is confident that the images came from the data distribution and high entropy of the marginal distribution means that the set of generated images is diverse which are both desired features The IS score is then computed as the KLdivergence between the two entropies FCNscores BIBREF2 are computed in a similar manner relying on the intuition that realistic images generated by a GAN should be able to be classified correctly by a classifier trained on real images of the same distribution Therefore if the FCN classifier classifies a set of synthetic images accurately the image is probably realistic and the corresponding GAN gets a high FCN score Frechet Inception Distance FID BIBREF80 is the other commonly used evaluation metric and takes a different approach actually comparing the generated images to real images in the distribution A high FID means there is little relationship between statistics of the synthetic and real images and vice versa so lower FIDs are better black The performance of different GANs with respect to the benchmark datasets and performance metrics is reported in Table TABREF48 In addition Figure FIGREF49 further lists the performance of 14 GANs with respect to their Inception Scores IS While we gathered all the data we could find on scores for each model on the CUB Oxford and COCO datasets using IS FID FCN and human classifiers we unfortunately were unable to find certain data for AttnGAN and HDGAN missing in Table TABREF48 The best evaluation we can give for those with missing data is our own opinions by looking at examples of generated images provided in their papers In this regard we observed that HDGAN produced relatively better visual results on the CUB and Oxford datasets while AttnGAN produced far more impressive results than the rest on the more complex COCO dataset This is evidence that the attentional model and DAMSM introduced by AttnGAN are very effective in producing highquality images Examples of the best results of birds and plates of vegetables generated by each model are presented in Figures FIGREF50 and FIGREF51 respectively blackIn terms of inception score IS which is the metric that was applied to majority models except DCGAN the results in Table TABREF48 show that StackGAN only showed slight improvement over its predecessor StackGAN for texttoimage synthesis However StackGAN did introduce a very worthy enhancement for unconditional image generation by organizing the generators and discriminators in a treelike structure This indicates that revising the structures of the discriminators andor generators can bring a moderate level of improvement in texttoimage synthesis blackIn addition the results in Table TABREF48 also show that DMGAN BIBREF53 has the best performance followed by ObjGAN BIBREF81 Notice that both DMGAN and ObjGAN are most recently developed methods in the field both published in 2019 indicating that research in text to image synthesis is continuously improving the results for better visual perception and interception Technical wise DMGAN BIBREF53 is a model using dynamic memory to refine fuzzy image contents initially generated from the GAN networks A memory writing gate is used for DMGAN to select important text information and generate images based on he selected text accordingly On the other hand ObjGAN BIBREF81 focuses on object centered 
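For reference, the standard formulation of the Inception Score is the exponentiated expected KL divergence between the conditional class distribution p(y|x) and the marginal p(y), which matches the intuition above: confident, low-entropy conditionals together with a diverse, high-entropy marginal give a high score. A small NumPy sketch, assuming the class posteriors have already been obtained from a pretrained Inception classifier, is:

```python
import numpy as np

def inception_score(p_yx: np.ndarray, eps: float = 1e-12) -> float:
    """Inception Score from class posteriors of generated images.

    p_yx has shape (n_images, n_classes); each row is the softmax output of a
    pretrained Inception model for one generated image. The standard definition
    is IS = exp( E_x[ KL(p(y|x) || p(y)) ] ), with p(y) the marginal over the
    generated set.
    """
    p_y = p_yx.mean(axis=0, keepdims=True)  # marginal class distribution
    kl = (p_yx * (np.log(p_yx + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))

# Sharp and diverse posteriors give a high score; uniform posteriors give IS close to 1.
print(inception_score(np.eye(10)[np.random.randint(0, 10, size=1000)]))
```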
texttoimage synthesis The proposed framework of ObjGAN consists of a layout generation including a bounding box generator and a shape generator and an objectdriven attentive image generator The designs and advancement of DMGAN and ObjGAN indicate that research in texttoimage synthesis is advancing to put more emphasis on the image details and text semantics for better understanding and perception It is worth noting that although this survey mainly focuses on texttoimage synthesis there have been other applications of GANs in broader image synthesis field that we found fascinating and worth dedicating a small section to For example BIBREF72 used SemLatent GANs to generate images of faces based on facial attributes producing impressive results that at a glance could be mistaken for real faces BIBREF82 BIBREF70 and BIBREF83 demonstrated great success in generating text descriptions from images image captioning with great accuracy with BIBREF82 using an attentionbased model that automatically learns to focus on salient objects and BIBREF83 using deep visualsemantic alignments Finally there is a contribution made by StackGAN that was not mentioned in the dedicated section due to its relation to unconditional image generation as opposed to conditional namely a colorregularization term BIBREF47 This additional term aims to keep the samples generated from the same input at different stages more consistent in color which resulted in significantly better results for the unconditional model The recent advancement in texttoimage synthesis research opens the door to several compelling methods and architectures The main objective of texttoimage synthesis initially was to create images from simple labels and this objective later scaled to natural languages In this paper we reviewed novel methods that generate in our opinion the most visuallyrich and photorealistic images from textbased natural language These generated images often rely on generative adversarial networks GANs deep convolutional decoder networks and multimodal learning methods blackIn the paper we first proposed a taxonomy to organize GAN based texttoimage synthesis frameworks into four major groups semantic enhancement GANs resolution enhancement GANs diversity enhancement GANs and motion enhancement GANs The taxonomy provides a clear roadmap to show the motivations architectures and difference of different methods and also outlines their evolution timeline and relationships Following the proposed taxonomy we reviewed important features of each method and their architectures We indicated the model definition and key contributions from some advanced GAN framworks including StackGAN StackGAN AttnGAN DCGAN ACGAN TACGAN HDGAN TextSeGAn StoryGAN etc Many of the solutions surveyed in this paper tackled the highly complex challenge of generating photorealistic images beyond swatch size samples In other words beyond the work of BIBREF8 in which images were generated from text in 64times 64 tiny swatches Lastly all methods were evaluated on datasets that included birds flowers humans and other miscellaneous elements We were also able to allocate some important papers that were as impressive as the papers we finally surveyed Though these notable papers have yet to contribute directly or indirectly to the expansion of the vast computer vision AI field Looking into the future an excellent extension from the works surveyed in this paper would be to give more independence to the several learning methods eg less human intervention involved in the 
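The memory-writing gate mentioned above can be pictured as a sigmoid gate that decides, per word, how much word information versus pooled image information is written into a memory slot. The following is a heavily simplified sketch of that idea; the dimensions and the fusion are assumptions and do not reproduce DM-GAN's published equations.

```python
import torch
import torch.nn as nn

class MemoryWritingGate(nn.Module):
    """Simplified sketch of a memory-writing gate in the spirit of DM-GAN."""

    def __init__(self, word_dim: int = 256, img_dim: int = 256, mem_dim: int = 256):
        super().__init__()
        self.gate = nn.Linear(word_dim + img_dim, 1)
        self.write_word = nn.Linear(word_dim, mem_dim)
        self.write_img = nn.Linear(img_dim, mem_dim)

    def forward(self, word_feats: torch.Tensor, img_feat: torch.Tensor) -> torch.Tensor:
        # word_feats: (batch, n_words, word_dim); img_feat: (batch, img_dim)
        img = img_feat.unsqueeze(1).expand(-1, word_feats.size(1), -1)
        g = torch.sigmoid(self.gate(torch.cat([word_feats, img], dim=-1)))
        # Gate selects how much of each word versus the image is written to memory.
        return g * self.write_word(word_feats) + (1 - g) * self.write_img(img)
```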
studies as well as increasing the size of the output images The authors declare that there is no conflict of interest regarding the publication of this article </s>
<s>[INST] Gating Mechanisms for Combining Character and Word-level Word Representations: An Empirical Study [/INST] Incorporating subword structures like substrings morphemes and characters to the creation of word representations significantly increases their quality as reflected both by intrinsic metrics and performance in a wide range of downstream tasks BIBREF0 BIBREF1 BIBREF2 BIBREF3 The reason for this improvement is related to subword structures containing information that is usually ignored by standard wordlevel models Indeed when representing words as vectors extracted from a lookup table semantically related words resulting from inflectional processes such as surf surfing and surfed are treated as being independent from one another Further wordlevel embeddings do not account for derivational processes resulting in syntacticallysimilar words with different meanings such as break breakable and unbreakable This causes derived words which are usually less frequent to have lowerquality or no vector representations Previous works have successfully combined characterlevel and wordlevel word representations obtaining overall better results than using only wordlevel representations For example BIBREF1 achieved stateoftheart results in a machine translation task by representing unknown words as a composition of their characters BIBREF4 created word representations by adding the vector representations of the words surface forms and their morphemes INLINEFORM0 obtaining significant improvements on intrinsic evaluation tasks word similarity and machine translation BIBREF5 concatenated characterlevel and wordlevel representations for creating word representations and then used them as input to their models for obtaining stateoftheart results in Named Entity Recognition on several languages What these works have in common is that the models they describe first learn how to represent subword information at character BIBREF1 morpheme BIBREF4 or substring BIBREF0 levels and then combine these learned representations at the word level The incorporation of information at a finergrained hierarchy results in higherquality modeling of rare words morphological processes and semantics BIBREF6 There is no consensus however on which combination method works better in which case or how the choice of a combination method affects downstream performance either measured intrinsically at the word level or extrinsically at the sentence level In this paper we aim to provide some intuitions about how the choice of mechanism for combining characterlevel with wordlevel representations influences the quality of the final word representations and the subsequent effect these have in the performance of downstream tasks Our contributions are as follows We are interested in studying different ways of combining word representations obtained from different hierarchies into a single word representation Specifically we want to study how combining word representations 1 taken directly from a word embedding lookup table and 2 obtained from a function over the characters composing them affects the quality of the final word representations Let INLINEFORM0 be a set or vocabulary of words with INLINEFORM1 elements and INLINEFORM2 a vocabulary of characters with INLINEFORM3 elements Further let INLINEFORM4 be a sequence of words and INLINEFORM5 be the sequence of characters composing INLINEFORM6 Each token INLINEFORM7 can be represented as a vector INLINEFORM8 extracted directly from an embedding lookup table INLINEFORM9 pretrained or 
otherwise and as a vector INLINEFORM10 built from the characters that compose it in other words INLINEFORM11 where INLINEFORM12 is a function that maps a sequence of characters to a vector The methods for combining word and characterlevel representations we study are of the form INLINEFORM0 where INLINEFORM1 is the final word representation The function INLINEFORM0 is composed of an embedding layer an optional context function and an aggregation function The embedding layer transforms each character INLINEFORM0 into a vector INLINEFORM1 of dimension INLINEFORM2 by directly taking it from a trainable embedding lookup table INLINEFORM3 We define the matrix representation of word INLINEFORM4 as INLINEFORM5 The context function takes INLINEFORM0 as input and returns a contextenriched matrix representation INLINEFORM1 in which each INLINEFORM2 contains a measure of information about its context and interactions with its neighbors In particular we chose to do this by feeding INLINEFORM3 to a BiLSTM BIBREF7 BIBREF8 Informally we can think of LSTM BIBREF10 as a function INLINEFORM0 that takes a matrix INLINEFORM1 as input and returns a contextenriched matrix representation INLINEFORM2 where each INLINEFORM3 encodes information about the previous elements INLINEFORM4 A BiLSTM is simply composed of 2 LSTM one that reads the input from left to right forward and another that does so from right to left backward The output of the forward and backward LSTM are INLINEFORM0 and INLINEFORM1 respectively In the backward case the LSTM reads INLINEFORM2 first and INLINEFORM3 last therefore INLINEFORM4 will encode the context from INLINEFORM5 The aggregation function takes the contextenriched matrix representation of word INLINEFORM0 for both directions INLINEFORM1 and INLINEFORM2 and returns a single vector INLINEFORM3 To do so we followed BIBREF11 and defined the characterlevel representation INLINEFORM4 of word INLINEFORM5 as the linear combination of the forward and backward last hidden states returned by the context function DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are trainable parameters and INLINEFORM2 represents the concatenation operation between two vectors We tested three different methods for combining INLINEFORM0 with INLINEFORM1 simple concatenation a learned scalar gate BIBREF11 and a learned vector gate also referred to as featurewise sigmoidal gate Additionally we compared these methods to two baselines using pretrained word vectors only and using characteronly features for representing words See figmethods for a visual description of the proposed methods wordonly w considers only INLINEFORM0 and ignores INLINEFORM1 DISPLAYFORM0 charonly c considers only INLINEFORM0 and ignores INLINEFORM1 DISPLAYFORM0 concat cat concatenates both word and characterlevel representations DISPLAYFORM0 scalar gate sg implements the scalar gating mechanism described by BIBREF11 DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are trainable parameters INLINEFORM2 and INLINEFORM3 is the sigmoid function vector gate vg DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are trainable parameters INLINEFORM2 INLINEFORM3 is the elementwise sigmoid function INLINEFORM4 is the elementwise product for vectors and INLINEFORM5 is a vector of ones The vector gate is inspired by BIBREF11 and BIBREF12 but is different to the former in that the gating mechanism acts upon each dimension of the word and characterlevel vectors and different to the latter in that it does not rely on external sources of information for calculating the 
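Concretely, the scalar gate produces a single value in (0, 1) per word while the vector gate produces one value per dimension; in both cases the gate is conditioned on the word-level vector, and a larger gate value lets the character-level vector contribute more, consistent with the description above. A minimal PyTorch sketch (placeholder dimension, assumed orientation of the gate) is:

```python
import torch
import torch.nn as nn

class GatedWordCharCombination(nn.Module):
    """Scalar- and vector-gate combinations of word- and character-level vectors.

    The gate is conditioned on the word-level vector; a larger gate value gives
    more weight to the character-level vector. Dimensions are placeholders.
    """
    def __init__(self, dim: int = 300, vector_gate: bool = True):
        super().__init__()
        out_dim = dim if vector_gate else 1
        self.gate = nn.Linear(dim, out_dim)

    def forward(self, word_vec: torch.Tensor, char_vec: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(word_vec))   # scalar gate: (..., 1); vector gate: (..., dim)
        return (1 - g) * word_vec + g * char_vec  # broadcasting covers both cases

# Concatenation baseline, for comparison:
# combined = torch.cat([word_vec, char_vec], dim=-1)
```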
gating mechanism Finally note that word only and char only are special cases of both gating mechanisms INLINEFORM0 scalar gate and INLINEFORM1 vector gate correspond to word only INLINEFORM2 and INLINEFORM3 correspond to char only To enable sentencelevel classification we need to obtain a sentence representation from the word vectors INLINEFORM0 We achieved this by using a BiLSTM with max pooling which was shown to be a good universal sentence encoding mechanism BIBREF13 Let INLINEFORM0 be an input sentence and INLINEFORM1 its matrix representation where each INLINEFORM2 was obtained by one of the methods described in subsecmethods INLINEFORM3 is the contextenriched matrix representation of INLINEFORM4 obtained by feeding INLINEFORM5 to a BiLSTM of output dimension INLINEFORM6 Lastly INLINEFORM11 is the final sentence representation of INLINEFORM12 obtained by maxpooling INLINEFORM13 along the sequence dimension Finally we initialized the word representations INLINEFORM0 using GloVe embeddings BIBREF14 and finetuned them during training Refer to apphyperparams for details on the other hyperparameters we used We trained our models for solving the Natural Language Inference NLI task in two datasets SNLI BIBREF15 and MultiNLI BIBREF16 and validated them in each corresponding development set including the matched and mismatched development sets of MultiNLI For each datasetmethod combination we trained 7 models initialized with different random seeds and saved each when it reached its best validation accuracy We then evaluated the quality of each trained models word representations INLINEFORM0 in 10 word similarity tasks using the system created by BIBREF17 Finally we fed these obtained word vectors to a BiLSTM with maxpooling and evaluated the final sentence representations in 11 downstream transfer tasks BIBREF13 BIBREF18 Wordlevel Semantic Similarity A desirable property of vector representations of words is that semantically similar words should have similar vector representations Assessing whether a set of word representations possesses this quality is referred to as the semantic similarity task This is the most widelyused evaluation method for evaluating word representations despite its shortcomings BIBREF20 This task consists of comparing the similarity between word vectors measured by a distance metric usually cosine distance with a similarity score obtained from human judgements High correlation between these similarities is an indicator of good performance A problem with this formulation though is that the definition of similarity often confounds the meaning of both similarity and relatedness For example cup and tea are related but dissimilar words and this type of distinction is not always clear BIBREF21 BIBREF22 To face the previous problem we tested our methods in a wide variety of datasets including some that explicitly model relatedness WS353R some that explicitly consider similarity WS353S SimLex999 SimVerb3500 and some where the distinction is not clear MEN MTurk287 MTurk771 RG WS353 We also included the RareWords RW dataset for evaluating the quality of rare word representations See appendixdatasets for a more complete description of the datasets we used Sentencelevel Evaluation Tasks Unlike wordlevel representations there is no consensus on the desirable properties sentence representations should have In response to this BIBREF13 created SentEval a sentence representation evaluation benchmark designed for assessing how well sentence representations perform in various 
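A minimal sketch of the BiLSTM-with-max-pooling sentence encoder follows; the 2048-per-direction hidden size matches the setup reported later in the hyperparameter details, while the batch-first layout is an implementation assumption.

```python
import torch
import torch.nn as nn

class BiLSTMMaxPoolEncoder(nn.Module):
    """Sentence encoder: BiLSTM over word vectors, then max pooling over time."""

    def __init__(self, in_dim: int = 300, hidden: int = 2048):
        super().__init__()
        self.bilstm = nn.LSTM(in_dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, word_vecs: torch.Tensor) -> torch.Tensor:
        # word_vecs: (batch, seq_len, in_dim) built from the combined representations above
        h, _ = self.bilstm(word_vecs)   # (batch, seq_len, 2 * hidden)
        return h.max(dim=1).values      # max pooling along the sequence dimension
```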
downstream tasks BIBREF23 Some of the datasets included in SentEval correspond to sentiment classification CR MPQA MR SST2 and SST5 subjectivity classification SUBJ questiontype classification TREC recognizing textual entailment SICK E estimating semantic relatedness SICK R and measuring textual semantic similarity STS16 STSB The datasets are described by BIBREF13 and we provide pointers to their original sources in the appendix tablesentenceevaldatasets To evaluate these sentence representations SentEval trained a linear model on top of them and evaluated their performance in the validation sets accompanying each dataset The only exception was the STS16 task in which our representations were evaluated directly tablewordlevelresults shows the quality of word representations in terms of the correlation between word similarity scores obtained by the proposed models and word similarity scores defined by humans First we can see that for each task character only models had significantly worse performance than every other model trained on the same dataset The most likely explanation for this is that these models are the only ones that need to learn word representations from scratch since they have no access to the global semantic knowledge encoded by the GloVe embeddings Further bold results show the overall trend that vector gates outperformed the other methods regardless of training dataset This implies that learning how to combine character and wordlevel representations at the dimension level produces word vector representations that capture a notion of word similarity and relatedness that is closer to that of humans Additionally results from the MNLI row in general and underlined results in particular show that training on MultiNLI produces word representations better at capturing word similarity This is probably due to MultiNLI data being richer than that of SNLI Indeed MultiNLI data was gathered from various sources novels reports letters and telephone conversations among others rather than the single image captions dataset from which SNLI was created Exceptions to the previous rule are models evaluated in MEN and RW The former case can be explained by the MEN dataset containing only words that appear as image labels in the ESPGame and MIRFLICKR1M image datasets BIBREF24 and therefore having data that is more closely distributed to SNLI than to MultiNLI More notably in the RareWords dataset BIBREF25 the word only concat and scalar gate methods performed equally despite having been trained in different datasets INLINEFORM0 and the char only method performed significantly worse when trained in MultiNLI The vector gate however performed significantly better than its counterpart trained in SNLI These facts provide evidence that this method is capable of capturing linguistic phenomena that the other methods are unable to model tablewordsimilaritydataset lists the wordsimilarity datasets and their corresponding reference As mentioned in subsecdatasets all the wordsimilarity datasets contain pairs of words annotated with similarity or relatedness scores although this difference is not always explicit Below we provide some details for each MEN contains 3000 annotated word pairs with integer scores ranging from 0 to 50 Words correspond to image labels appearing in the ESPGame and MIRFLICKR1M image datasets MTurk287 contains 287 annotated pairs with scores ranging from 10 to 50 It was created from words appearing in both DBpedia and in news articles from The New York Times MTurk771 contains 771 
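The word-level evaluation described here reduces to correlating model cosine similarities with human judgements. A small sketch, assuming an in-memory embedding dictionary and a list of (word1, word2, human score) triples, is:

```python
import numpy as np
from scipy.stats import spearmanr

def word_similarity_eval(emb: dict, pairs: list) -> float:
    """Spearman correlation between model cosine similarities and human scores."""
    model_scores, human_scores = [], []
    for w1, w2, human in pairs:
        v1, v2 = emb[w1], emb[w2]
        cos = float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
        model_scores.append(cos)
        human_scores.append(human)
    return spearmanr(model_scores, human_scores).correlation
```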
annotated pairs with scores ranging from 10 to 50 with words having synonymy holonymy or meronymy relationships sampled from WordNet BIBREF56 RG contains 65 annotated pairs with scores ranging from 00 to 40 representing similarity of meaning RW contains 2034 pairs of words annotated with similarity scores in a scale from 0 to 10 The words included in this dataset were obtained from Wikipedia based on their frequency and later filtered depending on their WordNet synsets including synonymy hyperonymy hyponymy holonymy and meronymy This dataset was created with the purpose of testing how well models can represent rare and complex words SimLex999 contains 999 word pairs annotated with similarity scores ranging from 0 to 10 In this case the authors explicitly considered similarity and not relatedness addressing the shortcomings of datasets that do not such as MEN and WS353 Words include nouns adjectives and verbs SimVerb3500 contains 3500 verb pairs annotated with similarity scores ranging from 0 to 10 Verbs were obtained from the USF free association database BIBREF66 and VerbNet BIBREF63 This dataset was created to address the lack of representativity of verbs in SimLex999 and the fact that at the time of creation the best performing models had already surpassed interannotator agreement in verb similarity evaluation resources Like SimLex999 this dataset also explicitly considers similarity as opposed to relatedness WS353 contains 353 word pairs annotated with similarity scores from 0 to 10 WS353R is a subset of WS353 containing 252 word pairs annotated with relatedness scores This dataset was created by asking humans to classify each WS353 word pair into one of the following classes synonyms antonyms identical hyperonymhyponym hyponymhyperonym holonymmeronym meronymholonym and noneoftheabove These annotations were later used to group the pairs into similar pairs synonyms antonyms identical hyperonymhyponym and hyponymhyperonym related pairs holonymmeronym meronymholonym and noneoftheabove with a human similarity score greater than 5 and unrelated pairs classified as noneoftheabove with a similarity score less than or equal to 5 This dataset is composed by the union of related and unrelated pairs WS353S is another subset of WS353 containing 203 word pairs annotated with similarity scores This dataset is composed by the union of similar and unrelated pairs as described previously figgatingviz shows that for more common words the vector gate mechanism tends to favor only a few dimensions while keeping a low average gating value across dimensions On the other hand values are greater and more homogeneous across dimensions in rarer words Further figfreqvsgatevalue shows this mechanism assigns on average a greater gating value to less frequent words confirming the findings by BIBREF11 and BIBREF12 In other words the less frequent the word the more this mechanism allows the characterlevel representation to influence the final word representation as shown by eqvg A possible interpretation of this result is that exploiting character information becomes increasingly necessary as wordlevel representations quality decrease Another observable trend in both figures is that gating values tend to be low on average Indeed it is possible to see in figfreqvsgatevalue that the average gating values range from INLINEFORM0 to INLINEFORM1 This result corroborates the findings by BIBREF11 stating that setting INLINEFORM2 in eqscalargate was better than setting it to higher values In summary the gating mechanisms 
learn how to compensate the lack of expressivity of underrepresented words by selectively combining their representations with those of characters tablesentlevelresults shows the impact that different methods for combining character and wordlevel word representations have in the quality of the sentence representations produced by our models We can observe the same trend mentioned in subsecwordsimilarityeval and highlighted by the difference between bold values that models trained in MultiNLI performed better than those trained in SNLI at a statistically significant level confirming the findings of BIBREF13 In other words training sentence encoders on MultiNLI yields more general sentence representations than doing so on SNLI The two exceptions to the previous trend SICKE and SICKR benefited more from models trained on SNLI We hypothesize this is again due to both SNLI and SICK BIBREF26 having similar data distributions Additionally there was no method that significantly outperformed the word only baseline in classification tasks This means that the added expressivity offered by explicitly modeling characters be it through concatenation or gating was not significantly better than simply finetuning the pretrained GloVe embeddings for this type of task We hypothesize this is due to the conflation of two effects First the fact that morphological processes might not encode important information for solving these tasks and second that SNLI and MultiNLI belong to domains that are too dissimilar to the domains in which the sentence representations are being tested On the other hand the vector gate significantly outperformed every other method in the STSB task when trained in both datasets and in the STS16 task when trained in SNLI This again hints at this method being capable of modeling phenomena at the word level resulting in improved semantic representations at the sentence level It is clear that the better performance the vector gate had in word similarity tasks did not translate into overall better performance in downstream tasks This confirms previous findings indicating that intrinsic word evaluation metrics are not good predictors of downstream performance BIBREF29 BIBREF30 BIBREF20 BIBREF31 subfigmnlicorrelations shows that the word representations created by the vector gate trained in MultiNLI had positivelycorrelated results within several wordsimilarity tasks This hints at the generality of the word representations created by this method when modeling similarity and relatedness However the same cannot be said about sentencelevel evaluation performance there is no clear correlation between word similarity tasks and sentenceevaluation tasks This is clearly illustrated by performance in the STSBenchmark the only in which the vector gate was significantly superior not being correlated with performance in any wordsimilarity dataset This can be interpreted simply as wordlevel representations capturing wordsimilarity not being a sufficient condition for good performance in sentencelevel tasks In general figcorrelations shows that there are no general correlation effects spanning both training datasets and combination mechanisms For example subfigsnlicorrelations shows that for both wordonly and concat models trained in SNLI performance in word similarity tasks correlates positively with performance in most sentence evaluation tasks however this does not happen as clearly for the same models trained in MultiNLI subfigmnlicorrelations To the best of our knowledge there are only two recent works 
that specifically study how to combine word and subwordlevel vector representations BIBREF11 propose to use a trainable scalar gating mechanism capable of learning a weighting scheme for combining characterlevel and wordlevel representations They compared their proposed method to manually weighting both levels using characters only words only or their concatenation They found that in some datasets a specific manual weighting scheme performed better while in others the learned scalar gate did BIBREF12 further expand the gating concept by making the mechanism work at a finergrained level learning how to weight each vectors dimensions independently conditioned on external wordlevel features such as partofspeech and namedentity tags Similarly they compared their proposed mechanism to using words only characters only and a concatenation of both with and without external features They found that their vector gate performed better than the other methods in all the reported tasks and beat the state of the art in two reading comprehension tasks Both works showed that the gating mechanisms assigned greater importance to characterlevel representations in rare words and to wordlevel representations in common ones reaffirming the previous findings that subword structures in general and characters in particular are beneficial for modeling uncommon words The problem of representing sentences as fixedlength vectors has been widely studied BIBREF32 suggested a selfadaptive hierarchical model that gradually composes words into intermediate phrase representations and adaptively selects specific hierarchical levels for specific tasks BIBREF33 proposed an encoderdecoder model trained by attempting to reconstruct the surrounding sentences of an encoded passage in a fashion similar to Skipgram BIBREF34 BIBREF35 overcame the previous models need for ordered training sentences by using autoencoders for creating the sentence representations BIBREF36 implemented a model simpler and faster to train than the previous two while having competitive performance Similar to BIBREF33 BIBREF37 suggested predicting future sentences with a hierarchical CNNLSTM encoder BIBREF13 trained several sentence encoding architectures on a combination of the SNLI and MultiNLI datasets and showed that a BiLSTM with maxpooling was the best at producing highly transferable sentence representations More recently BIBREF18 empirically showed that sentence representations created in a multitask setting BIBREF38 performed increasingly better the more tasks they were trained in BIBREF39 proposed using an autoencoder that relies on multihead selfattention over the concatenation of the max and mean pooled encoder outputs for producing sentence representations Finally BIBREF40 show that modern sentence embedding methods are not vastly superior to random methods The works mentioned so far usually evaluate the quality of the produced sentence representations in sentencelevel downstream tasks Common benchmarks grouping these kind of tasks include SentEval BIBREF23 and GLUE BIBREF41 Another trend however is to probe sentence representations to understand what linguistic phenomena they encode BIBREF42 BIBREF43 BIBREF44 BIBREF45 BIBREF46 BIBREF47 provide a review on featurewise transformation methods of which the mechanisms presented in this paper form a part of In a few words the INLINEFORM0 parameter in both scalar gate and vector gate mechanisms can be understood as a scaling parameter limited to the INLINEFORM1 range and conditioned on word 
representations whereas adding the scaled INLINEFORM2 and INLINEFORM3 representations can be seen as biasing word representations conditioned on character representations The previous review extends the work by BIBREF48 which describes the Featurewise Linear Modulation FiLM framework as a generalization of Conditional Normalization methods and apply it in visual reasoning tasks Some of the reported findings are that in general scaling has greater impact than biasing and that in a setting similar to the scalar gate limiting the scaling parameter to INLINEFORM0 hurt performance Future decisions involving the design of mechanisms for combining character and wordlevel representations should be informed by these insights We presented an empirical study showing the effect that different ways of combining character and word representations has in wordlevel and sentencelevel evaluation tasks We showed that a vector gate performed consistently better across a variety of word similarity and relatedness tasks Additionally despite showing inconsistent results in sentence evaluation tasks it performed significantly better than the other methods in semantic similarity tasks We further showed through this mechanism that learning characterlevel representations is always beneficial and becomes increasingly so with less common words In the future it would be interesting to study how the choice of mechanism for combining subword and word representations affects the more recent languagemodelbased pretraining methods such as ELMo BIBREF49 GPT BIBREF50 BIBREF51 and BERT BIBREF52 Thanks to Edison MarreseTaylor and Pablo Loyola for their feedback on early versions of this manuscript We also gratefully acknowledge the support of the NVIDIA Corporation with the donation of one of the GPUs used for this research Jorge A Balazs is partially supported by the Japanese Government MEXT Scholarship We only considered words that appear at least twice for each dataset Those that appeared only once were considered UNK We used the Treebank Word Tokenizer as implemented in NLTK for tokenizing the training and development datasets In the same fashion as conneau2017supervised we used a batch size of 64 an SGD optmizer with an initial learning rate of INLINEFORM0 and at each epoch divided the learning rate by 5 if the validation accuracy decreased We also used gradient clipping when gradients where INLINEFORM1 We defined character vector representations as 50dimensional vectors randomly initialized by sampling from the uniform distribution in the INLINEFORM0 range The output dimension of the characterlevel BiLSTM was 300 per direction and remained of such size after combining forward and backward representations as depicted in eq EQREF9 Word vector representations where initialized from the 300dimensional GloVe vectors BIBREF14 trained in 840B tokens from the Common Crawl and finetuned during training Words not present in the GloVe vocabulary where randomly initialized by sampling from the uniform distribution in the INLINEFORM0 range The input size of the wordlevel LSTM was 300 for every method except concat in which it was 600 and its output was always 2048 per direction resulting in a 4096dimensional sentence representation tablesentenceevaldatasets lists the sentencelevel evaluation datasets used in this paper The provided URLs correspond to the original sources and not necessarily to the URLs where SentEval got the data from The version of the CR MPQA MR and SUBJ datasets used in this paper were the ones preprocessed by 
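Read through the feature-wise transformation lens above, the vector gate can be seen as a (0, 1)-bounded scaling of the word vector plus a character-conditioned bias. The toy snippet below makes that reading explicit; it is an interpretation for illustration, not an additional mechanism from the paper.

```python
import torch

def film(x: torch.Tensor, gamma: torch.Tensor, beta: torch.Tensor) -> torch.Tensor:
    """Feature-wise linear modulation: scale each feature of x, then shift it."""
    return gamma * x + beta

# Toy tensors: the gate g acts as a (0, 1)-bounded gamma on the word vector,
# and the scaled character vector g * c acts as the bias term beta.
w, c = torch.randn(300), torch.randn(300)
g = torch.sigmoid(torch.randn(300))
combined = film(w, gamma=(1 - g), beta=g * c)
```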
BIBREF75 Both SST2 and SST5 correspond to preprocessed versions of the SST dataset by BIBREF74 SST2 corresponds to a subset of SST used by BIBREF54 containing flat representations of sentences annotated with binary sentiment labels and SST5 to another subset annotated with more finegrained sentiment labels very negative negative neutral positive very positive </s>
<s>[INST] Effective Modeling of Encoder-Decoder Architecture for Joint Entity and Relation Extraction [/INST] Distantlysupervised information extraction systems extract relation tuples with a set of predefined relations from text Traditionally researchers BIBREF0 BIBREF1 BIBREF2 use pipeline approaches where a named entity recognition NER system is used to identify the entities in a sentence and then a classifier is used to find the relation or no relation between them However due to the complete separation of entity detection and relation classification these models miss the interaction between multiple relation tuples present in a sentence Recently several neural networkbased models BIBREF3 BIBREF4 were proposed to jointly extract entities and relations from a sentence These models used a parametersharing mechanism to extract the entities and relations in the same network But they still find the relations after identifying all the entities and do not fully capture the interaction among multiple tuples BIBREF5 BIBREF5 proposed a joint extraction model based on neural sequence tagging scheme But their model could not extract tuples with overlapping entities in a sentence as it could not assign more than one tag to a word BIBREF6 BIBREF6 proposed a neural encoderdecoder model for extracting relation tuples with overlapping entities However they used a copy mechanism to copy only the last token of the entities thus this model could not extract the full entity names Also their best performing model used a separate decoder to extract each tuple which limited the power of their model This model was trained with a fixed number of decoders and could not extract tuples beyond that number during inference Encoderdecoder models are powerful models and they are successful in many NLP tasks such as machine translation sentence generation from structured data and open information extraction In this paper we explore how encoderdecoder models can be used effectively for extracting relation tuples from sentences There are three major challenges in this task i The model should be able to extract entities and relations together ii It should be able to extract multiple tuples with overlapping entities iii It should be able to extract exactly two entities of a tuple with their full names To address these challenges we propose two novel approaches using encoderdecoder architecture We first propose a new representation scheme for relation tuples Table TABREF1 such that it can represent multiple tuples with overlapping entities and different lengths of entities in a simple way We employ an encoderdecoder model where the decoder extracts one word at a time like machine translation models At the end of sequence generation due to the unique representation of the tuples we can extract the tuples from the sequence of words Although this model performs quite well generating one word at a time is somewhat unnatural for this task Each tuple has exactly two entities and one relation and each entity appears as a continuous text span in a sentence The most effective way to identify them is to find their start and end location in the sentence Each relation tuple can then be represented using five items start and end location of the two entities and the relation between them see Table TABREF1 Keeping this in mind we propose a pointer networkbased decoding framework This decoder consists of two pointer networks which find the start and end location of the two entities in a sentence and a classification network which 
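Under the pointer-based view, a tuple is just five items: the start and end token positions of the two entities plus a relation. A small sketch of decoding such a tuple back to text follows; the example sentence and relation name are invented for illustration.

```python
from typing import List, Tuple

# A relation tuple under the pointer-based scheme: start/end token positions of
# the two entities plus a relation name (illustrative only).
PointerTuple = Tuple[int, int, int, int, str]

def decode_pointer_tuple(tokens: List[str], t: PointerTuple) -> Tuple[str, str, str]:
    s1, e1, s2, e2, rel = t
    ent1 = " ".join(tokens[s1:e1 + 1])
    ent2 = " ".join(tokens[s2:e2 + 1])
    return ent1, ent2, rel

tokens = "Barack Obama was born in Honolulu".split()
print(decode_pointer_tuple(tokens, (0, 1, 5, 5, "place_of_birth")))
# ('Barack Obama', 'Honolulu', 'place_of_birth')
```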
identifies the relation between them At every time step of the decoding this decoder extracts an entire relation tuple not just a word Experiments on the New York Times NYT datasets show that our approaches work effectively for this task and achieve stateoftheart performance To summarize the contributions of this paper are as follows 1 We propose a new representation scheme for relation tuples such that an encoderdecoder model which extracts one word at each time step can still find multiple tuples with overlapping entities and tuples with multitoken entities from sentences We also propose a maskingbased copy mechanism to extract the entities from the source sentence only 2 We propose a modification in the decoding framework with pointer networks to make the encoderdecoder model more suitable for this task At every time step this decoder extracts an entire relation tuple not just a word This new decoding framework helps in speeding up the training process and uses less resources GPU memory This will be an important factor when we move from sentencelevel tuple extraction to documentlevel extraction 3 Experiments on the NYT datasets show that our approaches outperform all the previous stateoftheart models significantly and set a new benchmark on these datasets A relation tuple consists of two entities and a relation Such tuples can be found in sentences where an entity is a text span in a sentence and a relation comes from a predefined set R These tuples may share one or both entities among them Based on this we divide the sentences into three classes i No Entity Overlap NEO A sentence in this class has one or more tuples but they do not share any entities ii Entity Pair Overlap EPO A sentence in this class has more than one tuple and at least two tuples share both the entities in the same or reverse order iii Single Entity Overlap SEO A sentence in this class has more than one tuple and at least two tuples share exactly one entity It should be noted that a sentence can belong to both EPO and SEO classes Our task is to extract all relation tuples present in a sentence In this task input to the system is a sequence of words and output is a set of relation tuples In our first approach we represent each tuple as entity1 entity2 relation We use as a separator token to separate the tuple components Multiple tuples are separated using the vert token We have included one example of such representation in Table TABREF1 Multiple relation tuples with overlapping entities and different lengths of entities can be represented in a simple way using these special tokens and vert During inference after the end of sequence generation relation tuples can be extracted easily using these special tokens Due to this uniform representation scheme where entity tokens relation tokens and special tokens are treated similarly we use a shared vocabulary between the encoder and decoder which includes all of these tokens The input sentence contains clue words for every relation which can help generate the relation tokens We use two special tokens so that the model can distinguish between the beginning of a relation tuple and the beginning of a tuple component To extract the relation tuples from a sentence using the encoderdecoder model the model has to generate the entity tokens find relation clue words and map them to the relation tokens and generate the special tokens at appropriate time Our experiments show that the encoderdecoder models can achieve this quite effectively We create a single vocabulary V consisting of 
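The word-generation approach relies on the separator tokens above to recover tuples from the decoded sequence. The sketch below shows that linearization and the reverse parsing; the component-separator symbol was lost in extraction, so ';' is used as a stand-in, and '|' stands for the vert token.

```python
# ';' is a stand-in for the component separator (not recoverable from the
# extracted text); '|' corresponds to the vert token separating tuples.
COMP_SEP, TUPLE_SEP = ";", "|"

def linearize(tuples):
    return f" {TUPLE_SEP} ".join(
        f"{e1} {COMP_SEP} {e2} {COMP_SEP} {rel}" for e1, e2, rel in tuples
    )

def parse(sequence: str):
    tuples = []
    for chunk in sequence.split(TUPLE_SEP):
        parts = [p.strip() for p in chunk.split(COMP_SEP)]
        if len(parts) == 3:          # ignore malformed generations
            tuples.append(tuple(parts))
    return tuples

seq = linearize([("Barack Obama", "Honolulu", "place_of_birth")])
print(parse(seq))
```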
the source sentence tokens relation names from relation set R special separator tokens vert startoftargetsequence token SOS endoftargetsequence token EOS and unknown word token UNK Wordlevel embeddings are formed by two components 1 pretrained word vectors 2 character embeddingbased feature vectors We use a word embedding layer mathbf Ew in mathbb Rvert V vert times dw and a character embedding layer mathbf Ec in mathbb Rvert A vert times dc where dw is the dimension of word vectors A is the character alphabet of input sentence tokens and dc is the dimension of character embedding vectors Following BIBREF7 BIBREF7 we use a convolutional neural network with maxpooling to extract a feature vector of size df for every word Word embeddings and character embeddingbased feature vectors are concatenated Vert to obtain the representation of the input tokens A source sentence mathbf S is represented by vectors of its tokens mathbf x1 mathbf x2mathbf xn where mathbf xi in mathbb Rdwdf is the vector representation of the ith word and n is the length of mathbf S These vectors mathbf xi are passed to a bidirectional LSTM BIBREF8 BiLSTM to obtain the hidden representation mathbf hiE We set the hidden dimension of the forward and backward LSTM of the BiLSTM to be dh2 to obtain mathbf hiE in mathbb Rdh where dh is the hidden dimension of the sequence generator LSTM of the decoder described below A target sequence mathbf T is represented by only word embedding vectors of its tokens mathbf y0 mathbf y1mathbf ym where mathbf yi in mathbb Rdw is the embedding vector of the ith token and m is the length of the target sequence mathbf y0 and mathbf ym represent the embedding vector of the SOS and EOS token respectively The decoder generates one token at a time and stops when EOS is generated We use an LSTM as the decoder and at time step t the decoder takes the source sentence encoding mathbf et in mathbb Rdh and the previous target word embedding mathbf yt1 as the input and generates the hidden representation of the current token mathbf htD in mathbb Rdh The sentence encoding vector mathbf et can be obtained using attention mechanism mathbf htD is projected to the vocabulary V using a linear layer with weight matrix mathbf Wv in mathbb Rvert V vert times dh and bias vector mathbf bv in mathbb Rvert V vert projection layer mathbf ot represents the normalized scores of all the words in the embedding vocabulary at time step t mathbf ht1D is the previous hidden state of the LSTM The projection layer of the decoder maps the decoder output to the entire vocabulary During training we use the gold label target tokens directly However during inference the decoder may predict a token from the vocabulary which is not present in the current sentence or the set of relations or the special tokens To prevent this we use a masking technique while applying the softmax operation at the projection layer We mask exclude all words of the vocabulary except the current source sentence tokens relation tokens separator tokens vert UNK and EOS tokens in the softmax operation To mask exclude some word from softmax we set the corresponding value in hatmathbf ot at infty and the corresponding softmax score will be zero This ensures the copying of entities from the source sentence only We include the UNK token in the softmax operation to make sure that the model generates new entities during inference If the decoder predicts an UNK token we replace it with the corresponding source word which has the highest attention score During inference 
after decoding is finished we extract all tuples based on the special tokens remove duplicate tuples and tuples in which both entities are the same or tuples where the relation token is not from the relation set This model is referred to as WordDecoding WDec henceforth In the second approach we identify the entities in the sentence using their start and end locations We remove the special tokens and relation names from the word vocabulary and word embeddings are used only at the encoder side along with character embeddings We use an additional relation embedding matrix mathbf Er in mathbb Rvert R vert times dr at the decoder side of our model where R is the set of relations and dr is the dimension of relation vectors The relation set R includes a special relation token EOS which indicates the end of the sequence Relation tuples are represented as a sequence Ty0 y1ym where yt is a tuple consisting of four indexes in the source sentence indicating the start and end location of the two entities and a relation between them see Table TABREF1 y0 is a dummy tuple that represents the start tuple of the sequence and ym functions as the end tuple of the sequence which has EOS as the relation entities are ignored for this tuple The decoder consists of an LSTM with hidden dimension dh to generate the sequence of tuples two pointer networks to find the two entities and a classification network to find the relation of a tuple At time step t the decoder takes the source sentence encoding mathbf et in mathbb Rdh and the representation of all previously generated tuples mathbf yprevsum j0t1mathbf yj as the input and generates the hidden representation of the current tuple mathbf htD in mathbb Rdh The sentence encoding vector mathbf et is obtained using an attention mechanism as explained later Relation tuples are a set and to prevent the decoder from generating the same tuple again we pass the information about all previously generated tuples at each time step of decoding mathbf yj is the vector representation of the tuple predicted at time step j t and we use the zero vector mathbf y0overrightarrow0 to represent the dummy tuple y0 mathbf ht1D is the hidden state of the LSTM at time step t1 After obtaining the hidden representation of the current tuple mathbf htD we first find the start and end pointers of the two entities in the source sentence We concatenate the vector mathbf htD with the hidden vectors mathbf hiE of the encoder and pass them to a BiLSTM layer with hidden dimension dp for forward and backward LSTM The hidden vectors of this BiLSTM layer mathbf hik in mathbb R2dp are passed to two feedforward networks FFN with softmax to convert each hidden vector into two scalar values between 0 and 1 Softmax operation is applied across all the words in the input sentence These two scalar values represent the probability of the corresponding source sentence token to be the start and end location of the first entity This BiLSTM layer with the two feedforward layers is the first pointer network which identifies the first entity of the current relation tuple where mathbf Ws1 in mathbb R1 times 2dp mathbf We1 in mathbb R1 times 2dp bs1 and be1 are the weights and bias parameters of the feedforward layers si1 ei1 represent the normalized probabilities of the ith source word being the start and end token of the first entity of the predicted tuple We use another pointer network to extract the second entity of the tuple We concatenate the hidden vectors mathbf hik with mathbf htD and mathbf hiE and pass them to 
the second pointer network to obtain si2 and ei2 which represent the normalized probabilities of the ith source word being the start and end of the second entity These normalized probabilities are used to find the vector representation of the two entities mathbf at1 and mathbf at2 We concatenate the entity vector representations mathbf at1 and mathbf at2 with mathbf htD and pass it to a feedforward network FFN with softmax to find the relation This feedforward layer has a weight matrix mathbf Wr in mathbb Rvert R vert times 8dp dh and a bias vector mathbf br in mathbb Rvert R vert mathbf rt represents the normalized probabilities of the relation at time step t The relation embedding vector mathbf zt is obtained using mathrm argmax of mathbf rt and mathbf Er mathbf yt in mathbb R8dp dr is the vector representation of the tuple predicted at time step t During training we pass the embedding vector of the gold label relation in place of the predicted relation So the mathrm argmax function does not affect the backpropagation during training The decoder stops the sequence generation process when the predicted relation is EOS This is the classification network of the decoder During inference we select the start and end location of the two entities such that the product of the four pointer probabilities is maximized keeping the constraints that the two entities do not overlap with each other and 1 le b le e le n where b and e are the start and end location of the corresponding entities We first choose the start and end location of entity 1 based on the maximum product of the corresponding start and end pointer probabilities Then we find entity 2 in a similar way excluding the span of entity 1 to avoid overlap The same procedure is repeated but this time we first find entity 2 followed by entity 1 We choose that pair of entities which gives the higher product of four pointer probabilities between these two choices This model is referred to as PtrNetDecoding PNDec henceforth We experimented with three different attention mechanisms for our wordlevel decoding model to obtain the source context vector mathbf et 1 Avg The context vector is obtained by averaging the hidden vectors of the encoder mathbf etfrac1nsum i1n mathbf hiE 2 Ngram The context vector is obtained by the Ngram attention mechanism of BIBREF9 BIBREF9 with N3 textnormal aigmathbf hnET mathbf Vg mathbf wig alpha g mathrm softmaxmathbf ag mathbf etmathbf hnE Vert sum g1N mathbf Wg sum i1vert Gg vert alpha ig mathbf wig Here mathbf hnE is the last hidden state of the encoder g in lbrace 1 2 3rbrace refers to the word gram combination Gg is the sequence of ggram word representations for the input sentence mathbf wig is the ith ggram vector 2gram and 3gram representations are obtained by average pooling alpha ig is the normalized attention score for the ith ggram vector mathbf W in mathbb Rdh times dh and mathbf V in mathbb Rdh times dh are trainable parameters 3 Single The context vector is obtained by the attention mechanism proposed by BIBREF10 BIBREF10 This attention mechanism gives the best performance with the wordlevel decoding model mathbf uti mathbf Wu mathbf hiE quad mathbf qti mathbf Wq mathbf ht1D mathbf bq textnormal ati mathbf va tanh mathbf qti mathbf uti quad alpha t mathrm softmaxmathbf at mathbf et sum i1n alpha ti mathbf hiE where mathbf Wu in mathbb Rdh times dh mathbf Wq in mathbb Rdh times dh and mathbf va in mathbb Rdh are all trainable attention parameters and mathbf bq in mathbb Rdh is a bias vector alpha ti is the 
normalized attention score of the ith source word at the decoding time step t For our pointer networkbased decoding model we use three variants of the single attention model First we use mathbf ht1D to calculate mathbf qti in the attention mechanism Next we use mathbf yprev to calculate mathbf qti where mathbf Wq in mathbb R8dp dr times dh In the final variant we obtain the attentive context vector by concatenating the two attentive vectors obtained using mathbf ht1D and mathbf yprev This gives the best performance with the pointer networkbased decoding model These variants are referred to as mathrm dechid mathrm tupprev and mathrm combo in Table TABREF17 We minimize the negative loglikelihood loss of the generated words for wordlevel decoding mathcal Lword and minimize the sum of negative loglikelihood loss of relation classification and the four pointer locations for pointer networkbased decoding mathcal Lptr vtb is the softmax score of the target word at time step t for the wordlevel decoding model r s and e are the softmax score of the corresponding true relation label true start and end pointer location of an entity b t and c refer to the bth training instance tth time step of decoding and the two entities of a tuple respectively B and T are the batch size and maximum time step of the decoder respectively We focus on the task of extracting multiple tuples with overlapping entities from sentences We choose the New York Times NYT corpus for our experiments This corpus has multiple versions and we choose the following two versions as their test dataset has significantly larger number of instances of multiple relation tuples with overlapping entities i The first version is used by BIBREF6 BIBREF6 mentioned as NYT in their paper and has 24 relations We name this version as NYT24 ii The second version is used by BIBREF11 BIBREF11 mentioned as NYT10 in their paper and has 29 relations We name this version as NYT29 We select 10 of the original training data and use it as the validation dataset The remaining 90 is used for training We include statistics of the training and test datasets in Table TABREF11 We run the Word2Vec BIBREF12 tool on the NYT corpus to initialize the word embeddings The character embeddings and relation embeddings are initialized randomly All embeddings are updated during training We set the word embedding dimension dw300 relation embedding dimension dr300 character embedding dimension dc50 and characterbased word feature dimension df50 To extract the characterbased word feature vector we set the CNN filter width at 3 and the maximum length of a word at 10 The hidden dimension dh of the decoder LSTM cell is set at 300 and the hidden dimension of the forward and the backward LSTM of the encoder is set at 150 The hidden dimension of the forward and backward LSTM of the pointer networks is set at dp300 The model is trained with minibatch size of 32 and the network parameters are optimized using Adam BIBREF13 Dropout layers with a dropout rate fixed at 03 are used in our network to avoid overfitting We compare our model with the following stateoftheart joint entity and relation extraction models 1 SPTree BIBREF4 This is an endtoend neural entity and relation extraction model using sequence LSTM and Tree LSTM Sequence LSTM is used to identify all the entities first and then Tree LSTM is used to find the relation between all pairs of entities 2 Tagging BIBREF5 This is a neural sequence tagging model which jointly extracts the entities and relations using an LSTM encoder and an 
LSTM decoder They used a Cartesian product of entity tags and relation tags to encode the entity and relation information together This model does not work when tuples have overlapping entities 3 CopyR BIBREF6 This model uses an encoderdecoder approach for joint extraction of entities and relations It copies only the last token of an entity from the source sentence Their best performing multidecoder model is trained with a fixed number of decoders where each decoder extracts one tuple 4 HRL BIBREF11 This model uses a reinforcement learning RL algorithm with two levels of hierarchy for tuple extraction A highlevel RL finds the relation and a lowlevel RL identifies the two entities using a sequence tagging approach This sequence tagging approach cannot always ensure extraction of exactly two entities 5 GraphR BIBREF14 This model considers each token in a sentence as a node in a graph and edges connecting the nodes as relations between them They use graph convolution network GCN to predict the relations of every edge and then filter out some of the relations 6 Ngram Attention BIBREF9 This model uses an encoderdecoder approach with Ngram attention mechanism for knowledgebase completion using distantly supervised data The encoder uses the source tokens as its vocabulary and the decoder uses the entire Wikidata BIBREF15 entity IDs and relation IDs as its vocabulary The encoder takes the source sentence as input and the decoder outputs the two entity IDs and relation ID for every tuple During training it uses the mapping of entity names and their Wikidata IDs of the entire Wikidata for proper alignment Our task of extracting relation tuples with the raw entity names from a sentence is more challenging since entity names are not of fixed length Our more generic approach is also helpful for extracting new entities which are not present in the existing knowledge bases such as Wikidata We use their Ngram attention mechanism in our model to compare its performance with other attention models Table TABREF17 We use the same evaluation method used by BIBREF11 BIBREF11 in their experiments We consider the extracted tuples as a set and remove the duplicate tuples An extracted tuple is considered as correct if the corresponding full entity names are correct and the relation is also correct We report precision recall and F1 score for comparison Among the baselines HRL achieves significantly higher F1 scores on the two datasets We run their model and our models five times and report the median results in Table TABREF15 Scores of other baselines in Table TABREF15 are taken from previous published papers BIBREF6 BIBREF11 BIBREF14 Our WordDecoding WDec model achieves F1 scores that are 39 and 41 higher than HRL on the NYT29 and NYT24 datasets respectively Similarly our PtrNetDecoding PNDec model achieves F1 scores that are 30 and 13 higher than HRL on the NYT29 and NYT24 datasets respectively We perform a statistical significance test ttest under a bootstrap pairing between HRL and our models and see that the higher F1 scores achieved by our models are statistically significant p 0001 Next we combine the outputs of five runs of our models and five runs of HRL to build ensemble models For a test instance we include those tuples which are extracted in the majority ge 3 of the five runs This ensemble mechanism increases the precision significantly on both datasets with a small improvement in recall as well In the ensemble scenario compared to HRL WDec achieves 42 and 35 higher F1 scores and PNDec achieves 42 and 29 
higher F1 scores on the NYT29 and NYT24 datasets respectively We include the performance of different attention mechanisms with our WordDecoding model effects of our maskingbased copy mechanism and ablation results of three variants of the single attention mechanism with our PtrNetDecoding model in Table TABREF17 WordDecoding with single attention achieves the highest F1 score on both datasets We also see that our copy mechanism improves F1 scores by around 47 in each attention mechanism with both datasets PtrNetDecoding achieves the highest F1 scores when we combine the two attention mechanisms with respect to the previous hidden vector of the decoder LSTM mathbf ht1D and representation of all previously extracted tuples mathbf yprev From Table TABREF15 we see that CopyR HRL and our models achieve significantly higher F1 scores on the NYT24 dataset than the NYT29 dataset Both datasets have a similar set of relations and similar texts NYT So taskwise both datasets should pose a similar challenge However the F1 scores suggest that the NYT24 dataset is easier than NYT29 The reason is that NYT24 has around 720 of overlapping tuples between the training and test data of test tuples that appear in the training data with different source sentences In contrast NYT29 has only 417 of overlapping tuples Due to the memorization power of deep neural networks it can achieve much higher F1 score on NYT24 The difference between the F1 scores of WordDecoding and PtrNetDecoding on NYT24 is marginally higher than NYT29 since WordDecoding has more trainable parameters about 27 million than PtrNetDecoding about 245 million and NYT24 has very high tuple overlap However their ensemble versions achieve closer F1 scores on both datasets Despite achieving marginally lower F1 scores the pointer networkbased model can be considered more intuitive and suitable for this task WordDecoding may not extract the special tokens and relation tokens at the right time steps which is critical for finding the tuples from the generated sequence of words PtrNetDecoding always extracts two entities of varying length and a relation for every tuple We also observe that PtrNetDecoding is more than two times faster and takes onethird of the GPU memory of WordDecoding during training and inference This speedup and smaller memory consumption are achieved due to the fewer number of decoding steps of PtrNetDecoding compared to WordDecoding PtrNetDecoding extracts an entire tuple at each time step whereas WordDecoding extracts just one word at each time step and so requires eight time steps on average to extract a tuple assuming that the average length of an entity is two The softmax operation at the projection layer of WordDecoding is applied across the entire vocabulary and the vocabulary size can be large more than 40000 for our datasets In case of PtrNetDecoding the softmax operation is applied across the sentence length maximum of 100 in our experiments and across the relation set 24 and 29 for our datasets The costly softmax operation and the higher number of decoding time steps significantly increase the training and inference time for WordDecoding The encoderdecoder model proposed by BIBREF9 BIBREF9 faces a similar softmaxrelated problem as their target vocabulary contains the entire Wikidata entity IDs and relation IDs which is in the millions HRL which uses a deep reinforcement learning algorithm takes around 8x more time to train than PtrNetDecoding with a similar GPU configuration The speedup and smaller memory consumption will 
be useful when we move from sentencelevel extraction to documentlevel extraction since document length is much higher than sentence length and a document contains a higher number of tuples The relation tuples extracted by a joint model can be erroneous for multiple reasons such as i extracted entities are wrong ii extracted relations are wrong iii pairings of entities with relations are wrong To see the effects of the first two reasons we analyze the performance of HRL and our models on entity generation and relation generation separately For entity generation we only consider those entities which are part of some tuple For relation generation we only consider the relations of the tuples We include the performance of our two models and HRL on entity generation and relation generation in Table TABREF20 Our proposed models perform better than HRL on both tasks Comparing our two models PtrNetDecoding performs better than WordDecoding on both tasks although WordDecoding achieves higher F1 scores in tuple extraction This suggests that PtrNetDecoding makes more errors while pairing the entities with relations We further analyze the outputs of our models and HRL to determine the errors due to ordering of entities Order mismatch of the first entity Ent1 and mismatch of the second entity Ent2 in Table TABREF21 WordDecoding generates fewer errors than the other two models in all the categories and thus achieves the highest F1 scores on both datasets Traditionally researchers BIBREF0 BIBREF1 BIBREF2 BIBREF16 BIBREF17 BIBREF18 BIBREF19 BIBREF20 BIBREF21 BIBREF22 BIBREF23 BIBREF24 BIBREF25 used a pipeline approach for relation tuple extraction where relations were identified using a classification network after all entities were detected BIBREF26 BIBREF26 used an encoderdecoder model to extract multiple relations present between two given entities Recently some researchers BIBREF3 BIBREF4 BIBREF27 BIBREF28 tried to bring these two tasks closer together by sharing their parameters and optimizing them together BIBREF5 BIBREF5 used a sequence tagging scheme to jointly extract the entities and relations BIBREF6 BIBREF6 proposed an encoderdecoder model with copy mechanism to extract relation tuples with overlapping entities BIBREF11 BIBREF11 proposed a joint extraction model based on reinforcement learning RL BIBREF14 BIBREF14 used a graph convolution network GCN where they treated each token in a sentence as a node in a graph and edges were considered as relations BIBREF9 BIBREF9 used an Ngram attention mechanism with an encoderdecoder model for completion of knowledge bases using distant supervised data Encoderdecoder models have been used for many NLP applications such as neural machine translation BIBREF29 BIBREF10 BIBREF30 sentence generation from structured data BIBREF31 BIBREF32 and open information extraction BIBREF33 BIBREF34 Pointer networks BIBREF35 have been used to extract a text span from text for tasks such as question answering BIBREF36 BIBREF37 For the first time we use pointer networks with an encoderdecoder model to extract relation tuples from sentences Extracting relation tuples from sentences is a challenging task due to different length of entities the presence of multiple tuples and overlapping of entities among tuples In this paper we propose two novel approaches using encoderdecoder architecture to address this task Experiments on the New York Times NYT corpus show that our proposed models achieve significantly improved new stateoftheart F1 scores As future work we would like to 
explore our proposed models for a documentlevel tuple extraction task We would like to thank the anonymous reviewers for their valuable and constructive comments on this paper </s>
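A minimal sketch of the masking-based copy mechanism used by the WordDecoding model above: every vocabulary entry except the current source sentence tokens, the relation tokens, the separator tokens, UNK and EOS is given a score of negative infinity before the softmax, so its probability becomes zero. The use of PyTorch and the helper name masked_vocab_softmax are illustrative assumptions, not the authors' implementation.

import torch

def masked_vocab_softmax(logits, allowed_ids):
    # logits: raw scores over the full shared vocabulary at one decoding step
    # allowed_ids: ids of the source sentence tokens, relation tokens,
    # separator tokens, UNK and EOS for the current sentence
    mask = torch.full_like(logits, float('-inf'))
    mask[torch.tensor(allowed_ids)] = 0.0
    return torch.softmax(logits + mask, dim=-1)  # masked-out words get probability zero

A similarly hedged sketch of the inference step of PtrNetDecoding described above, which picks the two entity spans that maximize the product of the four pointer probabilities while keeping the spans disjoint; the function names and the exhaustive search over candidate spans are assumptions made for clarity, not the released code.

def best_span(start_p, end_p, banned=None):
    # start_p, end_p: per-token start and end probabilities from one pointer network
    # banned: an already chosen (b, e) span that the new span must not overlap
    n, best, best_score = len(start_p), None, -1.0
    for b in range(n):
        for e in range(b, n):
            if banned is not None and not (e < banned[0] or b > banned[1]):
                continue  # overlaps the already chosen entity
            if start_p[b] * end_p[e] > best_score:
                best, best_score = (b, e), start_p[b] * end_p[e]
    return best, best_score

def select_entity_pair(s1, e1, s2, e2):
    # try entity 1 first, then entity 2 outside it, then the reverse order,
    # and keep whichever order gives the larger product of the four probabilities
    span1, sc1 = best_span(s1, e1)
    span2, sc2 = best_span(s2, e2, banned=span1)
    alt2, asc2 = best_span(s2, e2)
    alt1, asc1 = best_span(s1, e1, banned=alt2)
    return (span1, span2) if sc1 * sc2 >= asc1 * asc2 else (alt1, alt2)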
<s>[INST] Learning to Rank Scientific Documents from the Crowd [/INST] blockI1em blocki1em Learning to Rank Scientific Documents from the CrowdLearning to Rank Scientific Documents from the Crowd 4 11 The number of biomedical research papers published has increased dramatically in recent years As of October 2016 PubMed houses over 26 million citations with almost 1 million from the first 3 quarters of 2016 alone It has become impossible for any one person to actually read all of the work being published We require tools to help us determine which research articles would be most informative and related to a particular question or document For example a common task when reading articles is to find articles that are most related to another Major research search engines offer such a related articles feature However we propose that instead of measuring relatedness by textsimilarity measures we build a model that is able to infer relatedness from the authors judgments BIBREF0 consider two kinds of queries important to bibliographic information retrieval the first is a search query written by the user and the second is a request for documents most similar to a document already judged relevant by the user Such a querybydocument or querybyexample system has been implemented in the de facto scientific search engine PubMedcalled Related Citation Search BIBREF1 show that 19 of all PubMed searches performed by users have at least one click on a related article Google Scholar provides a similar Related Articles system Outside of bibliographic retrieval querybydocument systems are commonly used for patent retrieval Internet search and plagiarism detection amongst others Most work in the area of querybydocument uses textbased similarity measures BIBREF2 BIBREF3 BIBREF4 However scientific research is hypothesis driven and therefore we question whether textbased similarity alone is the best model for bibliographic retrieval In this study we asked authors to rank documents by closeness to their work The definition of closeness was left for the authors to interpret as the goal is to model which documents the authors subjectively feel are closest to their own Throughout the paper we will use closeness and relatedness interchangeably We found that researchers ranking by closeness differs significantly from the ranking provided by a traditional IR system Our contributions are three fold The principal ranking algorithms of querybydocument in bibliographic information retrieval rely mainly on text similarity measures BIBREF1 BIBREF0 For example the foundational work of BIBREF0 introduced the concept of a document neighborhood in which they precompute a textsimilarity based distance between each pair of documents When a user issues a query first an initial set of related documents is retrieved Then the neighbors of each of those documents is retrieved ie documents with the highest text similarity to those in the initial set In a later work BIBREF1 develop the PMRA algorithm for PubMed related article search PMRA is an unsupervised probabilistic topic model that is trained to model relatedness between documents BIBREF5 introduce the competing algorithm FindSimilar for this task treating the full text of documents as a query and selecting related documents from the results Outside bibliographic IR prior work in querybydocument includes patent retrieval BIBREF6 BIBREF3 finding related documents given a manuscript BIBREF1 BIBREF7 and web page search BIBREF8 BIBREF9 Much of the work focuses on generating shorter queries 
from the lengthy document For example nounphrase extraction has been used for extracting short descriptive phrases from the original lengthy text BIBREF10 Topic models have been used to distill a document into a set of topics used to form query BIBREF11 BIBREF6 generated queries using the top TFIDF weighted terms in each document BIBREF4 suggested extracting phrasal concepts from a document which are then used to generate queries BIBREF2 combined query extraction and pseudorelevance feedback for patent retrieval BIBREF9 employ supervised machine learning model ie Conditional Random Fields BIBREF12 for query generation BIBREF13 explored ontology to identify chemical concepts for queries There are also many biomedicaldocument specific search engines available Many information retrieval systems focus on question answering systems such as those developed for the TREC Genomics Track BIBREF14 or BioASQ QuestionAnswer BIBREF15 competitions Systems designed for questionanswering use a combination of natural language processing techniques to identify biomedical entities and then information retrieval systems to extract relevant answers to questions Systems like those detailed in BIBREF16 can provide answers to yesno biomedical questions with high precision However what we propose differs from these systems in a fundamental way given a specific document suggest the most important documents that are related to it The body of work most related to ours is that of citation recommendation The goal of citation recommendation is to suggest a small number of publications that can be used as high quality references for a particular article BIBREF17 BIBREF1 Topic models have been used to rank articles based on the similarity of latent topic distribution BIBREF11 BIBREF18 BIBREF1 These models attempt to decompose a document into a few important keywords Specifically these models attempt to find a latent vector representation of a document that has a much smaller dimensionality than the document itself and compare the reduced dimension vectors Citation networks have also been explored for ranking articles by importance ie authority BIBREF19 BIBREF20 BIBREF17 introduced heterogeneous network models called metapath based models to incorporate venues the conference where a paper is published and content the term which links two articles for citation recommendation Another highly relevant work is BIBREF8 who decomposed a document to represent it with a compact vector which is then used to measure the similarity with other documents Note that we exclude the work of contextaware recommendation which analyze each citations local context which is typically short and does not represent a full document One of the key contributions of our study is an innovative approach for automatically generating a querybydocument gold standard Crowdsourcing has generated large databases including Wikipedia and Freebase Recently BIBREF21 concluded that unpaid participants performed better than paid participants for question answering They attribute this to unpaid participants being more intrinsically motivated than the paid test takers they performed the task for fun and already had knowledge about the subject being tested In contrast another study BIBREF22 compared unpaid workers found through Google Adwords GA to paid workers found through Amazon Mechanical Turk AMT They found that the paid participants from AMT outperform the unpaid ones This is attributed to the paid workers being more willing to look up information they didnt know 
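One of the query-generation strategies mentioned above builds a short query from the top TF-IDF weighted terms of the document. A minimal sketch of that idea follows; the scikit-learn vectorizer, the English stop-word list and the cutoff of ten terms are assumptions for illustration, not details taken from the cited work.

from sklearn.feature_extraction.text import TfidfVectorizer

def top_tfidf_query(document, corpus, k=10):
    vectorizer = TfidfVectorizer(stop_words='english')
    vectorizer.fit(corpus)                            # IDF weights come from the corpus
    weights = vectorizer.transform([document]).toarray()[0]
    terms = vectorizer.get_feature_names_out()
    top = weights.argsort()[::-1][:k]                 # indices of the k highest-weighted terms
    return ' '.join(terms[i] for i in top if weights[i] > 0)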
In the bibliographic domain authors of scientific publications have contributed annotations BIBREF23 They found that authors are more willing to annotate their own publications BIBREF23 than to annotate other publications BIBREF24 even though they are paid In this work our annotated dataset was created by the unpaid authors of the articles In order to develop and evaluate ranking algorithms we need a benchmark dataset However to the best of our knowledge we know of no openly available benchmark dataset for bibliographic querybydocument systems We therefore created such a benchmark dataset The creation of any benchmark dataset is a daunting laborintensive task and in particular challenging in the scientific domain because one must master the technical jargon of a scientific article and such experts are not easy to find when using traditional crowdsourcing technologies eg AMT For our task the ideal annotator for each of our articles are the authors themselves The authors of a publication typically have a clear knowledge of the references they cite and their scientific importance to their publication and therefore may be excellent judges for ranking the reference articles Given the full text of a scientific publication we want to rank its citations according to the authors judgments We collected recent publications from the openaccess PLoS journals and asked the authors to rank by closeness five citations we selected from their paper PLoS articles were selected because its journals cover a wide array of topics and the full text articles are available in XML format We selected the most recent publications as previous work in crowdsourcing annotation shows that authors willingness to participate in an unpaid annotation task declines with the age of publication BIBREF23 We then extracted the abstract citations full text authors and corresponding author email address from each document The titles and abstracts of the citations were retrieved from PubMed and the cosine similarity between the PLoS abstract and the citations abstract was calculated We selected the top five most similar abstracts using TFIDF weighted cosine similarity shuffled their order and emailed them to the corresponding author for annotation We believe that ranking five articles rather than the entire collection of the references is a more manageable task for an author compared to asking them to rank all references Because the documents to be annotated were selected based on text similarity they also represent a challenging baseline for models based on textsimilarity features In total 416 authors were contacted and 92 responded 22 response rate Two responses were removed from the dataset for incomplete annotation We asked authors to rank documents by how close to your work they were The definition of closeness was left to the discretion of the author The dataset is composed of 90 annotated documents with 5 citations each ranked 1 to 5 where 1 is least relevant and 5 is most relevant for a total of 450 annotated citations Learningtorank is a technique for reordering the results returned from a search engine query Generally the initial query to a search engine is concerned more with recall than precision the goal is to obtain a subset of potentially related documents from the corpus Then given this set of potentially related documents learningtorank algorithms reorder the documents such that the most relevant documents appear at the top of the list This process is illustrated in Figure FIGREF6 There are three basic types of 
learningtorank algorithms pointwise pairwise and listwise Pointwise algorithms assign a score to each retrieved document and rank them by their scores Pairwise algorithms turn learningtorank into a binary classification problem obtaining a ranking by comparing each individual pair of documents Listwise algorithms try to optimize an evaluation parameter over all queries in the dataset Support Vector Machine SVM BIBREF25 is a commonly used supervised classification algorithm that has shown good performance over a range of tasks SVM can be thought of as a binary linear classifier where the goal is to maximize the size of the gap between the classseparating line and the points on either side of the line This helps avoid overfitting on the training data SVMRank is a modification to SVM that assigns scores to each data point and allows the results to be ranked BIBREF26 We use SVMRank in the experiments below SVMRank has previously been used in the task of document retrieval in BIBREF27 for a more traditional short query task and has been shown to be a topperforming system for ranking SVMRank is a pointwise learningtorank algorithm that returns scores for each document We rank the documents by these scores It is possible that sometimes two documents will have the same score resulting in a tie In this case we give both documents the same rank and then leave a gap in the ranking For example if documents 2 and 3 are tied their ranked list will be 5 3 3 2 1 Models are trained by randomly splitting the dataset into 70 training data and 30 test data We apply a random subsampling approach where the dataset is randomly split trained and tested 100 times due to the relatively small size of the data A model is learned for each split and a ranking is produced for each annotated document We test three different supervised models The first supervised model uses only text similarity features the second model uses all of the features and the third model runs forward feature selection to select the best performing combination of features We also test using two different models trained on two different datasets one trained using the gold standard annotations and another trained using the judgments based on text similarity that were used to select the citations to give to the authors We tested several different learning to rank algorithms for this work We found in preliminary testing that SVMRank had the best performance so it will be used in the following experiments Each citation is turned into a feature vector representing the relationship between the published article and the citation Four types of features are used text similarity citation count and location age of the citation and the number of times the citation has appeared in the literature citation impact Text similarity features measure the similarity of the words used in different parts of the document In this work we calculate the similarity between a document INLINEFORM0 and a document it cites INLINEFORM1 by transforming the their text into term vectors For example to calculate the similarity of the abstracts between INLINEFORM2 and INLINEFORM3 we transform the abstracts into two term vectors INLINEFORM4 and INLINEFORM5 The length of each of the term vectors is INLINEFORM6 We then weight each word by its Termfrequency Inversedocument frequency TFIDF weight TFIDF is a technique to give higher weight to words that appear frequently in a document but infrequently in the corpus Term frequency is simply the number of times that a word INLINEFORM7 appears 
in a document Inversedocument frequency is the logarithmicallyscaled fraction of documents in the corpus in which the word INLINEFORM8 appears Or more specifically INLINEFORM9 where INLINEFORM0 is the total number of documents in the corpus and the denominator is the number of documents in which a term INLINEFORM1 appears in the corpus INLINEFORM2 Then TFIDF is defined as INLINEFORM3 where INLINEFORM0 is a term INLINEFORM1 is the document and INLINEFORM2 is the corpus For example the word the may appear often in a document but because it also appears in almost every document in the corpus it is not useful for calculating similarity thus it receives a very low weight However a word such as neurogenesis may appear often in a document but does not appear frequently in the corpus and so it receives a high weight The similarity between term vectors is then calculated using cosine similarity INLINEFORM3 where INLINEFORM0 and INLINEFORM1 are two term vectors The cosine similarity is a measure of the angle between the two vectors The smaller the angle between the two vectors ie the more similar they are then the closer the value is to 1 Conversely the more dissimilar the vectors the closer the cosine similarity is to 0 We calculate the text similarity between several different sections of the document INLINEFORM0 and the document it cites INLINEFORM1 From the citing article INLINEFORM2 we use the title full text abstract the combined discussionconclusion sections and the 10 words on either side of the place in the document where the actual citation occurs From the document it cites INLINEFORM3 we only use the title and the abstract due to limited availability of the full text In this work we combine the discussion and conclusion sections of each document because some documents have only a conclusion section others have only a discussion and some have both The similarity between each of these sections from the two documents is calculated and used as features in the model The age of the citation may be relevant to its importance As a citation ages we hypothesize that it is more likely to become a foundational citation rather than one that directly influenced the development of the article Therefore more recent citations may be more likely relevant to the article Similarly citation impact that is the number of times a citation has appeared in the literature as measured by Google Scholar may be an indicator of whether or not an article is foundational rather than directly related We hypothesize that the fewer times an article is cited in the literature the more impact it had on the article at hand We also keep track of the number of times a citation is mentioned in both the full text and discussionconclusion sections We hypothesize that if a citation is mentioned multiple times it is more important than citations that are mentioned only once Further citations that appear in the discussionconclusion sections are more likely to be crucial to understanding the results We normalize the counts of the citations by the total number of citations in that section In total we select 15 features shown in Table TABREF15 The features are normalized within each document so that each of citation features is on a scale from 0 to 1 and are evenly distributed within that range This is done because some of the features such as years since citation are unbounded We compare our system to a variety of baselines 1 Rank by the number of times a citation is mentioned in the document 2 Rank by the number of times the citation 
is cited in the literature citation impact 3 Rank using Google Scholar Related Articles 4 Rank by the TFIDF weighted cosine similarity 5 Rank using a learningtorank model trained on text similarity rankings The first two baseline systems are models where the values are ordered from highest to lowest to generate the ranking The idea behind them is that the number of times a citation is mentioned in an article or the citation impact may already be good indicators of their closeness The text similarity model is trained using the same features and methods used by the annotation model but trained using text similarity rankings instead of the authors judgments We also compare our rankings to those found on the popular scientific article search engine Google Scholar Google Scholar is a black box IR system they do not release details about which features they are using and how they judge relevance of documents Google Scholar provides a Related Articles feature for each document in its index that shows the top 100 related documents for each article To compare our rankings we search through these related documents and record the ranking at which each of the citations we selected appeared We scale these rankings such that the lowest ranked article from Google Scholar has the highest relevance ranking in our set If the cited document does not appear in the set we set its relevanceranking equal to one below the lowest relevance ranking found Four comparisons are performed with the Google Scholar data 1 We first train a model using our gold standard and see if we can predict Google Scholars ranking 2 We compare to a baseline of using Google Scholars rankings to train and compare with their own rankings using our feature set 3 Then we train a model using Google Scholars rankings and try to predict our gold standard 4 We compare it to the model trained on our gold standard to predict our gold standard Normalized Discounted Cumulative Gain NDCG is a common measure for comparing a list of estimated document relevance judgments with a list of known judgments BIBREF28 To calculate NDCG we first calculate a rankings Discounted Cumulative Gain DCG as DISPLAYFORM0 where rel INLINEFORM0 is the relevance judgment at position INLINEFORM1 Intuitively DCG penalizes retrieval of documents that are not relevant rel INLINEFORM2 However DCG is an unbounded value In order to compare the DCG between two models we must normalize it To do this we use the ideal DCG IDCG ie the maximum possible DCG given the relevance judgments The maximum possible DCG occurs when the relevance judgments are in the correct order DISPLAYFORM0 The NDCG value is in the range of 0 to 1 where 0 means that no relevant documents were retrieved and 1 means that the relevant documents were retrieved and in the correct order of their relevance judgments Kendalls INLINEFORM0 is a measure of the correlation between two ranked lists It compares the number of concordant pairs with the number of discordant pairs between each list A concordant pair is defined over two observations INLINEFORM1 and INLINEFORM2 If INLINEFORM3 and INLINEFORM4 then the pair at indices INLINEFORM5 is concordant that is the ranking at INLINEFORM6 in both ranking sets INLINEFORM7 and INLINEFORM8 agree with each other Similarly a pair INLINEFORM9 is discordant if INLINEFORM10 and INLINEFORM11 or INLINEFORM12 and INLINEFORM13 Kendalls INLINEFORM14 is then defined as DISPLAYFORM0 where C is the number of concordant pairs D is the number of discordant pairs and the denominator represents 
the total number of possible pairs Thus Kendalls INLINEFORM0 falls in the range of INLINEFORM1 where 1 means that the ranked lists are perfectly negatively correlated 0 means that they are not significantly correlated and 1 means that the ranked lists are perfectly correlated One downside of this measure is that it does not take into account where in the ranked list an error occurs Information retrieval in general cares more about errors near the top of the list rather than errors near the bottom of the list AveragePrecision INLINEFORM0 BIBREF29 or INLINEFORM1 extends on Kendalls INLINEFORM2 by incorporating the position of errors If an error occurs near the top of the list then that is penalized heavier than an error occurring at the bottom of the list To achieve this INLINEFORM3 incorporates ideas from the popular Average Precision measure were we calculate the precision at each index of the list and then average them together INLINEFORM4 is defined as DISPLAYFORM0 Intuitively if an error occurs at the top of the list then that error is propagated into each iteration of the summation meaning that its penalty is added multiple times INLINEFORM0 s range is between 1 and 1 where 1 means the lists are perfectly negatively correlated 0 means that they are not significantly correlated and 1 means that they are perfectly correlated Forward feature selection was performed by iteratively testing each feature one at a time The highest performing feature is kept in the model and another sweep is done over the remaining features This continues until all features have been selected This approach allows us to explore the effect of combinations of features and the effect of having too many or too few features It also allows us to evaluate which features and combinations of features are the most powerful We first compare our gold standard to the baselines A random baseline is provided for reference Because all of the documents that we rank are relevant NDCG will be fairly high simply by chance We find that the number of times a document is mentioned in the annotated document is significantly better than the random baseline or the citation impact The more times a document is mentioned in a paper the more likely the author was to annotate it as important Interestingly we see a negative correlation with the citation impact The more times a document is mentioned in the literature the less likely it is to be important These results are shown in Table TABREF14 Next we rank the raw values of the features and compare them to our gold standard to obtain a baseline Table TABREF15 The best performing text similarity feature is the similarity between the abstract of the annotated document and the abstract of the cited document However the number of times that a cited document is mentioned in the text of the annotated document are also highscoring features especially in the INLINEFORM0 correlation coefficient These results indicate that text similarity alone may not be a good measure for judging the rank of a document Next we test three different feature sets for our supervised learningtorank models The model using only the text similarity features performs poorly NDCG stays at baseline and the correlation measures are low Models that incorporate information about the age number of times a cited document was referenced and the citation impact of that document in addition to the text similarity features significantly outperformed models that used only text similarity features INLINEFORM0 Because INLINEFORM1 takes 
into account the position in the ranking of the errors this indicates that the All Features model was able to better correctly place highly ranked documents above lower ranked ones Similarly because Kendalls INLINEFORM2 is an overall measure of correlation that does not take into account the position of errors the higher value here means that more rankings were correctly placed Interestingly feature selection which is optimized for NDCG does not outperform the model using all of the features in terms of our correlation measures The features chosen during forward feature selection are 1 the citation impact 2 number of mentions in the full text 3 text similarity between the annotated documents title and the referenced documents abstract 4 the text similarity between the annotated documents discussionconclusion section and the referenced documents title These results are shown in Table TABREF16 The models trained on the text similarity judgments perform worse than the models trained on the annotated data However in terms of both NDCG and the correlation measures they perform significantly better than the random baseline Next we compare our model to Google Scholars rankings Using the ranking collected from Google Scholar we build a training set to try to predict our authors rankings We find that Google Scholar performs similarly to the textonly features model This indicates that the rankings we obtained from the authors are substantially different than the rankings that Google Scholar provides Results appear in Table TABREF17 We found that authors rank the references they cite substantially differently from rankings based on textsimilarity Our results show that decomposing a document into a set of features that is able to capture that difference is key While text similarity is indeed important as evidenced by the Similarityaa feature in Table TABREF15 we also found that the number of times a document is referenced in the text and the number of times a document is referenced in the literature are also both important features via feature selection The more often a citation is mentioned in the text the more likely it is to be important This feature is often overlooked in article citation recommendation We also found that recency is important the age of the citation is negatively correlated with the rank Newer citations are more likely to be directly important than older more foundational citations Additionally the number of times a document is cited in the literature is negatively correlated with rank This is likely due to highly cited documents being more foundational works they may be older papers that are important to the field but not directly influential to the new work The model trained using the authors judgments does significantly better than the model trained using the textsimilaritybased judgments An error analysis was performed to find out why some of the rankings disagreed with the authors annotations We found that in some cases our features were unable to capture the relationship for example a biomedical document applying a model developed in another field to the dataset may use very different language to describe the model than the citation Previous work adopting topic models to query document search may prove useful for such cases A small subset of features ended up performing as well as the full list of features The number of times a citation was mentioned and the citation impact score in the literature ended up being two of the most important features Indeed without the 
citationbased features the model performs as though it were trained with the textsimilarity rankings Feature engineering is a part of any learningtorank system especially in domainspecific contexts Citations are an integral feature of our dataset For learningtorank to be applied to other datasets feature engineering must also occur to exploit the unique properties of those datasets However we show that combining the domainspecific features with more traditional textbased features does improve the models scores over simply using the domainspecific features themselves Interestingly citation impact and age of the citation are both negatively correlated with rank We hypothesize that this is because both measures can be indicators of recency a new publication is more likely to be directly influenced by more recent work Many other related search tools however treat the citation impact as a positive feature of relatedness documents with a higher citation impact appear higher on the list of related articles than those with lower citation impacts This may be the opposite of what the user actually desires We also found that rankings from our textsimilarity based IR system or Google Scholars IR system were unable to rank documents by the authors annotations as well as our system In one sense this is reasonable the rankings coming from these systems were from a different system than the author annotations However in domainspecific IR domain experts are the best judges We built a system that exploits these expert judgments The text similarity and Google Scholar models were able to do this to some extent performing above the random baseline but not on the level of our model Additionally we observe that NDCG may not be the most appropriate measure for comparing short ranked lists where all of the documents are relevant to some degree NDCG gives a lot of credit to relevant documents that occur in the highest ranks However all of the documents here are relevant just to varying degrees Thus NDCG does not seem to be the most appropriate measure as is evident in our scores The correlation coefficients from Kendalls INLINEFORM0 and INLINEFORM1 seem to be far more appropriate for this case as they are not concerned with relevance only ranking One limitation of our work is that we selected a small set of references based on their similarities to the article that cites them Ideally we would have had authors rank all of their citations for us but this would have been a daunting task for authors to perform We chose to use the Google Scholar dataset in order to attempt to mitigate this we obtain a ranking for the set of references from a system that is also ranking many other documents The five citations selected by TFIDF weighted cosine similarity represent a hard gold standard we are attempting to rank documents that are known to all be relevant by their nature and have high similarity with the text Additionally there are plethora of other more expensive features we could explore to improve the model Citation network features phrasal concepts and topic models could all be used to help improve our results at the cost of computational complexity We have developed a model for fast relateddocument ranking based on crowdsourced data The model data and data collection software are all publicly available and can easily be used in future applications as an automatic search to help users find the most important citations given a particular document The experimental setup is portable to other datasets with some feature 
engineering We identified several domainspecific features that were crucial to our model and showed that adding more traditional textbased features improves on the results of using the domainspecific features alone Querybydocument is a complicated and challenging task We provide an approach with an easily obtained dataset and a computationally inexpensive model By working with biomedical researchers we were able to build a system that ranks documents in a quantitatively different way than previous systems and to provide a tool that helps researchers find related documents We would like to thank all of the authors who took the time to answer our citation ranking survey This work is supported by the National Institutes of Health under grant number 1R01GM095476 The funders had no role in study design data collection and analysis decision to publish or preparation of the manuscript </s>
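The evaluation just described compares a model's predicted reference ranking against the author-provided gold ranking using NDCG and rank-correlation coefficients. The sketch below shows one way such measures can be computed; it is a minimal illustration rather than the authors' code, and the toy rankings, the exponential gain function, and the use of Spearman's rho as the second correlation measure are assumptions on my part.

# Minimal sketch of ranking-evaluation measures (NDCG, Kendall's tau, Spearman's rho);
# illustrative only, not the authors' implementation.
import numpy as np
from scipy.stats import kendalltau, spearmanr

def dcg(rels):
    rels = np.asarray(rels, dtype=float)
    return np.sum((2.0 ** rels - 1.0) / np.log2(np.arange(2, rels.size + 2)))

def ndcg(rels_in_predicted_order):
    ideal = dcg(np.sort(np.asarray(rels_in_predicted_order))[::-1])
    return dcg(rels_in_predicted_order) / ideal if ideal > 0 else 0.0

# Hypothetical data: graded relevance the author assigned to five references
# (4 = most important), listed in the order the model ranked them.
model_order_relevance = [3, 4, 2, 0, 1]

# The same five documents as rank positions given by the author and by the model.
author_positions = [1, 2, 3, 4, 5]
model_positions = [2, 1, 3, 5, 4]

tau, _ = kendalltau(author_positions, model_positions)
rho, _ = spearmanr(author_positions, model_positions)
print("NDCG@5:", round(ndcg(model_order_relevance), 3))
print("Kendall tau:", round(tau, 3), "Spearman rho:", round(rho, 3))

Because Kendall's tau and Spearman's rho score only the agreement of orderings, not the positions of errors, they behave differently from NDCG on short lists in which every document is somewhat relevant, which is the contrast discussed above.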
<s>[INST] Exploiting Deep Learning for Persian Sentiment Analysis [/INST] In recent years social media forums blogs and other forms of online communication tools have radically affected everyday life especially how people express their opinions and comments The extraction of useful information such as peoples opinion about companies brand from the huge amount of unstructured data is vital for most companies and organizations BIBREF0 The product reviews are important for business owners as they can take business decision accordingly to automatically classify users opinions towards products and services The application of sentiment analysis is not limited to product or movie reviews but can be applied to different fields such as news politics sport etc For example in online political debates the sentiment analysis can be used to identify peoples opinions on a certain election candidate or political parties BIBREF1 BIBREF2 BIBREF3 In this context sentiment analysis has been widely used in different languages by using traditional and advanced machine learning techniques However limited research has been conducted to develop models for the Persian language The sentiment analysis is a method to automatically process large amounts of data and classify text into positive or negative sentiments BIBREF4 BIBREF5 Sentiment analysis can be performed at two levels at the document level or at the sentence level At document level it is used to classify the sentiment expressed in the document positive or negative whereas at sentence level is used to identify the sentiments expressed only in the sentence under analysis BIBREF6 BIBREF7 In the literature deep learning based automated feature extraction has been shown to outperform stateoftheart manual feature engineering based classifiers such as Support Vector Machine SVM Naive Bayes NB or Multilayer Perceptron MLP etc One of the important techniques in deep learning is the autoencoder that generally involves reducing the number of feature dimensions under consideration The aim of dimensionality reduction is to obtain a set of principal variables to improve the performance of the approach Similarly CNNs have been proven to be very effective in sentiment analysis However little work has been carried out to exploit deep learning based feature representation for Persian sentiment analysis BIBREF8 BIBREF9 In this paper we present two deep learning models deep autoencoders and CNNs for Persian sentiment analysis The obtained deep learning results are compared with MLP The rest of the paper is organized as follows Section 2 presents related work Section 3 presents methodology and experimental results Finally section 4 concludes this paper In the literature extensive research has been carried out to model novel sentiment analysis models using both shallow and deep learning algorithms For example the authors in BIBREF10 proposed a novel deep learning approach for polarity detection in product reviews The authors addressed two major limitations of stacked denoising of autoencoders high computational cost and the lack of scalability of high dimensional features Their experimental results showed the effectiveness of proposed autoencoders in achieving accuracy upto 87 Zhai et al BIBREF11 proposed a five layers autoencoder for learning the specific representation of textual data The autoencoders are generalised using loss function and derived discriminative loss function from label information The experimental results showed that the model outperformed bag of words 
denoising autoencoders and other traditional methods achieving accuracy rate up to 85 Sun et al BIBREF12 proposed a novel method to extract contextual information from text using a convolutional autoencoder architecture The experimental results showed that the proposed model outperformed traditional SVM and Nave Bayes models reporting accuracy of 831 639 and 678 respectively Su et al BIBREF13 proposed an approach for a neural generative autoencoder for learning bilingual word embedding The experimental results showed the effectiveness of their approach on EnglishChinese EnglishGerman EnglishFrench and EnglishSpanish 7536 accuracy Kim et al BIBREF14 proposed a method to capture the nonlinear structure of data using CNN classifier The experimental results showed the effectiveness of the method on the multidomain dataset movie reviews and product reviews However the disadvantage is only SVM and Naive Bayes classifiers are used to evaluate the performance of the method and deep learning classifiers are not exploited Zhang et al BIBREF15 proposed an approach using deep learning classifiers to detect polarity in Japanese movie reviews The approach used denoising autoencoder and adapted to other domains such as product reviews The advantage of the approach is not depended on any language and could be used for various languages by applying different datasets AP et al BIBREF16 proposed a CNN based model for crosslanguage learning of vectorial word representations that is coherent between two languages The method is evaluated using English and German movie reviews dataset The experimental results showed CNN 8345 accuracy outperformed as compared to SVM 6525 accuracy Zhou et al BIBREF17 proposed an autoencoder architecture constituting an LSTMencoder and decoder in order to capture features in the text and reduce dimensionality of data The LSTM encoder used the interactive scheme to go through the sequence of sentences and LSTM decoder reconstructed the vector of sentences The model is evaluated using different datasets such as book reviews DVD reviews and music reviews acquiring accuracy up to 8105 8106 and 7940 respectively Mesnil et al BIBREF18 proposed an approach using ensemble classification to detect polarity in the movie reviews The authors combined several machine learning algorithms such as SVM Naive Bayes and RNN to achieve better results where autoencoders were used to reduce the dimensionality of features The experimental results showed the combination of unigram bigram and trigram features 9187 accuracy outperformed unigram 9156 accuracy and bigram 8861 accuracy Scheible et al BIBREF19 trained an approach using semisupervised recursive autoencoder to detect polarity in movie reviews dataset consisted of 5000 positive and 5000 negative sentiments The experimental results demonstrated that the proposed approach successfully detected polarity in movie reviews dataset 8313 accuracy and outperformed standard SVM 6836 accuracy model Dai et al BIBREF20 developed an autoencoder to detect polarity in the text using deep learning classifier The LSTM was trained on IMDB movie reviews dataset The experimental results showed the outperformance of their proposed approach over SVM In table 1 some of the autoencoder approaches are depicted The novel dataset used in this work was collected manually and includes Persian movie reviews from 2014 to 2016 A subset of dataset was used to train the neural network 60 training dataset and rest of the data 40 was used to test and validate the performance of the 
trained neural network testing set 30 validation set 10 There are two types of labels in the dataset positive or negative The reviews were manually annotated by three native Persian speakers aged between 30 and 50 years old After data collection the corpus was preprocessed using tokenisation normalisation and stemming techniques The process of converting sentences into single word or token is called tokenisation For example The movie is great is changed to The movie is great BIBREF21 There are some words which contain numbers For example great is written as gr8 or gooood as written as good The normalisation is used to convert these words into normal forms BIBREF22 The process of converting words into their root is called stemming For example going was changed to go BIBREF23 Words were converted into vectors The fasttext was used to convert each word into 300dimensions vectors Fasttext is a library for text classification and representation BIBREF24 BIBREF25 BIBREF9 For classification MLP autoencoders and CNNs have been used Fig 1 depicts the modelled MLP architectures MLP classifer was trained for 100 iterations BIBREF26 Fig 2 depicts the modelled autoencoder architecture Autoencoder is a feedforward deep neural network with unsupervised learning and it is used for dimensionality reduction The autoencoder consists of input output and hidden layers Autoencoder is used to compress the input into a latentspace and then the output is reconstructed BIBREF27 BIBREF28 BIBREF29 The exploited autoencoder model is depcited in Fig 1 The autoencoder consists of one input layer three hidden layers 1500 512 1500 and an output layer Convolutional Neural Networks contains three layers input hidden and output layer The hidden layer consists of convolutional layers pooling layers fully connected layers and normalisation layer The INLINEFORM0 is denotes the hidden neurons of j with bias of INLINEFORM1 is a weight sum over continuous visible nodes v which is given by DISPLAYFORM0 The modelled CNN architecture is depicted in Fig 3 BIBREF29 BIBREF28 For CNN modelling each utterance was represented as a concatenation vector of constituent words The network has total 11 layers 4 convolution layers 4 max pooling and 3 fully connected layers Convolution layers have filters of size 2 and with 15 feature maps Each convolution layer is followed by a max polling layer with window size 2 The last max pooling layer is followed by fully connected layers of size 5000 500 and 4 For final layer softmax activation is used To evaluate the performance of the proposed approach precision 1 recall 2 fMeasure 3 and prediction accuracy 4 have been used as a performance matrices The experimental results are shown in Table 1 where it can be seen that autoencoders outperformed MLP and CNN outperformed autoencoders with the highest achieved accuracy of 826 DISPLAYFORM0 DISPLAYFORM1 where TP is denotes true positive TN is true negative FP is false positive and FN is false negative Sentiment analysis has been used extensively for a wide of range of realworld applications ranging from product reviews surveys feedback to business intelligence and operational improvements However the majority of research efforts are devoted to Englishlanguage only where information of great importance is also available in other languages In this work we focus on developing sentiment analysis models for Persian language specifically for Persian movie reviews Two deep learning models deep autoencoders and deep CNNs are developed and compared with the the 
stateoftheart shallow MLP based machine learning model Simulation results revealed that our proposed CNN model outperformed both the autoencoders and the MLP In future work we intend to exploit more advanced deep learning models such as Long ShortTerm Memory LSTM and LSTMCNNs to further evaluate performance on our novel Persian dataset Amir Hussain and Ahsan Adeel were supported by the UK Engineering and Physical Sciences Research Council EPSRC grant NoEPM0269811 </s>
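The CNN described above for Persian movie-review classification (four convolution layers with filters of size 2 and 15 feature maps, each followed by max pooling with window size 2, then fully connected layers of size 5000, 500 and 4 with a final softmax over 300-dimensional fastText inputs) can be sketched roughly as follows. This is a reconstruction from the text, not the authors' code; the sequence length, padding, hidden activations, optimizer and loss are assumptions.

# Rough Keras sketch of the described CNN; unstated hyperparameters
# (max_tokens, padding, hidden activations, optimizer, loss) are assumptions.
from tensorflow.keras import layers, models

def build_sentiment_cnn(max_tokens=64, embed_dim=300, num_classes=4):
    model = models.Sequential()
    # first of the 4 convolution + max-pooling blocks; input is a sequence
    # of 300-dimensional fastText word vectors
    model.add(layers.Conv1D(15, 2, padding="same", activation="relu",
                            input_shape=(max_tokens, embed_dim)))
    model.add(layers.MaxPooling1D(2))
    for _ in range(3):
        model.add(layers.Conv1D(15, 2, padding="same", activation="relu"))
        model.add(layers.MaxPooling1D(2))
    model.add(layers.Flatten())
    model.add(layers.Dense(5000, activation="relu"))
    model.add(layers.Dense(500, activation="relu"))
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_sentiment_cnn()
model.summary()

Precision, recall, F-measure and accuracy for the reported comparison would then be computed from the TP, TN, FP and FN counts of the resulting predictions in the usual way.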
<s>[INST] Talk the Walk: Navigating New York City through Grounded Dialogue [/INST] 0pt00303 0pt003003 0pt003003 We introduce Talk The Walk the first largescale dialogue dataset grounded in action and perception The task involves two agents a guide and a tourist that communicate via natural language in order to achieve a common goal having the tourist navigate to a given target location The task and dataset which are described in detail are challenging and their full solution is an open problem that we pose to the community We i focus on the task of tourist localization and develop the novel Masked Attention for Spatial Convolutions MASC mechanism that allows for grounding tourist utterances into the guides map ii show it yields significant improvements for both emergent and natural language communication and iii using this method we establish nontrivial baselines on the full task As artificial intelligence plays an ever more prominent role in everyday human lives it becomes increasingly important to enable machines to communicate via natural languagenot only with humans but also with each other Learning algorithms for natural language understanding such as in machine translation and reading comprehension have progressed at an unprecedented rate in recent years but still rely on static largescale textonly datasets that lack crucial aspects of how humans understand and produce natural language Namely humans develop language capabilities by being embodied in an environment which they can perceive manipulate and move around in and by interacting with other humans Hence we argue that we should incorporate all three fundamental aspects of human language acquisitionperception action and interactive communicationand develop a task and dataset to that effect We introduce the Talk the Walk dataset where the aim is for two agents a guide and a tourist to interact with each other via natural language in order to achieve a common goal having the tourist navigate towards the correct location The guide has access to a map and knows the target location but does not know where the tourist is the tourist has a 360degree view of the world but knows neither the target location on the map nor the way to it The agents need to work together through communication in order to successfully solve the task An example of the task is given in Figure FIGREF3 Grounded language learning has regained traction in the AI community and much attention is currently devoted to virtual embodimentthe development of multiagent communication tasks in virtual environmentswhich has been argued to be a viable strategy for acquiring natural language semantics BIBREF0 Various related tasks have recently been introduced but in each case with some limitations Although visually grounded dialogue tasks BIBREF1 BIBREF2 comprise perceptual grounding and multiagent interaction their agents are passive observers and do not act in the environment By contrast instructionfollowing tasks such as VNL BIBREF3 involve action and perception but lack natural language interaction with other agents Furthermore some of these works use simulated environments BIBREF4 andor templated language BIBREF5 which arguably oversimplifies real perception or natural language respectively See Table TABREF15 for a comparison Talk The Walk is the first task to bring all three aspects together perception for the tourist observing the world action for the tourist to navigate through the environment and interactive dialogue for the tourist and guide to work towards their 
common goal To collect grounded dialogues we constructed a virtual 2D grid environment by manually capturing 360views of several neighborhoods in New York City NYC As the main focus of our task is on interactive dialogue we limit the difficulty of the control problem by having the tourist navigating a 2D grid via discrete actions turning left turning right and moving forward Our street view environment was integrated into ParlAI BIBREF6 and used to collect a largescale dataset on Mechanical Turk involving human perception action and communication We argue that for artificial agents to solve this challenging problem some fundamental architecture designs are missing and our hope is that this task motivates their innovation To that end we focus on the task of localization and develop the novel Masked Attention for Spatial Convolutions MASC mechanism To model the interaction between language and action this architecture repeatedly conditions the spatial dimensions of a convolution on the communicated message sequence This work makes the following contributions 1 We present the first large scale dialogue dataset grounded in action and perception 2 We introduce the MASC architecture for localization and show it yields improvements for both emergent and natural language 4 Using localization models we establish initial baselines on the full task 5 We show that our best model exceeds human performance under the assumption of perfect perception and with a learned emergent communication protocol and sets a nontrivial baseline with natural language We create a perceptual environment by manually capturing several neighborhoods of New York City NYC with a 360 camera Most parts of the city are gridlike and uniform which makes it wellsuited for obtaining a 2D grid For Talk The Walk we capture parts of Hells Kitchen East Village the Financial District Williamsburg and the Upper East Sidesee Figure FIGREF66 in Appendix SECREF14 for their respective locations within NYC For each neighborhood we choose an approximately 5x5 grid and capture a 360 view on all four corners of each intersection leading to a gridsize of roughly 10x10 per neighborhood The tourists location is given as a tuple INLINEFORM0 where INLINEFORM1 are the coordinates and INLINEFORM2 signifies the orientation north east south or west The tourist can take three actions turn left turn right and go forward For moving forward we add INLINEFORM3 INLINEFORM4 INLINEFORM5 INLINEFORM6 to the INLINEFORM7 coordinates for the respective orientations Upon a turning action the orientation is updated by INLINEFORM8 where INLINEFORM9 for left and INLINEFORM10 for right If the tourist moves outside the grid we issue a warning that they cannot go in that direction and do not update the location Moreover tourists are shown different types of transitions a short transition for actions that bring the tourist to a different corner of the same intersection and a longer transition for actions that bring them to a new intersection The guide observes a map that corresponds to the tourists environment We exploit the fact that urban areas like NYC are full of local businesses and overlay the map with these landmarks as localization points for our task Specifically we manually annotate each corner of the intersection with a set of landmarks INLINEFORM0 each coming from one of the following categories Bar Playfield Bank Hotel Shop Subway Coffee Shop Restaurant Theater The rightside of Figure FIGREF3 illustrates how the map is presented Note that withinintersection 
transitions have a smaller grid distance than transitions to new intersections To ensure that the localization task is not too easy we do not include street names in the overhead map and keep the landmark categories coarse That is the dialogue is driven by uncertainty in the tourists current location and the properties of the target location if the exact location and orientation of the tourist were known it would suffice to communicate a sequence of actions For the Talk The Walk task we randomly choose one of the five neighborhoods and subsample a 4x4 grid one block with four complete intersections from the entire grid We specify the boundaries of the grid by the topleft and bottomright corners INLINEFORM0 Next we construct the overhead map of the environment ie INLINEFORM1 with INLINEFORM2 and INLINEFORM3 We subsequently sample a start location and orientation INLINEFORM4 and a target location INLINEFORM5 at random The shared goal of the two agents is to navigate the tourist to the target location INLINEFORM0 which is only known to the guide The tourist perceives a street view planar projection INLINEFORM1 of the 360 image at location INLINEFORM2 and can simultaneously chat with the guide and navigate through the environment The guides role consists of reading the tourist description of the environment building a mental map of their current position and providing instructions for navigating towards the target location Whenever the guide believes that the tourist has reached the target location they instruct the system to evaluate the tourists location The task ends when the evaluation is successfulie when INLINEFORM3 or otherwise continues until a total of three failed attempts The additional attempts are meant to ease the task for humans as we found that they otherwise often fail at the task but still end up close to the target location eg at the wrong corner of the correct intersection We crowdsourced the collection of the dataset on Amazon Mechanical Turk MTurk We use the MTurk interface of ParlAI BIBREF6 to render 360 images via WebGL and dynamically display neighborhood maps with an HTML5 canvas Detailed task instructions which were also given to our workers before they started their task are shown in Appendix SECREF15 We paired Turkers at random and let them alternate between the tourist and guide role across different HITs The Talk The Walk dataset consists of over 10k successful dialoguessee Table FIGREF66 in the appendix for the dataset statistics split by neighborhood Turkers successfully completed INLINEFORM0 of all finished tasks we use this statistic as the human success rate More than six hundred participants successfully completed at least one Talk The Walk HIT Although the Visual Dialog BIBREF2 and GuessWhat BIBREF1 datasets are larger the collected Talk The Walk dialogs are significantly longer On average Turkers needed more than 62 acts ie utterances and actions before they successfully completed the task whereas Visual Dialog requires 20 acts The majority of acts comprise the tourists actions with on average more than 44 actions per dialogue The guide produces roughly 9 utterances per dialogue slightly more than the tourists 8 utterances Turkers use diverse discourse with a vocabulary size of more than 10K calculated over all successful dialogues An example from the dataset is shown in Appendix SECREF14 The dataset is available at httpsgithubcomfacebookresearchtalkthewalk We investigate the difficulty of the proposed task by establishing initial baselines The final Talk 
The Walk task is challenging and encompasses several important subtasks ranging from landmark recognition to tourist localization and natural language instructiongiving Arguably the most important subtask is localization without such capabilities the guide can not tell whether the tourist reached the target location In this work we establish a minimal baseline for Talk The Walk by utilizing agents trained for localization Specifically we let trained tourist models undertake random walks using the following protocol at each step the tourist communicates its observations and actions to the guide who predicts the tourists location If the guide predicts that the tourist is at target we evaluate its location If successful the task ends otherwise we continue until there have been three wrong evaluations The protocol is given as pseudocode in Appendix SECREF12 The designed navigation protocol relies on a trained localization model that predicts the tourists location from a communicated message Before we formalize this localization subtask in Section UID21 we further introduce two simplifying assumptionsperfect perception and orientationagnosticismso as to overcome some of the difficulties we encountered in preliminary experiments paragraph4 01ex plus01ex minus1ex1em Perfect Perception Early experiments revealed that perceptual grounding of landmarks is difficult we set up a landmark classification problem on which models with extracted CNN BIBREF7 or text recognition features BIBREF8 barely outperform a random baselinesee Appendix SECREF13 for full details This finding implies that localization models from image input are limited by their ability to recognize landmarks and as a result would not generalize to unseen environments To ensure that perception is not the limiting factor when investigating the landmarkgrounding and actiongrounding capabilities of localization models we assume perfect perception in lieu of the 360 image view the tourist is given the landmarks at its current location More formally each state observation INLINEFORM0 now equals the set of landmarks at the INLINEFORM1 location ie INLINEFORM2 If the INLINEFORM3 location does not have any visible landmarks we return a single empty corner symbol We stress that our findingsincluding a novel architecture for grounding actions into an overhead map see Section UID28 should carry over to settings without the perfect perception assumption paragraph4 01ex plus01ex minus1ex1em Orientationagnosticism We opt to ignore the tourists orientation which simplifies the set of actions to Left Right Up Down corresponding to adding 1 0 1 0 0 1 0 1 to the current INLINEFORM0 coordinates respectively Note that actions are now coupled to an orientation on the mapeg up is equal to going northand this implicitly assumes that the tourist has access to a compass This also affects perception since the tourist now has access to views from all orientations in conjunction with perfect perception implying that only landmarks at the current corner are given whereas landmarks from different corners eg across the street are not visible Even with these simplifications the localizationbased baseline comes with its own set of challenges As we show in Section SECREF34 the task requires communication about a short random pathie not only a sequence of observations but also actionsin order to achieve high localization accuracy This means that the guide needs to decode observations from multiple time steps as well as understand their 2D spatial arrangement as 
communicated via the sequence of actions Thus in order to get to a good understanding of the task we thoroughly examine whether the agents can learn a communication protocol that simultaneously grounds observations and actions into the guides map In doing so we thoroughly study the role of the communication channel in the localization task by investigating increasingly constrained forms of communication from differentiable continuous vectors to emergent discrete symbols to the full complexity of natural language The full navigation baseline hinges on a localization model from random trajectories While we can sample random actions in the emergent communication setup this is not possible for the natural language setup because the messages are coupled to the trajectories of the human annotators This leads to slightly different problem setups as described below paragraph4 01ex plus01ex minus1ex1em Emergent language A tourist starting from a random location takes INLINEFORM0 random actions INLINEFORM1 to reach target location INLINEFORM2 Every location in the environment has a corresponding set of landmarks INLINEFORM3 for each of the INLINEFORM4 coordinates As the tourist navigates the agent perceives INLINEFORM5 stateobservations INLINEFORM6 where each observation INLINEFORM7 consists of a set of INLINEFORM8 landmark symbols INLINEFORM9 Given the observations INLINEFORM10 and actions INLINEFORM11 the tourist generates a message INLINEFORM12 which is communicated to the other agent The objective of the guide is to predict the location INLINEFORM13 from the tourists message INLINEFORM14 paragraph4 01ex plus01ex minus1ex1em Natural language In contrast to our emergent communication experiments we do not take random actions but instead extract actions observations and messages from the dataset Specifically we consider each tourist utterance ie at any point in the dialogue obtain the current tourist location as target location INLINEFORM0 the utterance itself as message INLINEFORM1 and the sequence of observations and actions that took place between the current and previous tourist utterance as INLINEFORM2 and INLINEFORM3 respectively Similar to the emergent language setting the guides objective is to predict the target location INLINEFORM4 models from the tourist message INLINEFORM5 We conduct experiments with INLINEFORM6 taken from the dataset and with INLINEFORM7 generated from the extracted observations INLINEFORM8 and actions INLINEFORM9 This section outlines the tourist and guide architectures We first describe how the tourist produces messages for the various communication channels across which the messages are sent We subsequently describe how these messages are processed by the guide and introduce the novel Masked Attention for Spatial Convolutions MASC mechanism that allows for grounding into the 2D overhead map in order to predict the tourists location For each of the communication channels we outline the procedure for generating a message INLINEFORM0 Given a set of state observations INLINEFORM1 we represent each observation by summing the INLINEFORM2 dimensional embeddings of the observed landmarks ie for INLINEFORM3 INLINEFORM4 where INLINEFORM5 is the landmark embedding lookup table In addition we embed action INLINEFORM6 into a INLINEFORM7 dimensional embedding INLINEFORM8 via a lookup table INLINEFORM9 We experiment with three types of communication channel paragraph4 01ex plus01ex minus1ex1em Continuous vectors The tourist has access to observations of several time steps whose 
order is important for accurate localization Because summing embeddings is orderinvariant we introduce a sum over positionallygated embeddings which conditioned on time step INLINEFORM0 pushes embedding information into the appropriate dimensions More specifically we generate an observation message INLINEFORM1 where INLINEFORM2 is a learned gating vector for time step INLINEFORM3 In a similar fashion we produce action message INLINEFORM4 and send the concatenated vectors INLINEFORM5 as message to the guide We can interpret continuous vector communication as a single monolithic model because its architecture is endtoend differentiable enabling gradientbased optimization for training paragraph4 01ex plus01ex minus1ex1em Discrete symbols Like the continuous vector communication model with discrete communication the tourist also uses separate channels for observations and actions as well as a sum over positionally gated embeddings to generate observation embedding INLINEFORM0 We pass this embedding through a sigmoid and generate a message INLINEFORM1 by sampling from the resulting Bernoulli distributions INLINEFORM0 The action message INLINEFORM0 is produced in the same way and we obtain the final tourist message INLINEFORM1 through concatenating the messages The communication channels sampling operation yields the model nondifferentiable so we use policy gradients BIBREF9 BIBREF10 to train the parameters INLINEFORM0 of the tourist model That is we estimate the gradient by INLINEFORM1 where the reward function INLINEFORM0 is the negative guides loss see Section SECREF25 and INLINEFORM1 a statevalue baseline to reduce variance We use a linear transformation over the concatenated embeddings as baseline prediction ie INLINEFORM2 and train it with a mean squared error loss paragraph4 01ex plus01ex minus1ex1em Natural Language Because observations and actions are of variablelength we use an LSTM encoder over the sequence of observations embeddings INLINEFORM0 and extract its last hidden state INLINEFORM1 We use a separate LSTM encoder for action embeddings INLINEFORM2 and concatenate both INLINEFORM3 and INLINEFORM4 to the input of the LSTM decoder at each time step DISPLAYFORM0 where INLINEFORM0 a lookup table taking input tokens INLINEFORM1 We train with teacherforcing ie we optimize the crossentropy loss INLINEFORM2 At test time we explore the following decoding strategies greedy sampling and a beamsearch We also finetune a trained tourist model starting from a pretrained model with policy gradients in order to minimize the guides prediction loss Given a tourist message INLINEFORM0 describing their observations and actions the objective of the guide is to predict the tourists location on the map First we outline the procedure for extracting observation embedding INLINEFORM1 and action embeddings INLINEFORM2 from the message INLINEFORM3 for each of the types of communication Next we discuss the MASC mechanism that takes the observations and actions in order to ground them on the guides map in order to predict the tourists location paragraph4 01ex plus01ex minus1ex1em Continuous For the continuous communication model we assign the observation message to the observation embedding ie INLINEFORM0 To extract the action embedding for time step INLINEFORM1 we apply a linear layer to the action message ie INLINEFORM2 paragraph4 01ex plus01ex minus1ex1em Discrete For discrete communication we obtain observation INLINEFORM0 by applying a linear layer to the observation message ie INLINEFORM1 Similar to the 
continuous communication model we use a linear layer over action message INLINEFORM2 to obtain action embedding INLINEFORM3 for time step INLINEFORM4 paragraph4 01ex plus01ex minus1ex1em Natural Language The message INLINEFORM0 contains information about observations and actions so we use a recurrent neural network with attention mechanism to extract the relevant observation and action embeddings Specifically we encode the message INLINEFORM1 consisting of INLINEFORM2 tokens INLINEFORM3 taken from vocabulary INLINEFORM4 with a bidirectional LSTM DISPLAYFORM0 where INLINEFORM0 is the word embedding lookup table We obtain observation embedding INLINEFORM1 through an attention mechanism over the hidden states INLINEFORM2 DISPLAYFORM0 where INLINEFORM0 is a learned control embedding who is updated through a linear transformation of the previous control and observation embedding INLINEFORM1 We use the same mechanism to extract the action embedding INLINEFORM2 from the hidden states For the observation embedding we obtain the final representation by summing positionally gated embeddings ie INLINEFORM3 We represent the guides map as INLINEFORM0 where in this case INLINEFORM1 where each INLINEFORM2 dimensional INLINEFORM3 location embedding INLINEFORM4 is computed as the sum of the guides landmark embeddings for that location paragraph4 01ex plus01ex minus1ex1em Motivation While the guides map representation contains only local landmark information the tourist communicates a trajectory of the map ie actions and observations from multiple locations implying that directly comparing the tourists message with the individual landmark embeddings is probably suboptimal Instead we want to aggregate landmark information from surrounding locations by imputing trajectories over the map to predict locations We propose a mechanism for translating landmark embeddings according to state transitions left right up down which can be expressed as a 2D convolution over the map embeddings For simplicity let us assume that the map embedding INLINEFORM0 is 1dimensional then a left action can be realized through application of the following INLINEFORM1 kernel INLINEFORM2 which effectively shifts all values of INLINEFORM3 one position to the left We propose to learn such statetransitions from the tourist message through a differentiable attentionmask over the spatial dimensions of a 3x3 convolution paragraph4 01ex plus01ex minus1ex1em MASC We linearly project each predicted action embedding INLINEFORM0 to a 9dimensional vector INLINEFORM1 normalize it by a softmax and subsequently reshape the vector into a 3x3 mask INLINEFORM2 DISPLAYFORM0 We learn a 3x3 convolutional kernel INLINEFORM0 with INLINEFORM1 features and apply the mask INLINEFORM2 to the spatial dimensions of the convolution by first broadcasting its values along the feature dimensions ie INLINEFORM3 and subsequently taking the Hadamard product INLINEFORM4 For each action step INLINEFORM5 we then apply a 2D convolution with masked weight INLINEFORM6 to obtain a new map embedding INLINEFORM7 where we zeropad the input to maintain identical spatial dimensions paragraph4 01ex plus01ex minus1ex1em Prediction model We repeat the MASC operation INLINEFORM0 times ie once for each action and then aggregate the map embeddings by a sum over positionallygated embeddings INLINEFORM1 We score locations by taking the dotproduct of the observation embedding INLINEFORM2 which contains information about the sequence of observed landmarks by the tourist and the map We compute a 
distribution over the locations of the map INLINEFORM3 by taking a softmax over the computed scores DISPLAYFORM0 paragraph4 01ex plus01ex minus1ex1em Predicting T While emergent communication models use a fixed length trasjectory INLINEFORM0 natural language messages may differ in the number of communicated observations and actions Hence we predict INLINEFORM1 from the communicated message Specifically we use a softmax regression layer over the last hidden state INLINEFORM2 of the RNN and subsequently sample INLINEFORM3 from the resulting multinomial distribution DISPLAYFORM0 We jointly train the INLINEFORM0 prediction model via REINFORCE with the guides loss as reward function and a meanreward baseline To better analyze the performance of the models incorporating MASC we compare against a noMASC baseline in our experiments as well as a prediction upper bound paragraph4 01ex plus01ex minus1ex1em No MASC We compare the proposed MASC model with a model that does not include this mechanism Whereas MASC predicts a convolution mask from the tourist message the No MASC model uses INLINEFORM0 the ordinary convolutional kernel to convolve the map embedding INLINEFORM1 to obtain INLINEFORM2 We also share the weights of this convolution at each time step paragraph4 01ex plus01ex minus1ex1em Prediction upperbound Because we have access to the classconditional likelihood INLINEFORM0 we are able to compute the Bayes error rate or irreducible error No model no matter how expressive with any amount of data can ever obtain better localization accuracy as there are multiple locations consistent with the observations and actions In this section we describe the findings of various experiments First we analyze how much information needs to be communicated for accurate localization in the Talk The Walk environment and find that a short random path including actions is necessary Next for emergent language we show that the MASC architecture can achieve very high localization accuracy significantly outperforming the baseline that does not include this mechanism We then turn our attention to the natural language experiments and find that localization from human utterances is much harder reaching an accuracy level that is below communicating a single landmark observation We show that generated utterances from a conditional language model leads to significantly better localization performance by successfully grounding the utterance on a single landmark observation but not yet on multiple observations and actions Finally we show performance of the localization baseline on the full task which can be used for future comparisons to this work paragraph4 01ex plus01ex minus1ex1em Task is not too easy The upperbound on localization performance in Table TABREF32 suggest that communicating a single landmark observation is not sufficient for accurate localization of the tourist INLINEFORM0 35 accuracy This is an important result for the full navigation task because the need for twoway communication disappears if localization is too easy if the guide knows the exact location of the tourist it suffices to communicate a list of instructions which is then executed by the tourist The uncertainty in the tourists location is what drives the dialogue between the two agents paragraph4 01ex plus01ex minus1ex1em Importance of actions We observe that the upperbound for only communicating observations plateaus around 57 even for INLINEFORM0 actions whereas it exceeds 90 when we also take actions into account This implies that at least for 
random walks it is essential to communicate a trajectory including observations and actions in order to achieve high localization accuracy We first report the results for tourist localization with emergent language in Table TABREF32 paragraph4 01ex plus01ex minus1ex1em MASC improves performance The MASC architecture significantly improves performance compared to models that do not include this mechanism For instance for INLINEFORM0 action MASC already achieves 5609 on the test set and this further increases to 6985 for INLINEFORM1 On the other hand noMASC models hit a plateau at 43 In Appendix SECREF11 we analyze learned MASC values and show that communicated actions are often mapped to corresponding statetransitions paragraph4 01ex plus01ex minus1ex1em Continuous vs discrete We observe similar performance for continuous and discrete emergent communication models implying that a discrete communication channel is not a limiting factor for localization performance We report the results of tourist localization with natural language in Table TABREF36 We compare accuracy of the guide model with MASC trained on utterances from i humans ii a supervised model with various decoding strategies and iii a policy gradient model optimized with respect to the loss of a frozen pretrained guide model on human utterances paragraph4 01ex plus01ex minus1ex1em Human utterances Compared to emergent language localization from human utterances is much harder achieving only INLINEFORM0 on the test set Here we report localization from a single utterance but in Appendix SECREF45 we show that including up to five dialogue utterances only improves performance to INLINEFORM1 We also show that MASC outperform noMASC models for natural language communication paragraph4 01ex plus01ex minus1ex1em Generated utterances We also investigate generated tourist utterances from conditional language models Interestingly we observe that the supervised model with greedy and beamsearch decoding as well as the policy gradient model leads to an improvement of more than 10 accuracy points over the human utterances However their level of accuracy is slightly below the baseline of communicating a single observation indicating that these models only learn to ground utterances in a single landmark observation paragraph4 01ex plus01ex minus1ex1em Better grounding of generated utterances We analyze natural language samples in Table TABREF38 and confirm that unlike human utterances the generated utterances are talking about the observed landmarks This observation explains why the generated utterances obtain higher localization accuracy The current language models are most successful when conditioned on a single landmark observation We show in Appendix UID43 that performance quickly deteriorates when the model is conditioned on more observations suggesting that it can not produce natural language utterances about multiple time steps Table TABREF36 shows results for the best localization models on the full task evaluated via the random walk protocol defined in Algorithm SECREF12 paragraph4 01ex plus01ex minus1ex1em Comparison with human annotators Interestingly our best localization model continuous communication with MASC and INLINEFORM0 achieves 8833 on the test set and thus exceed human performance of 7674 on the full task While emergent models appear to be stronger localizers humans might cope with their localization uncertainty through other mechanisms eg better guidance bias towards taking particular paths etc The simplifying assumption of 
perfect perception also helps paragraph4 01ex plus01ex minus1ex1em Number of actions Unsurprisingly humans take fewer steps roughly 15 than our best random walk model roughly 34 Our human annotators likely used some form of guidance to navigate faster to the target We introduced the Talk The Walk task and dataset which consists of crowdsourced dialogues in which two human annotators collaborate to navigate to target locations in the virtual streets of NYC For the important localization subtask we proposed MASCa novel grounding mechanism to learn statetransition from the tourists messageand showed that it improves localization performance for emergent and natural language We use the localization model to provide baseline numbers on the Talk The Walk task in order to facilitate future research The Talk the Walk task and dataset facilitate future research on various important subfields of artificial intelligence including grounded language learning goaloriented dialogue research and situated navigation Here we describe related previous work in these areas paragraph4 01ex plus01ex minus1ex1em Related tasks There has been a long line of work involving related tasks Early work on taskoriented dialogue dates back to the early 90s with the introduction of the Map Task BIBREF11 and Maze Game BIBREF25 corpora Recent efforts have led to largerscale goaloriented dialogue datasets for instance to aid research on visuallygrounded dialogue BIBREF2 BIBREF1 knowledgebasegrounded discourse BIBREF29 or negotiation tasks BIBREF36 At the same time there has been a big push to develop environments for embodied AI many of which involve agents following natural language instructions with respect to an environment BIBREF13 BIBREF50 BIBREF5 BIBREF39 BIBREF19 BIBREF18 followingup on early work in this area BIBREF38 BIBREF20 An early example of navigation using neural networks is BIBREF28 who propose an online learning approach for robot navigation Recently there has been increased interest in using endtoend trainable neural networks for learning to navigate indoor scenes BIBREF27 BIBREF26 or large cities BIBREF17 BIBREF40 but unlike our work without multiagent communication Also the task of localization without multiagent communication has recently been studied BIBREF18 BIBREF48 paragraph4 01ex plus01ex minus1ex1em Grounded language learning Grounded language learning is motivated by the observation that humans learn language embodied grounded in sensorimotor experience of the physical world BIBREF15 BIBREF45 On the one hand work in multimodal semantics has shown that grounding can lead to practical improvements on various natural language understanding tasks BIBREF14 BIBREF31 In robotics researchers dissatisfied with purely symbolic accounts of meaning attempted to build robotic systems with the aim of grounding meaning in physical experience of the world BIBREF44 BIBREF46 Recently grounding has also been applied to the learning of sentence representations BIBREF32 image captioning BIBREF37 BIBREF49 visual question answering BIBREF12 BIBREF22 visual reasoning BIBREF30 BIBREF42 and grounded machine translation BIBREF43 BIBREF23 Grounding also plays a crucial role in the emergent research of multiagent communication where agents communicate in natural language or otherwise in order to solve a task with respect to their shared environment BIBREF35 BIBREF21 BIBREF41 BIBREF24 BIBREF36 BIBREF47 BIBREF34 For the emergent communication models we use an embedding size INLINEFORM0 The natural language experiments use 
128dimensional word embeddings and a bidirectional RNN with 256 units In all experiments we train the guide with a cross entropy loss using the ADAM optimizer with default hyperparameters BIBREF33 We perform early stopping on the validation accuracy and report the corresponding train valid and test accuracy We optimize the localization models with continuous discrete and natural language communication channels for 200 200 and 25 epochs respectively To facilitate further research on Talk The Walk we make our code base for reproducing experiments publicly available at httpsgithubcomfacebookresearchtalkthewalk First we investigate the sensitivity of tourist generation models to the trajectory length finding that the model conditioned on a single observation ie INLINEFORM0 achieves best performance In the next subsection we further analyze localization models from human utterances by investigating MASC and noMASC models with increasing dialogue context After training the supervised tourist model conditioned on observations and action from human expert trajectories there are two ways to train an accompanying guide model We can optimize a location prediction model on either i extracted human trajectories as in the localization setup from human utterances or ii on all random paths of length INLINEFORM0 as in the full task evaluation Here we investigate the impact of 1 using either human or random trajectories for training the guide model and 2 the effect of varying the path length INLINEFORM1 during the fulltask evaluation For random trajectories guide training uses the same path length INLINEFORM2 as is used during evaluation We use a pretrained tourist model with greedy decoding for generating the tourist utterances Table TABREF40 summarizes the results paragraph4 01ex plus01ex minus1ex1em Human vs random trajectories We only observe small improvements for training on random trajectories Human trajectories are thus diverse enough to generalize to random trajectories paragraph4 01ex plus01ex minus1ex1em Effect of path length There is a strong negative correlation between task success and the conditioned trajectory length We observe that the full task performance quickly deteriorates for both human and random trajectories This suggests that the tourist generation model can not produce natural language utterances that describe multiple observations and actions Although it is possible that the guide model can not process such utterances this is not very likely because the MASC architectures handles such messages successfully for emergent communication We report localization performance of tourist utterances generated by beam search decoding of varying beam size in Table TABREF40 We find that performance decreases from 2905 to 2087 accuracy on the test set when we increase the beamsize from one to eight We conduct an ablation study for MASC on natural language with varying dialogue context Specifically we compare localization accuracy of MASC and noMASC models trained on the last 1 3 5 utterances of the dialogue including guide utterances We report these results in Table TABREF41 In all cases MASC outperforms the noMASC models by several accuracy points We also observe that mean predicted INLINEFORM0 over the test set increases from 1 to 2 when more dialogue context is included Figure FIGREF46 shows the MASC values for a learned model with emergent discrete communications and INLINEFORM0 actions Specifically we look at the predicted MASC values for different action sequences taken by the tourist We 
observe that the first action is always mapped to the correct statetransition but that the second and third MASC values do not always correspond to right statetransitions We provide pseudocode for evaluation of localization models on the full task in Algorithm SECREF12 as well as results for all emergent communication models in Table TABREF55 INLINEFORM0 INLINEFORM1 INLINEFORM0 take new action INLINEFORM1 INLINEFORM2 Performance evaluation of location prediction model on full Talk The Walk setup While the guide has access to the landmark labels the tourist needs to recognize these landmarks from raw perceptual information In this section we study landmark classification as a supervised learning problem to investigate the difficulty of perceptual grounding in Talk The Walk The Talk The Walk dataset contains a total of 307 different landmarks divided among nine classes see Figure FIGREF62 for how they are distributed The class distribution is fairly imbalanced with shops and restaurants as the most frequent landmarks and relatively few play fields and theaters We treat landmark recognition as a multilabel classification problem as there can be multiple landmarks on a corner For the task of landmark classification we extract the relevant views of the 360 image from which a landmark is visible Because landmarks are labeled to be on a specific corner of an intersection we assume that they are visible from one of the orientations facing away from the intersection For example for a landmark on the northwest corner of an intersection we extract views from both the north and west direction The orientationspecific views are obtained by a planar projection of the full 360image with a small field of view 60 degrees to limit distortions To cover the full field of view we extract two images per orientation with their horizontal focus point 30 degrees apart Hence we obtain eight images per 360 image with corresponding orientation INLINEFORM0 We run the following pretrained feature extractors over the extracted images For the text recognition model we use a learned lookup table INLINEFORM0 to embed the extracted text features INLINEFORM1 and fuse all embeddings of four images through a bag of embeddings ie INLINEFORM2 We use a linear layer followed by a sigmoid to predict the probability for each class ie INLINEFORM3 We also experiment with replacing the lookup embeddings with pretrained FastText embeddings BIBREF16 For the ResNet model we use a bag of embeddings over the four ResNet features ie INLINEFORM4 before we pass it through a linear layer to predict the class probabilities INLINEFORM5 We also conduct experiments where we first apply PCA to the extracted ResNet and FastText features before we feed them to the model To account for class imbalance we train all described models with a binary cross entropy loss weighted by the inverted class frequency We create a 8020 classconditional split of the dataset into a training and validation set We train for 100 epochs and perform early stopping on the validation loss The F1 scores for the described methods in Table TABREF65 We compare to an all positive baseline that always predicts that the landmark class is visible and observe that all presented models struggle to outperform this baseline Although 256dimensional ResNet features achieve slightly better precision on the validation set it results in much worse recall and a lower F1 score Our results indicate that perceptual grounding is a difficult task which easily merits a paper of its own right and so we 
leave further improvements eg better text recognizers for future work paragraph4 01ex plus01ex minus1ex1em Dataset split We split the full dataset by assigning entire 4x4 grids independent of the target location to the train valid or test set Specifically we design the split such that the valid set contains at least one intersection out of four is not part of the train set For the test set all four intersections are novel See our source code available at URL ANONYMIZED for more details on how this split is realized paragraph4 01ex plus01ex minus1ex1em Example Tourist ACTIONTURNRIGHT ACTIONTURNRIGHT Guide Hello what are you near Tourist ACTIONTURNLEFT ACTIONTURNLEFT ACTIONTURNLEFT Tourist Hello in front of me is a Brooks Brothers Tourist ACTIONTURNLEFT ACTIONFORWARD ACTIONTURNLEFT ACTIONTURNLEFT Guide Is that a shop or restaurant Tourist ACTIONTURNLEFT Tourist It is a clothing shop Tourist ACTIONTURNLEFT Guide You need to go to the intersection in the northwest corner of the map Tourist ACTIONTURNLEFT Tourist There appears to be a bank behind me Tourist ACTIONTURNLEFT ACTIONTURNLEFT ACTIONTURNRIGHT ACTIONTURNRIGHT Guide Ok turn left then go straight up that road Tourist ACTIONTURNLEFT ACTIONTURNLEFT ACTIONTURNLEFT ACTIONFORWARD ACTIONTURNRIGHT ACTIONFORWARD ACTIONFORWARD ACTIONTURNLEFT ACTIONTURNLEFT ACTIONTURNLEFT Guide There should be shops on two of the corners but you need to go to the corner without a shop Tourist ACTIONFORWARD ACTIONFORWARD ACTIONFORWARD ACTIONTURNLEFT ACTIONTURNLEFT Guide let me know when you get there Tourist on my left is Radio city Music hall Tourist ACTIONTURNLEFT ACTIONFORWARD ACTIONTURNLEFT ACTIONTURNRIGHT ACTIONTURNRIGHT Tourist I cant go straight any further Guide ok turn so that the theater is on your right Guide then go straight Tourist That would be going back the way I came Guide yeah I was looking at the wrong bank Tourist Ill notify when I am back at the brooks brothers and the bank Tourist ACTIONTURNRIGHT Guide make a right when the bank is on your left Tourist ACTIONFORWARD ACTIONFORWARD ACTIONTURNRIGHT Tourist Making the right at the bank Tourist ACTIONFORWARD ACTIONFORWARD Tourist I cant go that way Tourist ACTIONTURNLEFT Tourist Bank is ahead of me on the right Tourist ACTIONFORWARD ACTIONFORWARD ACTIONTURNLEFT Guide turn around on that intersection Tourist I can only go to the left or back the way I just came Tourist ACTIONTURNLEFT Guide youre in the right place do you see shops on the corners Guide If youre on the corner with the bank cross the street Tourist Im back where I started by the shop and the bank Tourist ACTIONTURNRIGHT Guide on the same side of the street Tourist crossing the street now Tourist ACTIONFORWARD ACTIONFORWARD ACTIONTURNLEFT Tourist there is an I love new york shop across the street on the left from me now Tourist ACTIONTURNRIGHT ACTIONFORWARD Guide ok Ill see if its right Guide EVALUATELOCATION Guide Its not right Tourist What should I be on the look for Tourist ACTIONTURNRIGHT ACTIONTURNRIGHT ACTIONTURNRIGHT Guide There should be shops on two corners but you need to be on one of the corners without the shop Guide Try the other corner Tourist this intersection has 2 shop corners and a bank corner Guide yes thats what I see on the map Tourist should I go to the bank corner or one of the shop corners or the blank corner perhaps a hotel Tourist ACTIONTURNLEFT ACTIONTURNLEFT ACTIONTURNRIGHT ACTIONTURNRIGHT Guide Go to the one near the hotel The map says the hotel is a little further down but it might be a little off Tourist 
Its a big hotel its possible Tourist ACTIONFORWARD ACTIONTURNLEFT ACTIONFORWARD ACTIONTURNRIGHT Tourist Im on the hotel corner Guide EVALUATELOCATION </s>
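The landmark classifier described earlier in this section reduces to a very small model: per-view feature vectors (FastText text embeddings or ResNet activations) are pooled with a bag of embeddings, passed through a linear layer with a sigmoid, and trained with a binary cross-entropy loss weighted by the inverted class frequency. The sketch below illustrates that setup; it is not the authors' code, and the feature dimensionality, class frequencies, and toy batch are illustrative assumptions.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 9     # nine landmark classes in Talk The Walk
FEATURE_DIM = 256   # assumed size, e.g. PCA-reduced ResNet features

class LandmarkClassifier(nn.Module):
    def __init__(self, feature_dim, num_classes):
        super().__init__()
        self.linear = nn.Linear(feature_dim, num_classes)

    def forward(self, views):
        # views: (batch, num_views, feature_dim); bag of embeddings = mean over views
        bag = views.mean(dim=1)
        return self.linear(bag)  # logits; the sigmoid is folded into the loss below

# Hypothetical landmark counts used to derive inverted-class-frequency weights.
class_freq = torch.tensor([120., 80., 60., 40., 30., 25., 15., 10., 9.])
pos_weight = class_freq.sum() / class_freq

model = LandmarkClassifier(FEATURE_DIM, NUM_CLASSES)
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)  # weighted binary cross-entropy

# Toy batch: 4 corners, 4 extracted views each, multi-label landmark targets.
features = torch.randn(4, 4, FEATURE_DIM)
targets = torch.randint(0, 2, (4, NUM_CLASSES)).float()
loss = criterion(model(features), targets)
loss.backward()
```

In practice the per-view features would come from the pretrained extractors described above, and training would use the 80/20 class-conditional split with early stopping on validation loss rather than a single random batch.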
<s>[INST] Real-time Claim Detection from News Articles and Retrieval of Semantically-Similar Factchecks [/INST] In recent years the spread of misinformation has become a growing concern for researchers and the public at large BIBREF1 Researchers at MIT found that social media users are more likely to share false information than true information BIBREF2 Due to renewed focus on finding ways to foster healthy political conversation the profile of factcheckers has been raised Factcheckers positively influence public debate by publishing good quality information and asking politicians and journalists to retract misleading or false statements By calling out lies and the blurring of the truth they make those in positions of power accountable This is a result of labour intensive work that involves monitoring the news for spurious claims and carrying out rigorous research to judge credibility So far it has only been possible to scale their output upwards by hiring more personnel This is problematic because newsrooms need significant resources to employ factcheckers Publication budgets have been decreasing resulting in a steady decline in the size of their workforce BIBREF0 Factchecking is not a directly profitable activity which negatively affects the allocation of resources towards it in forprofit organisations It is often taken on by charities and philanthropists instead To compensate for this shortfall our strategy is to harness the latest developments in NLP to make factchecking more efficient and therefore less costly To this end the new field of automated factchecking has captured the imagination of both nonprofits and startups BIBREF3 BIBREF4 BIBREF5 It aims to speed up certain aspects of the factchecking process rather than create AI that can replace factchecking personnel This includes monitoring claims that are made in the news aiding decisions about which statements are the most important to check and automatically retrieving existing factchecks that are relevant to a new claim The claim detection and claim clustering methods that we set out in this paper can be applied to each of these We sought to devise a system that would automatically detect claims in articles and compare them to previously submitted claims Storing the results to allow a factcheckers work on one of these claims to be easily transferred to others in the same cluster It is important to decide what sentences are claims before attempting to cluster them The first such claim detection system to have been created is ClaimBuster BIBREF6 which scores sentences with an SVM to determine how likely they are to be politically pertinent statements Similarly ClaimRank BIBREF7 uses real claims checked by factchecking institutions as training data in order to surface sentences that are worthy of factchecking These methods deal with the question of what is a politically interesting claim In order to classify the objective qualities of what set apart different types of claims the ClaimBuster team created PolitiTax BIBREF8 a taxonomy of claims and factchecking organisation Full Fact BIBREF9 developed their preferred annotation schema for statements in consultation with their own factcheckers This research provides a more solid framework within which to construct claim detection classifiers The above considers whether or not a sentence is a claim but often claims are subsections of sentences and multiple claims might be found in one sentence In order to accommodate this BIBREF10 proposes extracting phrases called Context Dependent 
Claims CDC that are relevant to a certain Topic Along these lines BIBREF11 proposes new definitions for frames to be incorporated into FrameNet BIBREF12 that are specific to facts in particular those found in a political context Traditional text clustering methods using TFIDF and some clustering algorithm are poorly suited to the problem of clustering and comparing short texts as they can be semantically very similar but use different words This is a manifestation of the the data sparsity problem with BagofWords BoW models BIBREF16 Dimensionality reduction methods such as Latent Dirichlet Allocation LDA can help solve this problem by giving a dense approximation of this sparse representation BIBREF17 More recently efforts in this area have used text embeddingbased systems in order to capture dense representation of the texts BIBREF18 Much of this recent work has relied on the increase of focus in word and text embeddings Text embeddings have been an increasingly popular tool in NLP since the introduction of Word2Vec BIBREF19 and since then the number of different embeddings has exploded While many focus on giving a vector representation of a word an increasing number now exist that will give a vector representation of a entire sentence or text Following on from this work we seek to devise a system that can run online performing text clustering on the embeddings of texts one at a time Some considerations to bear in mind when deciding on an embedding scheme to use are the size of the final vector the complexity of the model itself and if using a pretrained implementation the data the model has been trained on and whether it is trained in a supervised or unsupervised manner The size of the embedding can have numerous results downstream In our example we will be doing distance calculations on the resultant vectors and therefore any increase in length will increase the complexity of those distance calculations We would therefore like as short a vector as possible but we still wish to capture all salient information about the claim longer vectors have more capacity to store information both salient and nonsalient A similar effect is seen for the complexity of the model A more complicated model with more trainable parameters may be able to capture finer details about the text but it will require a larger corpus to achieve this and will require more computational time to calculate the embeddings We should therefore attempt to find the simplest embedding system that can accurately solve our problem When attempting to use pretrained models to help in other areas it is always important to ensure that the models you are using are trained on similar material to increase the chance that their findings will generalise to the new problem Many unsupervised text embeddings are trained on the CommonCrawl dataset of approx 840 billion tokens This gives a huge amount of data across many domains but requires a similarly huge amount of computing power to train on the entire dataset Supervised datasets are unlikely ever to approach such scale as they require human annotations which can be expensive to assemble The SNLI entailment dataset is an example of a large open source dataset BIBREF20 It features pairs of sentences along with labels specifying whether or not one entails the other Googles Universal Sentence Encoder USE BIBREF14 is a sentence embedding created with a hybrid supervisedunsupervised method leveraging both the vast amounts of unsupervised training data and the extra detail that can be derived 
from a supervised method The SNLI dataset and the related MultiNLI dataset are often used for this because textual entailment is seen as a good basis for general Natural Language Understanding NLU BIBREF21 It is much easier to build a dataset and reliably evaluate a model if the starting definitions are clear and objective Questions around what is an interesting or pertinent claim are inherently subjective For example it is obvious that a politician will judge their opponents claims to be more important to factcheck than their own Therefore we built on the methodologies that dealt with the objective qualities of claims which were the PolitiTax and Full Fact taxonomies We annotated sentences from our own database of news articles based on a combination of these We also used the Full Fact definition of a claim as a statement about the world that can be checked Some examples of claims according to this definition are shown in Table TABREF3 We decided the first statement was a claim since it declares the occurrence of an event while the second was considered not to be a claim as it is an expression of feeling Full Facts approach centred around using sentence embeddings as a feature engineering step followed by a simple classifier such as logistic regression which is what we used They used Facebooks sentence embeddings InferSent BIBREF13 which was a recent breakthrough at the time Such is the speed of new development in the field that since then several papers describing textual embeddings have been published Due to the fact that we had already evaluated embeddings for clustering and therefore knew our system would rely on Google USE Large BIBREF14 we decided to use this instead We compared this to TFIDF and Full Facts results as baselines The results are displayed in Table TABREF4 However ClaimBuster and Full Fact focused on live factchecking of TV debates Logically is a news aggregator and we analyse the bodies of published news stories We found that in our corpus the majority of sentences are claims and therefore our model needed to be as selective as possible In practice we choose to filter out sentences that are predictions since generally the substance of the claim cannot be fully checked until after the event has occurred Likewise we try to remove claims based on personal experience or anecdotal evidence as they are difficult to verify In order to choose an embedding we sought a dataset to represent our problem Although no perfect matches exist we decided upon the Quora duplicate question dataset BIBREF22 as the best match To study the embeddings we computed the euclidean distance between the two questions using various embeddings to study the distance between semantically similar and dissimilar questions The graphs in figure 1 show the distances between duplicate and nonduplicate questions using different embedding systems The X axis shows the euclidean distance between vectors and the Y axis frequency A perfect result would be a blue peak to the left and an entirely disconnected orange spike to the right showing that all nonduplicate questions have a greater euclidean distance than the least similar duplicate pair of questions As can be clearly seen in the figure above Elmo BIBREF23 and Infersent BIBREF13 show almost no separation and therefore cannot be considered good models for this problem A much greater disparity is shown by the Google USE models BIBREF14 and even more for the Google USE Large model In fact the Google USE Large achieved a F1 score of 071 for this task without any 
specific training simply by choosing a threshold below which all sentence pairs are considered duplicates In order to test whether these results generalised to our domain we devised a test that would make use of what little data we had to evaluate We had no original data on whether sentences were semantically similar but we did have a corpus of articles clustered into stories Working on the assumption that similar claims would be more likely to be in the same story we developed an equation to judge how well our corpus of sentences was clustered rewarding clustering which matches the article clustering and the total number of claims clustered The precise formula is given below where INLINEFORM0 is the proportion of claims in clusters from one story cluster INLINEFORM1 is the proportion of claims in the correct claim cluster where they are from the most common story cluster and INLINEFORM2 is the number of claims placed in clusters AB and C are parameters to tune INLINEFORM3 figureFormula to assess the correctness of claim clusters based on article clusters This method is limited in how well it can represent the problem but it can give indications as to a good or bad clustering method or embedding and can act as a check that the findings we obtained from the Quora dataset will generalise to our domain We ran code which vectorized 2000 sentences and then used the DBScan clustering method BIBREF24 to cluster using a grid search to find the best INLINEFORM0 value maximizing this formula We used DBScan as it mirrored the clustering method used to derive the original article clusters The results for this experiment can be found in Table TABREF10 We included TFIDF in the experiment as a baseline to judge other results It is not suitable for our eventual purposes but it the basis of the original keywordbased model used to build the clusters That being said TFIDF performs very well with only Google USE Large and Infersent coming close in terms of accuracy In the case of Infersent this comes with the penalty of a much smaller number of claims included in the clusters Google USE Large however clusters a greater number and for this reason we chose to use Googles USE Large Since Google USE Large was the bestperforming embedding in both the tests we devised this was our chosen embedding to use for clustering However as can be seen from the results shown above this is not a perfect solution and the inaccuracy here will introduce inaccuracy further down the clustering pipeline We decided to follow a methodology upon the DBScan method of clustering BIBREF24 DBScan considers all distances between pairs of points If they are under INLINEFORM0 then those two are linked Once the number of connected points exceeds a minimum size threshold they are considered a cluster and all other points are considered to be unclustered This method is advantageous for our purposes because unlike other methods such as KMeans it does not require the number of clusters to be specified To create a system that can build clusters dynamically adding one point at a time we set the minimum cluster size to one meaning that every point is a member of a cluster A potential disadvantage of this method is that because points require only one connection to a cluster to join it they may only be related to one point in the cluster but be considered in the same cluster as all of them In small examples this is not a problem as all points in the cluster should be very similar However as the number of points being considered grows this behaviour 
raises the prospect of one or several borderline clustering decisions leading to massive clusters made from tenuous connections between genuine clusters To mitigate this problem we used a method described in the Newslens paper BIBREF25 to solve a similar problem when clustering entire articles We stored all of our claims in a graph with the connections between them added when the distance between them was determined to be less than INLINEFORM0 To determine the final clusters we run a Louvain Community Detection BIBREF26 over this graph to split it into defined communities This improved the compactness of a cluster When clustering claims one by one this algorithm can be performed on the connected subgraph featuring the new claim to reduce the computation required As this method involves distance calculations between the claim being added and every existing claim the time taken to add one claim will increase roughly linearly with respect to the number of previous claims Through much optimization we have brought the computational time down to approximately 300ms per claim which stays fairly static with respect to the number of previous claims The clustering described above is heavily dependent on the embedding used The rate of advances in this field has been rapid in recent years but an embedding will always be an imperfect representation of an claim and therefore always an area of improvement A domain specificembedding will likely offer a more accurate representation but creates problems with clustering claims from different domains They also require a huge amount of data to give a good model and that is not possible in all domains </s>
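The clustering pipeline just described can be sketched in a few lines: embed each claim, link every pair of claims whose embedding distance falls below a threshold, and split the resulting graph with Louvain community detection. The snippet below is a simplified illustration rather than the production system: the embed() function is a stand-in for Google USE Large, the threshold value is hypothetical, and networkx (version 2.8 or later for louvain_communities) substitutes for whatever graph tooling the authors used.

```python
import numpy as np
import networkx as nx  # louvain_communities requires networkx >= 2.8

EPSILON = 0.9  # hypothetical distance threshold; the real value is tuned

def embed(claims):
    # Stand-in for a sentence encoder such as Google USE Large (512-dim vectors).
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(claims), 512))

def cluster_claims(claims):
    vectors = embed(claims)
    graph = nx.Graph()
    graph.add_nodes_from(range(len(claims)))
    # Link every pair of claims whose euclidean distance is below the threshold.
    for i in range(len(claims)):
        for j in range(i + 1, len(claims)):
            if np.linalg.norm(vectors[i] - vectors[j]) < EPSILON:
                graph.add_edge(i, j)
    # Louvain community detection breaks tenuous chains into compact clusters.
    communities = nx.algorithms.community.louvain_communities(graph, seed=0)
    return [sorted(c) for c in communities]

claims = ["Crime fell by ten percent last year",
          "Crime dropped 10% in 2018",
          "The minister opened a new hospital"]
print(cluster_claims(claims))
```

When claims arrive one at a time, as in the online setting described above, community detection only needs to be rerun on the connected subgraph containing the new claim, which keeps the per-claim cost roughly constant.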
<s>[INST] RC-QED: Evaluating Natural Language Derivations in Multi-Hop Reading Comprehension [/INST] Reading comprehension RC has become a key benchmark for natural language understanding NLU systems and a large number of datasets are now available BIBREF0 BIBREF1 BIBREF2 However these datasets suffer from annotation artifacts and other biases which allow systems to cheat Instead of learning to read texts systems learn to exploit these biases and find answers via simple heuristics such as looking for an entity with a matching semantic type BIBREF3 BIBREF4 To give another example many RC datasets contain a large number of easy problems that can be solved by looking at the first few words of the question Sugawara2018 In order to provide a reliable measure of progress an RC dataset thus needs to be robust to such simple heuristics Towards this goal two important directions have been investigated One direction is to improve the dataset itself for example so that it requires an RC system to perform multihop inferences BIBREF0 or to generate answers BIBREF1 Another direction is to request a system to output additional information about answers Yang2018HotpotQAAnswering propose HotpotQA an explainable multihop Question Answering QA task that requires a system to identify a set of sentences containing supporting evidence for the given answer We follow the footsteps of Yang2018HotpotQAAnswering and explore an explainable multihop QA task In the community two important types of explanations have been explored so far BIBREF5 i introspective explanation how a decision is made and ii justification explanation collections of evidences to support the decision In this sense supporting facts in HotpotQA can be categorized as justification explanations The advantage of using justification explanations as benchmark is that the task can be reduced to a standard classification task which enables us to adopt standard evaluation metrics eg a classification accuracy However this task setting does not evaluate a machines ability to i extract relevant information from justification sentences and ii synthesize them to form coherent logical reasoning steps which are equally important for NLU To address this issue we propose RCQED an RC task that requires not only the answer to a question but also an introspective explanation in the form of a natural language derivation NLD For example given the question Which record company released the song Barracuda and supporting documents shown in Figure FIGREF1 a system needs to give the answer Portrait Records and to provide the following NLD 1 Barracuda is on Little Queen and 2 Little Queen was released by Portrait Records The main difference between our work and HotpotQA is that they identify a set of sentences lbrace s2s4rbrace while RCQED requires a system to generate its derivations in a correct order This generation task enables us to measure a machines logical reasoning ability mentioned above Due to its subjective nature of the natural language derivation task we evaluate the correctness of derivations generated by a system with multiple reference answers Our contributions can be summarized as follows We create a large corpus consisting of 12000 QA pairs and natural language derivations The developed crowdsourcing annotation framework can be used for annotating other QA datasets with derivations Through an experiment using two baseline models we highlight several challenges of RCQED We will make the corpus of reasoning annotations and the baseline system publicly 
available at httpsnaoyaigithubiorcqed We formally define RCQED as follows Given i a question Q and ii a set S of supporting documents relevant to Q Find i answerability s in lbrace textsf Answerable textsf Unanswerable rbrace ii an answer a and iii a sequence R of derivation steps We evaluate each prediction with the following evaluation metrics Answerability Correctness of models decision on answerability ie binary classification task evaluated by PrecisionRecallF1 Answer precision Correctness of predicted answers for Answerable predictions only We follow the standard practice of RC community for evaluation eg an accuracy in the case of multiple choice QA Derivation precision Correctness of generated NLDs evaluated by ROUGEL BIBREF6 RGL and BLEU4 BL4 BIBREF7 We follow the standard practice of evaluation for natural language generation BIBREF1 Derivation steps might be subjective so we resort to multiple reference answers This paper instantiates RCQED by employing multiple choice entitybased multihop QA BIBREF0 as a testbed henceforth RCQEDrm E In entitybased multihop QA machines need to combine relational facts between entities to derive an answer For example in Figure FIGREF1 understanding the facts about Barracuda Little Queen and Portrait Records stated in each article is required This design choice restricts a problem domain but it provides interesting challenges as discussed in Section SECREF46 In addition such entitybased chaining is known to account for the majority of reasoning types required for multihop reasoning BIBREF2 More formally given i a question Qr q represented by a binary relation r and an entity q question entity ii relevant articles S and iii a set C of candidate entities systems are required to output i an answerability s in lbrace textsf Answerable textsf Unanswerable rbrace ii an entity e in C answer entity that q r e holds and iii a sequence R of derivation steps as to why e is believed to be an answer We define derivation steps as an m chain of relational facts to derive an answer ie q r1 e1 e1 r2 e2 em1 rm1 em em rm em1 Although we restrict the form of knowledge to entity relations we use a natural language form to represent ri rather than a closed vocabulary see Figure FIGREF1 for an example To acquire a largescale corpus of NLDs we use crowdsourcing CS Although CS is a powerful tool for largescale dataset creation BIBREF2 BIBREF8 quality control for complex tasks is still challenging We thus carefully design an incentive structure for crowdworkers following Yang2018HotpotQAAnswering Initially we provide crowdworkers with an instruction with example annotations where we emphasize that they judge the truth of statements solely based on given articles not based on their own knowledge Given a statement and articles workers are asked to judge whether the statement can be derived from the articles at three grades True Likely ie Answerable or Unsure ie Unanswerable If a worker selects Unsure we ask workers to tell us why they are unsure from two choices Not stated in the article or Other If a worker selects True or Likely in the judgement task we first ask which sentences in the given articles are justification explanations for a given statement similarly to HotpotQA BIBREF2 The summary text boxes ie NLDs are then initialized with these selected sentences We give a 6 bonus to those workers who select True or Likely To encourage an abstraction of selected sentences we also introduce a gamification scheme to give a bonus to those who provide shorter NLDs Specifically 
we probabilistically give another 14 bonus to workers according to a score they gain The score is always shown on top of the screen and changes according to the length of NLDs they write in real time To discourage noisy annotations we also warn crowdworkers that their work would be rejected for noisy submissions We periodically run simple filtering to exclude noisy crowdworkers eg workers who give more than 50 submissions with the same answers We deployed the task on Amazon Mechanical Turk AMT To see how reasoning varies across workers we hire 3 crowdworkers per one instance We hire reliable crowdworkers with ge 5000 HITs experiences and an approval rate of ge 990 and pay 20 as a reward per instance Our data collection pipeline is expected to be applicable to other types of QAs other than entitybased multihop QA without any significant extensions because the interface is not specifically designed for entitycentric reasoning Our study uses WikiHop BIBREF0 as it is an entitybased multihop QA dataset and has been actively used We randomly sampled 10000 instances from 43738 training instances and 2000 instances from 5129 validation instances ie 36000 annotation tasks were published on AMT We manually converted structured WikiHop questionanswer pairs eg locatedInMacchu Picchu Peru into natural language statements Macchu Picchu is located in Peru using a simple conversion dictionary We use supporting documents provided by WikiHop WikiHop collects supporting documents by finding Wikipedia articles that bridges a question entity ei and an answer entity ej where the link between articles is given by a hyperlink Table TABREF17 shows the statistics of responses and example annotations Table TABREF17 also shows the abstractiveness of annotated NLDs a namely the number of tokens in an NLD divided by the number of tokens in its corresponding justification sentences This indicates that annotated NLDs are indeed summarized See Table TABREF53 in Appendix and Supplementary Material for more results To evaluate the quality of annotation results we publish another CS task on AMT We randomly sample 300 True and Likely responses in this evaluation Given NLDs and a statement 3 crowdworkers are asked if the NLDs can lead to the statement at four scale levels If the answer is 4 or 3 yes or likely we additionally asked whether each derivation step can be derived from each supporting document otherwise we asked them the reasons For a fair evaluation we encourage crowdworkers to annotate given NLDs with a lower score by stating that we give a bonus if they found a flaw of reasoning on the CS interface The evaluation results shown in Table TABREF24 indicate that the annotated NLDs are of high quality Reachability and each NLD is properly derived from supporting documents Derivability On the other hand we found the quality of 3step NLDs is relatively lower than the others Crowdworkers found that 453 of 294 out of 900 3step NLDs has missing steps to derive a statement Let us consider this example for annotated NLDs 1 Kouvola is located in Helsinki 2 Helsinki is in the region of Uusimaa 3 Uusimaa borders the regions Southwest Finland Kymenlaakso and some others and for the statement Kouvola is located in Kymenlaakso one worker pointed out the missing step Uusimaa is in Kymenlaakso We speculate that greater steps of reasoning make it difficult for crowdworkers to check the correctness of derivations during the writing task For agreement on the number of NLDs we obtained a Krippendorffs alpha of 0223 indicating a fair 
agreement BIBREF9 Our manual inspection of the 10 worst disagreements revealed that majority 710 come from Unsure vs nonUnsure It also revealed that crowdworkers who labeled nonUnsure are reliable6 out 7 nonUnsure annotations can be judged as correct This partially confirms the effectiveness of our incentive structure To highlight the challenges and nature of RCQEDrm E we create a simple transparent and interpretable baseline model Recent studies on knowledge graph completion KGC explore compositional inferences to combat with the sparsity of knowledge bases BIBREF10 BIBREF11 BIBREF12 Given a query triplet h r t eg Macchu Picchu locatedIn Peru a path rankingbased approach for KGC explicitly samples paths between h and t in a knowledge base eg Macchu PicchulocatedInAndes MountaincountryOfPeru and construct a feature vector of these paths This feature vector is then used to calculate the compatibility between the query triplet and the sampled paths RCQEDrm E can be naturally solved by path rankingbased KGC PRKGC where the query triplet and the sampled paths correspond to a question and derivation steps respectively PRKGC meets our purposes because of its glassboxness we can trace the derivation steps of the model easily Given supporting documents S we build a knowledge graph We first apply a coreference resolver to S and then create a directed graph GS Therein each node represents named entities NEs in S and each edge represents textual relations between NEs extracted from S Figure FIGREF27 illustrates an example of GS constructed from supporting documents in Figure FIGREF1 Given a question Qq r and a candidate entity ci we estimate the plausibility of q r ci as follows where sigma is a sigmoid function and mathbf q r ci mathbf pi q ci are vector representations of q r ci and a set pi q ci of shortest paths between q and ci on GS rm MLPcdot cdot denotes a multilayer perceptron To encode entities into vectors mathbf q ci we use LongShort Term Memory LSTM and take its last hidden state For example in Figure FIGREF27 q Barracuda and ci Portrait Records yield pi q ci lbrace Barracudais the most popular in their albumLittle Queenwas released in May 1977 onPortrait Records Barracudawas released from American band Heartis the second album released by1Little Queenwas released in May 1977 onPortrait Recordsrbrace To obtain path representations mathbf pi q ci we attentively aggregate individual path representations mathbf pi q ci sum j alpha j mathbf pi jq ci where alpha j is an attention for the jth path The attention values are calculated as follows alpha j exp rm scq r ci pi j sum k exp rm scq r ci pi k where rm scq r ci pi j rm MLPmathbf q mathbf r mathbf ci mathbf pi j To obtain individual path representations mathbf pi j we follow toutanovaetal2015representing We use a BiLSTM BIBREF13 with mean pooling over timestep in order to encourage similar paths to have similar path representations For the testing phase we choose a candidate entity ci with the maximum probability Prq ci as an answer entity and choose a path pi j with the maximum attention value alpha j as NLDs To generate NLDs we simply traverse the path from q to ci and subsequently concatenate all entities and textual relations as one string We output Unanswerable when i max ci in C Prq ci epsilon k or ii GS has no path between q and all ci in C Let mathcal K be a set of questionanswer pairs where each instance consists of a triplet a query entity qi a relation ri an answer entity ai Similarly let mathcal K be a set of questionnonanswer 
pairs We minimize the following binary crossentropy loss From the NLD point of view this is unsupervised training The model is expected to learn the score function rm sccdot to give higher scores to paths ie NLD steps that are useful for discriminating correct answers from wrong answers by its own Highly scored NLDs might be useful for answer classification but these are not guaranteed to be interpretable to humans To address the above issue we resort to goldstandard NLDs to guide the path scoring function rm sccdot Let mathcal D be questionanswer pairs coupled with goldstandard NLDs namely a binary vector mathbf pi where the jth value represents whether jth path corresponds to a goldstandard NLD 1 or not 0 We apply the following crossentropy loss to the path attention We aggregated crowdsourced annotations obtained in Section SECREF3 As a preprocessing we converted the NLD annotation to Unsure if the derivation contains the phrase needs to be mentioned This is due to the fact that annotators misunderstand our instruction When at least one crowdworker state that a statement is Unsure then we set the answerability to Unanswerable and discard NLD annotations Otherwise we employ all NLD annotations from workers as multiple reference NLDs The statistics is shown in Table TABREF36 Regarding mathcal K mathcal K we extracted 867936 instances from the training set of WikiHop BIBREF0 We reserve 10 of these instances as a validation set to find the best model For mathcal D we used Answerable questions in the training set To create supervision of path ie mathbf pi we selected the path that is most similar to all NLD annotations in terms of ROUGEL F1 We used 100dimensional vectors for entities relations and textual relation representations We initialize these representations with 100dimensional Glove Embeddings BIBREF14 and finetuned them during training We retain only top100000 frequent words as a model vocabulary We used BiLSTM with 50 dimensional hidden state as a textual relation encoder and an LSTM with 100dimensional hidden state as an entity encoder We used the Adam optimizer default parameters BIBREF15 with a batch size of 32 We set the answerability threshold epsilon k 05 To check the integrity of the PRKGC model we created a simple baseline model shortest path model It outputs a candidate entity with the shortest path length from a query entity on GS as an answer Similarly to the PRKGC model it traverses the path to generate NLDs It outputs Unanswerable if i a query entity is not reachable to any candidate entities on GS or ii the shortest path length is more than 3 As shown in Table TABREF37 the PRKGC models learned to reason over more than simple shortest paths Yet the PRKGC model do not give considerably good results which indicates the nontriviality of RCQEDrm E Although the PRKGC model do not receive supervision about humangenerated NLDs paths with the maximum score match humangenerated NLDs to some extent Supervising path attentions the PRKGCNS model is indeed effective for improving the human interpretability of generated NLDs It also improves the generalization ability of question answering We speculate that Ld functions as a regularizer which helps models to learn reasoning that helpful beyond training data This observation is consistent with previous work where an evidence selection task is learned jointly with a main task BIBREF11 BIBREF2 BIBREF5 As shown in Table TABREF43 as the required derivation step increases the PRKGCNS model suffers from predicting answer entities and 
generating correct NLDs This indicates that the challenge of RCQEDrm E is in how to extract relevant information from supporting documents and synthesize these multiple facts to derive an answer To obtain further insights we manually analyzed generated NLDs Table TABREF44 a illustrates a positive example where the model identifies that altudoceras belongs to pseudogastrioceratinae and that pseudogastrioceratinae is a subfamily of paragastrioceratidae Some supporting sentences are already similar to humangenerated NLDs thus simply extracting textual relations works well for some problems On the other hand typical derivation error is from nonhuman readable textual relations In b the model states that bumped has a relationship of with hands up which is originally extracted from one of supporting sentences It contains the UK Top 60 singles Bumped Hands Up 4 Lovers and This provides a useful clue for answer prediction but is not suitable as a derivation One may address this issue by incorporating for example a relation extractor or a paraphrasing mechanism using recent advances of conditional language models BIBREF20 To check the integrity of our baseline models we compare our baseline models with existing neural models tailored for QA under the pure WikiHop setting ie evaluation with only an accuracy of predicted answers Note that these existing models do not output derivations We thus cannot make a direct comparison so it servers as a reference purpose Because WikiHop has no answerability task we enforced the PRKGC model to always output answers As shown in Table TABREF45 the PRKGC models achieve a comparable performance to other sophisticated neural models There exists few RC datasets annotated with explanations Table TABREF50 The most similar work to ours is Science QA dataset BIBREF21 BIBREF22 BIBREF23 which provides a small set of NLDs annotated for analysis purposes By developing the scalable crowdsourcing framework our work provides one orderofmagnitude larger NLDs which can be used as a benchmark more reliably In addition it provides the community with new types of challenges not included in HotpotQA There is a large body of work on analyzing the nature of RC datasets motivated by the question to what degree RC models understand natural language BIBREF3 BIBREF4 Several studies suggest that current RC datasets have unintended bias which enables RC systems to rely on a cheap heuristics to answer questions For instance Sugawara2018 show that some of these RC datasets contain a large number of easy questions that can be solved by a cheap heuristics eg by looking at a first few tokens of questions Responding to their findings we take a step further and explore the new task of RC that requires RC systems to give introspective explanations as well as answers In addition recent studies show that current RC models and NLP models are vulnerable to adversarial examples BIBREF29 BIBREF30 BIBREF31 Explicit modeling of NLDs is expected to reguralize RC models which could prevent RC models strong dependence on unintended bias in training data eg annotation artifact BIBREF32 BIBREF8 BIBREF2 BIBREF5 as partially confirmed in Section SECREF46 There are existing NLP tasks that require models to output explanations Table TABREF50 FEVER BIBREF25 requires a system to judge the factness of a claim as well as to identify justification sentences As discussed earlier we take a step further from justification explanations to provide new challenges for NLU Several datasets are annotated with introspective 
explanations ranging from textual entailments BIBREF8 to argumentative texts BIBREF26 BIBREF27 BIBREF33 All these datasets offer the classification task of single sentences or sentence pairs The uniqueness of our dataset is that it measures a machines ability to extract relevant information from a set of documents and to build coherent logical reasoning steps Towards RC models that can perform correct reasoning we have proposed RCQED that requires a system to output its introspective explanations as well as answers Instantiating RCQED with entitybased multihop QA RCQEDrm E we have created a largescale corpus of NLDs The developed crowdsourcing annotation framework can be used for annotating other QA datasets with derivations Our experiments using two simple baseline models have demonstrated that RCQEDrm E is a nontrivial task and that it indeed provides a challenging task of extracting and synthesizing relevant facts from supporting documents We will make the corpus of reasoning annotations and baseline systems publicly available at httpsnaoyaigithubiorcqed One immediate future work is to expand the annotation to nonentitybased multihop QA datasets such as HotpotQA BIBREF2 For modeling we plan to incorporate a generative mechanism based on recent advances in conditional language modeling Table TABREF53 shows examples of crowdsourced annotations </s>
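The path-attention scoring at the heart of the PR-KGC baseline can be summarised in a short sketch: each sampled path between the question entity and a candidate is scored by an MLP against the query triplet, the scores are normalised into attention weights, and the attended path representation is combined with the query to produce an answer probability. The code below is a minimal illustration under the assumption that entity, relation, and path vectors are already available as 100-dimensional tensors (the paper obtains them with LSTM and BiLSTM encoders); it is not the released baseline.

```python
import torch
import torch.nn as nn

DIM = 100  # entity, relation and path representations share this size in the paper

class PRKGCScorer(nn.Module):
    def __init__(self, dim=DIM):
        super().__init__()
        # One MLP scores each path for attention, another scores the query plus
        # the attended path vector for the final answer probability.
        self.attn_mlp = nn.Sequential(nn.Linear(4 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.out_mlp = nn.Sequential(nn.Linear(4 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, q, r, c, paths):
        # q, r, c: (dim,) vectors; paths: (num_paths, dim) path representations.
        query = torch.cat([q, r, c])                              # (3*dim,)
        expanded = query.unsqueeze(0).expand(paths.size(0), -1)
        scores = self.attn_mlp(torch.cat([expanded, paths], dim=1)).squeeze(1)
        alpha = torch.softmax(scores, dim=0)                      # path attention
        pooled = (alpha.unsqueeze(1) * paths).sum(dim=0)          # attended path vector
        prob = torch.sigmoid(self.out_mlp(torch.cat([query, pooled])))
        return prob.squeeze(), alpha

scorer = PRKGCScorer()
q, r, c = (torch.randn(DIM) for _ in range(3))
paths = torch.randn(5, DIM)                                       # five sampled paths
probability, attention = scorer(q, r, c, paths)
# The path with the largest attention weight is read out as the predicted derivation.
best_path = int(attention.argmax())
```

The supervised variant discussed above (PR-KGC+NS) would add a cross-entropy term over the returned attention weights against the gold path indicator, alongside the binary cross-entropy loss on the answer probability.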
<s>[INST] Event Outcome Prediction using Sentiment Analysis and Crowd Wisdom in Microblog Feeds [/INST] Over the past few years microblogs have become one of the most popular online social networks Microblogging websites have evolved to become a source of varied kinds of information This is due to the nature of microblogs people post realtime messages about their opinions and express sentiment on a variety of topics discuss current issues complain etc Twitter is one such popular microblogging service where users create status messages called tweets With over 400 million tweets per day on Twitter microblog users generate large amount of data which cover very rich topics ranging from politics sports to celebrity gossip Because the user generated content on microblogs covers rich topics and expresses sentimentopinions of the mass mining and analyzing this information can prove to be very beneficial both to the industrial and the academic community Tweet classification has attracted considerable attention because it has become very important to analyze peoples sentiments and opinions over social networks Most of the current work on analysis of tweets is focused on sentiment analysis BIBREF0 BIBREF1 BIBREF2 Not much has been done on predicting outcomes of events based on the tweet sentiments for example predicting winners of presidential debates based on the tweets by analyzing the users sentiments This is possible intuitively because the sentiment of the users in their tweets towards the candidates is proportional to the performance of the candidates in the debate In this paper we analyze three such events 1 US Presidential Debates 201516 2 Grammy Awards 2013 and 3 Super Bowl 2013 The main focus is on the analysis of the presidential debates For the Grammys and the Super Bowl we just perform sentiment analysis and try to predict the outcomes in the process For the debates in addition to the analysis done for the Grammys and Super Bowl we also perform a trend analysis Our analysis of the tweets for the debates is 3fold as shown below Sentiment Perform a sentiment analysis on the debates This involves building a machine learning model which learns the sentimentcandidate pair candidate is the one to whom the tweet is being directed from the training data and then using this model to predict the sentimentcandidate pairs of new tweets Predicting Outcome Here after predicting the sentimentcandidate pairs on the new data we predict the winner of the debates based on the sentiments of the users Trends Here we analyze certain trends of the debates like the change in sentiments of the users towards the candidates over time hours days months and how the opinion of experts such as Washington Post affect the sentiments of the users For the sentiment analysis we look at our problem in a multilabel setting our two labels being sentiment polarity and the candidatecategory in consideration We test both singlelabel classifiers and multilabel ones on the problem and as intuition suggests the multilabel classifier RaKel performs better A combination of documentembedding features BIBREF3 and topic features essentially the documenttopic probabilities BIBREF4 is shown to give the best results These features make sense intuitively because the documentembedding features take context of the text into account which is important for sentiment polarity classification and topic features take into account the topic of the tweet whowhat is it about The prediction of outcomes of debates is very interesting in our case Most of 
the results seem to match with the views of some experts such as the political pundits of the Washington Post This implies that certain rules that were used to score the candidates in the debates by saidexperts were in fact reflected by reading peoples sentiments expressed over social media This opens up a wide variety of learning possibilities from users sentiments on social media which is sometimes referred to as the wisdom of crowd We do find out that the public sentiments are not always coincident with the views of the experts In this case it is interesting to check whether the views of the experts can affect the public for example by spreading through the social media microblogs such as Twitter Hence we also conduct experiments to compare the public sentiment before and after the experts views become public and thus notice the impact of the experts views on the public sentiment In our analysis of the debates we observe that in certain debates such as the 5th Republican Debate held on December 15 2015 the opinions of the users vary from the experts But we see the effect of the experts on the sentiment of the users by looking at their opinions of the same candidates the next day Our contributions are mainly we want to see how predictive the sentimentopinion of the users are in social media microblogs and how it compares to that of the experts In essence we find that the crowd wisdom in the microblog domain matches that of the experts in most cases There are cases however where they dont match but we observe that the crowds sentiments are actually affected by the experts This can be seen in our analysis of the presidential debates The rest of the paper is organized as follows in section SECREF2 we review some of the literature In section SECREF3 we discuss the collection and preprocessing of the data Section SECREF4 details the approach taken along with the features and the machine learning methods used Section SECREF7 discusses the results of the experiments conducted and lastly section SECREF8 ends with a conclusion on the results including certain limitations and scopes for improvement to work on in the future Sentiment analysis as a Natural Language Processing task has been handled at many levels of granularity Specifically on the microblog front some of the early results on sentiment analysis are by BIBREF0 BIBREF1 BIBREF2 BIBREF5 BIBREF6 Go et al BIBREF0 applied distant supervision to classify tweet sentiment by using emoticons as noisy labels Kouloumpis et al BIBREF7 exploited hashtags in tweets to build training data Chenhao Tan et al BIBREF8 determined userlevel sentiments on particular topics with the help of the social network graph There has been some work in event detection and extraction in microblogs as well In BIBREF9 the authors describe a way to extract major life events of a user based on tweets that either congratulateoffer condolences BIBREF10 build a keyword graph from the data and then detect communities in this graph cluster to find events This works because words that describe similar events will form clusters In BIBREF11 the authors use distant supervision to extract events There has also been some work on event retrieval in microblogs BIBREF12 In BIBREF13 the authors detect time points in the twitter stream when an important event happens and then classify such events based on the sentiments they evoke using only nontextual features to do so In BIBREF14 the authors study how much of the opinion extracted from Online Social Networks OSN user data is reflective 
of the opinion of the larger population Researchers have also mined Twitter dataset to analyze public reaction to various events from election debate performance BIBREF15 where the authors demonstrate visuals and metrics that can be used to detect sentiment pulse anomalies in that pulse and indications of controversial topics that can be used to inform the design of visual analytic systems for social media events to movie boxoffice predictions on the release day BIBREF16 Mishne and Glance BIBREF17 correlate sentiments in blog posts with movie boxoffice scores The correlations they observed for positive sentiments are fairly low and not sufficient to use for predictive purposes Recently several approaches involving machine learning and deep learning have also been used in the visual and language domains BIBREF18 BIBREF19 BIBREF20 BIBREF21 BIBREF22 BIBREF23 BIBREF24 Twitter is a social networking and microblogging service that allows users to post realtime messages called tweets Tweets are very short messages a maximum of 140 characters in length Due to such a restriction in length people tend to use a lot of acronyms shorten words etc In essence the tweets are usually very noisy There are several aspects to tweets such as 1 Target Users use the symbol in their tweets to refer to other users on the microblog 2 Hashtag Hashtags are used by users to mark topics This is done to increase the visibility of the tweets We conduct experiments on 3 different datasets as mentioned earlier 1 US Presidential Debates 2 Grammy Awards 2013 3 Superbowl 2013 To construct our presidential debates dataset we have used the Twitter Search API to collect the tweets Since there was no publicly available dataset for this we had to collect the data manually The data was collected on 10 different presidential debates 7 republican and 3 democratic from October 2015 to March 2016 Different hashtags like GOP GOPDebate were used to filter out tweets specific to the debate This is given in Table TABREF2 We extracted only english tweets for our dataset We collected a total of 104961 tweets were collected across all the debates But there were some limitations with the API Firstly the server imposes a rate limit and discards tweets when the limit is reached The second problem is that the API returns many duplicates Thus after removing the duplicates and irrelevant tweets we were left with a total of 17304 tweets This includes the tweets only on the day of the debate We also collected tweets on the days following the debate As for the other two datasets we collected them from availableonline repositories There were a total of 2580062 tweets for the Grammy Awards 2013 and a total of 2428391 tweets for the Superbowl 2013 The statistics are given in Tables TABREF3 and TABREF3 The tweets for the grammy were before the ceremony and during However we only use the tweets before the ceremony after the nominations were announced to predict the winners As for the superbowl the tweets collected were during the game But we can predict interesting things like Most Valuable Player etc from the tweets The tweets for both of these datasets were annotated and thus did not require any human intervention However the tweets for the debates had to be annotated Since we are using a supervised approach in this paper we have all the tweets for debates in the training set humanannotated The tweets were already annotated for the Grammys and Super Bowl Some statistics about our datasets are presented in Tables TABREF3 TABREF3 and TABREF3 The 
annotations for the debate dataset comprised of 2 labels for each tweet 1 Candidate This is the candidate of the debate to whom the tweet refers to 2 Sentiment This represents the sentiment of the tweet towards that candidate This is either positive or negative The task then becomes a case of multilabel classification The candidate labels are not so trivial to obtain because there are tweets that do not directly contain any candidates name For example the tweets a business man for president and a doctor might sure bring about a change in America are about Donal Trump and Ben Carson respectively Thus it is meaningful to have a multilabel task The annotations for the other two datasets are similar in that one of the labels was the sentiment and the other was categorydependent in the outcomeprediction task as discussed in the sections below For example if we are trying to predict the Album of the Year winners for the Grammy dataset the second label would be the nominees for that category album of the year As noted earlier tweets are generally noisy and thus require some preprocessing done before using them Several filters were applied to the tweets such as 1 Usernames Since users often include usernames in their tweets to direct their message we simplify it by replacing the usernames with the token USER For example michael will be replaced by USER 2 URLs In most of the tweets users include links that add on to their text message We convertreplace the link address to the token URL 3 Repeated Letters Oftentimes users use repeated letters in a word to emphasize their notion For example the word lol which stands for laugh out loud is sometimes written as looooool to emphasize the degree of funnyness We replace such repeated occurrences of letters more than 2 with just 3 occurrences We replace with 3 occurrences and not 2 so that we can distinguish the exaggerated usage from the regular ones 4 Multiple Sentiments Tweets which contain multiple sentiments are removed such as I hate Donald Trump but I will vote for him This is done so that there is no ambiguity 5 Retweets In Twitter many times tweets of a person are copied and posted by another user This is known as retweeting and they are commonly abbreviated with RT These are removed and only the original tweets are processed 6 Repeated Tweets The Twitter API sometimes returns a tweet multiple times We remove such duplicates to avoid putting extra weight on any particular tweet Our analysis of the debates is 3fold including sentiment analysis outcome prediction and trend analysis Sentiment Analysis To perform a sentiment analysis on the debates we first extract all the features described below from all the tweets in the training data We then build the different machine learning models described below on these set of features After that we evaluate the output produced by the models on unseen test data The models essentially predict candidatesentiment pairs for each tweet Outcome Prediction Predict the outcome of the debates After obtaining the sentiments on the test data for each tweet we can compute the net normalized sentiment for each presidential candidate in the debate This is done by looking at the number of positive and negative sentiments for each candidate We then normalize the sentiment scores of each candidate to be in the same scale 01 After that we rank the candidates based on the sentiment scores and predict the top k as the winners Trend Analysis We also analyze some certain trends of the debates Firstly we look at the change in 
sentiments of the users towards the candidates over time hours days months This is done by computing the sentiment scores for each candidate in each of the debates and seeing how it varies over time across debates Secondly we examine the effect of Washington Post on the views of the users This is done by looking at the sentiments of the candidates to predict winners of a debate before and after the winners are announced by the experts in Washington Post This way we can see if Washington Post has had any effect on the sentiments of the users Besides that to study the behavior of the users we also look at the correlation of the tweet volume with the number of viewers as well as the variation of tweet volume over time hours days months for debates As for the Grammys and the Super Bowl we only perform the sentiment analysis and predict the outcomes We compare 4 different models for performing our task of sentiment classification We then pick the best performing model for the task of outcome prediction Here we have two categories of algorithms singlelabel and multilabel We already discussed above why it is meaningful to have a multilabel task earlier because one can represent candidate sentiment as a single class label or have candidate and sentiment as two separate labels They are listed below Naive Bayes We use a multinomial Naive Bayes model A tweet t is assigned a class c such that where there are m features and fi represents the ith feature Support Vector Machines Support Vector Machines SVM constructs a hyperplane or a set of hyperplanes in a highdimensional space which can then be used for classification In our case we use SVM with Sequential Minimal Optimization SMO BIBREF25 which is an algorithm for solving the quadratic programming QP problem that arises during the training of SVMs Elman Recurrent Neural Network Recurrent Neural Networks RNNs are gaining popularity and are being applied to a wide variety of problems They are a class of artificial neural networks where connections between units form a directed cycle This creates an internal state of the network which allows it to exhibit dynamic temporal behavior The Elman RNN was proposed by Jeff Elman in the year 1990 BIBREF26 We use this in our task RAkEL RAndom k labELsets RAkEL BIBREF27 is a multilabel classification algorithm that uses labeled powerset LP transformation it basically creates a single binary classifier for every label combination and then uses multiple LP classifiers each trained on a random subset of the actual labels for classification In order to classify the tweets a set of features is extracted from each of the tweets such as ngram partofspeech etc The details of these features are given below ngram This represents the frequency counts of ngrams specifically that of unigrams and bigrams punctuation The number of occurrences of punctuation symbols such as commas exclamation marks etc POS partofspeech The frequency of each POS tagger is used as the feature prior polarity scoring Here we obtain the prior polarity of the words BIBREF6 using the Dictionary of Affect in Language DAL BIBREF28 This dictionary DAL of about 8000 English words assigns a pleasantness score to each word on a scale of 13 After normalizing we can assign the words with polarity higher than 08 as positive and less than 05 as negative If a word is not present in the dictionary we lookup its synonyms in WordNet if this word is there in the dictionary we assign the original word its synonyms score Twitter Specific features Number of hashtags 
symbol Number of mentioning users symbol Number of hyperlinks Document embedding features Here we use the approach proposed by Mikolov et al BIBREF3 to embed an entire tweet into a vector of features Topic features Here LDA Latent Dirichlet Allocation BIBREF4 is used to extract topicspecific features for a tweet document This is basically the topicdocument probability that is outputted by the model In the following experiments we use 1gram 2gram and 12gram to denote unigram bigram and a combination of unigram and bigram features respectively We also combine punctuation and the other features as miscellaneous features and use MISC to denote this We represent the documentembedding features by DOC and topicspecific features by TOPIC In this section we analyze the presidential debates data and show some trends First we look at the trend of the tweet frequency Figure FIGREF21 shows the trends of the tweet frequency and the number of TV viewers as the debates progress over time We observe from Figures FIGREF21 and FIGREF21 that for the first 5 debates considered the trend of the number of TV viewers matches the trend of the number of tweets However we can see that towards the final debates the frequency of the tweets decreases consistently This shows an interesting fact that although the people still watch the debates the number of people who tweet about it are greatly reduced But the tweeting community are mainly youngsters and this shows that most of the tweeting community who actively tweet didnt watch the later debates Because if they did then the trends should ideally match Next we look at how the tweeting activity is on days of the debate specifically on the day of the debate the next day and two days later Figure FIGREF22 shows the trend of the tweet frequency around the day of the 5th republican debate ie December 15 2015 As can be seen the average number of people tweet more on the day of the debate than a day or two after it This makes sense intuitively because the debate would be fresh in their heads Then we look at how people tweet in the hours of the debate specifically during the debate one hour after and then two hours after Figure FIGREF23 shows the trend of the tweet frequency around the hour of the 5th republican debate ie December 15 2015 We notice that people dont tweet much during the debate but the activity drastically increases after two hours This might be because people were busy watching the debate and then taking some time to process things so that they can give their opinion We have seen the frequency of tweets over time in the previous trends Now we will look at how the sentiments of the candidates change over time First Figure FIGREF24 shows how the sentiments of the candidates changed across the debates We find that many of the candidates have had ups and downs towards in the debates But these trends are interesting in that it gives some useful information about what went down in the debate that caused the sentiments to change sometimes drastically For example if we look at the graph for Donald Trump we see that his sentiment was at its lowest point during the debate held on December 15 Looking into the debate we can easily see why this was the case At a certain point in the debate Trump was asked about his ideas for the nuclear triad It is very important that a presidential candidate knows about this but Trump had no idea what the nuclear triad was and in a transparent attempt to cover his tracks resorted to a we need to be strong speech It can be due to this 
embarrassment that his sentiment went down during this debate Next we investigate how the sentiments of the users towards the candidates change before and after the debate In essence we examine how the debate and the results of the debates given by the experts affects the sentiment of the candidates Figure FIGREF25 shows the sentiments of the users towards the candidate during the 5th Republican Debate 15th December 2015 It can be seen that the sentiments of the users towards the candidates does indeed change over the course of two days One particular example is that of Jeb Bush It seems that the populace are generally prejudiced towards the candidates which is reflected in their sentiments of the candidates on the day of the debate The results of the Washington Post are released in the morning after the debate One can see the winners suggested by the Washington Post in Table TABREF35 One of the winners in that debate according to them is Jeb Bush Coincidentally Figure FIGREF25 suggests that the sentiment of Bush has gone up one day after the debate essentially one day after the results given by the experts are out There is some influence for better or worse of these experts on the minds of the users in the early debates but towards the final debates the sentiments of the users are mostly unwavering as can be seen in Figure FIGREF25 Figure FIGREF25 is on the last Republican debate and suggests that the opinions of the users do not change much with time Essentially the users have seen enough debates to make up their own minds and their sentiments are not easily wavered In this section we define the different evaluation metrics that we use for different tasks We have two tasks at hand 1 Sentiment Analysis 2 Outcome Prediction We use different metrics for these two tasks In the study of sentiment analysis we use Hamming Loss to evaluate the performance of the different methods Hamming Loss based on Hamming distance takes into account the prediction error and the missing error normalized over the total number of classes and total number of examples BIBREF29 The Hamming Loss is given below where D is the number of examples in the dataset and L is the number of labels Si and Yi denote the sets of true and predicted labels for instance i respectively oplus stands for the XOR operation BIBREF30 Intuitively the performance is better when the Hamming Loss is smaller 0 would be the ideal case For the case of outcome prediction we will have a predicted set and an actual set of results Thus we can use common information retrieval metrics to evaluate the prediction performance Those metrics are listed below Mean Fmeasure Fmeasure takes into account both the precision and recall of the results In essence it takes into account how many of the relevant results are returned and also how many of the returned results are relevant where D is the number of queries debatescategories for grammy winners etc Pi and Ri are the precision and recall for the ith query Mean Average Precision As a standard metric used in information retrieval Mean Average Precision for a set of queries is mean of the average precision scores for each query where D is the number of queries eg debates Pik is the precision at k Pk for the ith query relik is an indicator function equaling 1 if the document at position k for the ith query is relevant else 0 and RDi is the number of relevant documents for the ith query We use 3 different datasets for the problem of sentiment analysis as already mentioned We test the four different algorithms 
mentioned in Section SECREF6 with a different combination of features that are described in Section SECREF10 To evaluate our models we use the Hamming Loss metric as discussed in Section SECREF6 We use this metric because our problem is in the multiclass classification domain However the singlelabel classifiers like SVM Naive Bayes Elman RNN cannot be evaluated against this metric directly To do this we split the predicted labels into a format that is consistent with that of multilabel classifiers like RaKel The results of the experiments for each of the datasets are given in Tables TABREF34 TABREF34 and TABREF34 In the table f1 f2 f3 f4 f5 and f6 denote the features 1gram 2gram 12gram 12gram MISC DOC and DOC TOPIC respectively Note that lower values of hamming losses are more desirable We find that RaKel performs the best out of all the algorithms RaKel is more suited for the task because our task is a multiclass classification Among all the singlelabel classifiers SVM performs the best We also observe that as we use more complex feature spaces the performance increases This is true for almost all of the algorithms listed Our best performing features is a combination of paragraph embedding features and topic features from LDA This makes sense intuitively because paragraphembedding takes into account the context in the text Context is very important in determining the sentiment of a short text having negative words in the text does not always mean that the text contains a negative sentiment For example the sentence never say never is not a bad thing has many negative words but the sentence as a whole does not have a negative sentiment This is why we need some kind of context information to accurately determine the sentiment Thus with these embedded features one would be able to better determine the polarity of the sentence The other label is the entity candidatesong etc in consideration Topic features here make sense because this can be considered as the topic of the tweet in some sense This can be done because that label captures what or whom the tweet is about In this section we show the results for the outcomeprediction of the events RaKel as the best performing method is trained to predict the sentimentlabels for the unlabeled data The sentiment labels are then used to determine the outcome of the events In the Tables TABREF35 TABREF36 TABREF37 of outputs given we only show as many predictions as there are winners The results obtained for the outcome prediction task for the US presidential debates is shown in Table TABREF35 Table TABREF35 shows the winners as given in the Washington Post 3rd column and the winners that are predicted by our system 2nd column By comparing the set of results obtained from both the sources we find that the set of candidates predicted match to a large extent with the winners given out by the Washington Post The result suggests that the opinions of the social media community match with that of the journalists in most cases A Grammy Award is given to outstanding achievement in the music industry There are two types of awards General Field awards which are not restricted by genre and genrespecific awards Since there can be upto 80 categories of awards we only focus on the main 4 1 Album of the Year 2 Record of the Year 3 Song of the Year and 4 Best New Artist These categories are the main in the awards ceremony and most looked forward to That is also why we choose to predict the outcomes of these categories based on the tweets We use the tweets before the 
ceremony but after the nominations to predict the outcomes Basically we have a list of nominations for each category We filter the tweets based on these nominations and then predict the winner as with the case of the debates The outcomes are listed in Table TABREF36 We see that largely the opinion of the users on the social network agree with the deciding committee of the awards The winners agree for all the categories except Song of the Year The Super Bowl is the annual championship game of the National Football League We have collected the data for the year 2013 Here the match was between the Baltimore Ravens and the San Francisco 49ers The tweets that we have collected are during the game From these tweets one could trivially determine the winner But an interesting outcome would be to predict the Most Valuable Player MVP during the game To determine this all the tweets were looked at and we determined the candidate with the highest positive sentiment by the end of the game The result in Table TABREF37 suggests that we are able to determine the outcomes accurately Table TABREF43 displays some evaluation metrics for this task These were computed based on the predicted outcomes and the actual outcomes for each of the different datasets Since the number of participants varies from debatetodebate or categorytocategory for Grammy etc we cannot return a fixed number of winners for everything So the size of our returned rankedlist is set to half of the number of participants except for the MVP for Super Bowl there are so many players and returning half of them when only one of them is relevant is meaningless So we just return the top 10 players As we can see from the metrics the predicted outcomes match quite well with the actual ones or the ones given by the experts This paper presents a study that compares the opinions of users on microblogs which is essentially the crowd wisdom to that of the experts in the field Specifically we explore three datasets US Presidential Debates 201516 Grammy Awards 2013 Super Bowl 2013 We determined if the opinions of the crowd and the experts match by using the sentiments of the tweets to predict the outcomes of the debatesGrammysSuper Bowl We observed that in most of the cases the predictions were right indicating that crowd wisdom is indeed worth looking at and mining sentiments in microblogs is useful In some cases where there were disagreements however we observed that the opinions of the experts did have some influence on the opinions of the users We also find that the features that were most useful in our case of multilabel classification was a combination of the documentembedding and topic features </s>
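The entry above evaluates the multi-label sentiment task with Hamming Loss and the outcome-prediction task with Mean Average Precision. The snippet below is a minimal, self-contained sketch of how those two metrics could be computed; the function names, the binary label-vector representation, and the toy inputs are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of the two evaluation metrics discussed above: Hamming Loss for
# multi-label sentiment classification and Mean Average Precision for outcome
# prediction. Label vectors are assumed to be fixed-length binary lists.

def hamming_loss(true_labels, pred_labels):
    """Average fraction of label slots where prediction and truth disagree (XOR)."""
    num_examples = len(true_labels)
    num_labels = len(true_labels[0])
    total = 0
    for y_true, y_pred in zip(true_labels, pred_labels):
        total += sum(t != p for t, p in zip(y_true, y_pred))  # XOR over label positions
    return total / (num_examples * num_labels)

def mean_average_precision(relevant_sets, ranked_lists):
    """Mean over queries of average precision, normalized by the number of relevant items."""
    ap_scores = []
    for relevant, ranking in zip(relevant_sets, ranked_lists):
        hits, precision_sum = 0, 0.0
        for k, item in enumerate(ranking, start=1):
            if item in relevant:
                hits += 1
                precision_sum += hits / k  # precision at rank k, counted only at relevant ranks
        ap_scores.append(precision_sum / max(len(relevant), 1))
    return sum(ap_scores) / len(ap_scores)

# Toy usage: two tweets with 3 possible labels, and two "queries" (e.g. debates).
print(hamming_loss([[1, 0, 1], [0, 1, 0]], [[1, 1, 1], [0, 1, 0]]))
print(mean_average_precision([{"bush"}, {"trump"}],
                             [["bush", "cruz"], ["cruz", "trump"]]))
```

On the toy inputs the Hamming Loss is 1/6 (one wrong label slot out of six) and the Mean Average Precision is 0.75.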
<s>[INST] Learning High-order Structural and Attribute information by Knowledge Graph Attention Networks for Enhancing Knowledge Graph Embedding [/INST] In the past decade many largescale Knowledge Graphs KGs such as Freebase BIBREF0 DBpedia BIBREF1 and YAGO BIBREF2 have been built to represent human complex knowledge about the realworld in the machinereadable format The facts in KGs are usually encoded in the form of triples textit head entity relation textit tail entity denoted h r t in this study through the Resource Description Framework egtextit Donald Trump Born In textit New York City Figure FIGREF2 shows the subgraph of knowledge graph about the family of Donald Trump In many KGs we can observe that some relations indicate attributes of entities such as the textit Born and textit Abstract in Figure FIGREF2 and others indicates the relations between entities the head entity and tail entity are real world entity Hence the relationship in KG can be divided into relations and attributes and correspondingly two types of triples namely relation triples and attribute triples BIBREF3 A relation triples in KGs represents relationship between entities egtextit Donald TrumpFather of textit Ivanka Trump while attribute triples denote a literal attribute value of an entity egtextit Donald TrumpBorn textit June 14 1946 Knowledge graphs have became important basis for many artificial intelligence applications such as recommendation system BIBREF4 question answering BIBREF5 and information retrieval BIBREF6 which is attracting growing interests in both academia and industry communities A common approach to apply KGs in these artificial intelligence applications is through embedding which provide a simple method to encode both entities and relations into a continuous lowdimensional embedding spaces Hence learning distributional representation of knowledge graph has attracted many research attentions in recent years TransE BIBREF7 is a seminal work in representation learning lowdimensional vectors for both entities and relations The basic idea behind TransE is that the embedding textbf t of tail entity should be close to the head entitys embedding textbf r plus the relation vector textbf t if h r t holds which indicates textbf htextbf rapprox textbf t This model provide a flexible way to improve the ability in completing the KGs such as predicating the missing items in knowledge graph Since then several methods like TransH BIBREF8 and TransR BIBREF9 which represent the relational translation in other effective forms have been proposed Recent attempts focused on either incorporating extra information beyond KG triples BIBREF10 BIBREF11 BIBREF12 BIBREF13 or designing more complicated strategies BIBREF14 BIBREF15 BIBREF16 While these methods have achieved promising results in KG completion and link predication existing knowledge graph embedding methods still have room for improvement First TransE and its most extensions only take direct relations between entities into consideration We argue that the highorder structural relationship between entities also contain rich semantic relationships and incorporating these information can improve model performance For example the fact textit Donald TrumpstackrelFather oflongrightarrow textit Ivanka TrumpstackrelSpouselongrightarrow textit Jared Kushner indicates the relationship between entity Donald Trump and entity Jared Kushner Several pathbased methods have attempted to take multiplestep relation paths into consideration for learning highorder structural 
information of KGs BIBREF17 BIBREF18 But note that huge number of paths posed a critical complexity challenge on these methods In order to enable efficient path modeling these methods have to make approximations by sampling or applying path selection algorithm We argue that making approximations has a large impact on the final performance Second to the best of our knowledge most existing knowledge graph embedding methods just leverage relation triples of KGs while ignoring a large number of attribute triples Therefore these methods easily suffer from sparseness and incompleteness of knowledge graph Even worse structure information usually cannot distinguish the different meanings of relations and entities in different triples We believe that these rich information encoded in attribute triples can help explore rich semantic information and further improve the performance of knowledge graph For example we can learn date of birth and abstraction from values of Born and Abstract about Donald Trump in Figure FIGREF2 There are a huge number of attribute triples in real KGs for example the statistical results in BIBREF3 shows attribute triples are three times as many as relationship triples in English DBpedia 201604 Recent a few attempts try to incorporate attribute triples BIBREF11 BIBREF12 However these are two limitations existing in these methods One is that only a part of attribute triples are used in the existing methods such as only entity description is used in BIBREF12 The other is some attempts try to jointly model the attribute triples and relation triples in one unified optimization problem The loss of two kinds triples has to be carefully balanced during optimization For example BIBREF3 use hyperparameters to weight the loss of two kinds triples in their models Considering limitations of existing knowledge graph embedding methods we believe it is of critical importance to develop a model that can capture both highorder structural and attribute information of KGs in an efficient explicit and unified manner Towards this end inspired by the recent developments of graph convolutional networks GCN BIBREF19 which have the potential of achieving the goal but have not been explored much for knowledge graph embedding we propose Knowledge Graph Attention Networks for Enhancing Knowledge Graph Embedding KANE The key ideal of KANE is to aggregate all attribute triples with bias and perform embedding propagation based on relation triples when calculating the representations of given entity Specifically two carefully designs are equipped in KANE to correspondingly address the above two challenges 1 recursive embedding propagation based on relation triples which updates a entity embedding Through performing such recursively embedding propagation the highorder structural information of kGs can be successfully captured in a linear time complexity and 2 multihead attentionbased aggregation The weight of each attribute triples can be learned through applying the neural attention mechanism BIBREF20 In experiments we evaluate our model on two KGs tasks including knowledge graph completion and entity classification Experimental results on three datasets shows that our method can significantly outperforms stateofarts methods The main contributions of this study are as follows 1 We highlight the importance of explicitly modeling the highorder structural and attribution information of KGs to provide better knowledge graph embedding 2 We proposed a new method KANE which achieves can capture both highorder 
structural and attribute information of KGs in an efficient explicit and unified manner under the graph convolutional networks framework 3 We conduct experiments on three datasets demonstrating the effectiveness of KANE and its interpretability in understanding the importance of highorder relations In recent years there are many efforts in Knowledge Graph Embeddings for KGs aiming to encode entities and relations into a continuous lowdimensional embedding spaces Knowledge Graph Embedding provides a very simply and effective methods to apply KGs in various artificial intelligence applications Hence Knowledge Graph Embeddings has attracted many research attentions in recent years The general methodology is to define a score function for the triples and finally learn the representations of entities and relations by minimizing the loss function frht which implies some types of transformations on textbf h and textbf t TransE BIBREF7 is a seminal work in knowledge graph embedding which assumes the embedding textbf t of tail entity should be close to the head entitys embedding textbf r plus the relation vector textbf t when h r t holds as mentioned in section Introduction Hence TransE defines the following loss function TransE regarding the relation as a translation between head entity and tail entity is inspired by the word2vec BIBREF21 where relationships between words often correspond to translations in latent feature space This model achieves a good tradeoff between computational efficiency and accuracy in KGs with thousands of relations but this model has flaws in dealing with onetomany manytoone and manytomany relations In order to address this issue TransH BIBREF8 models a relation as a relationspecific hyperplane together with a translation on it allowing entities to have distinct representation in different relations TransR BIBREF9 models entities and relations in separate spaces ie entity space and relation spaces and performs translation from entity spaces to relation spaces TransD BIBREF22 captures the diversity of relations and entities simultaneously by defining dynamic mapping matrix Recent attempts can be divided into two categories i those which tries to incorporate additional information to further improve the performance of knowledge graph embedding eg entity types or concepts BIBREF13 relations paths BIBREF17 textual descriptions BIBREF11 BIBREF12 and logical rules BIBREF23 ii those which tries to design more complicated strategies eg deep neural network models BIBREF24 Except for TransE and its extensions some efforts measure plausibility by matching latent semantics of entities and relations The basic idea behind these models is that the plausible triples of a KG is assigned low energies For examples Distant Model BIBREF25 defines two different projections for head and tail entity in a specific relation ie textbf Mr1 and textbf Mr2 It represents the vectors of head and tail entity can be transformed by these two projections The loss function is frhttextbf Mr1textbf htextbf Mr2textbf t1 Our KANE is conceptually advantageous to existing methods in that 1 it directly factors highorder relations into the predictive model in linear time which avoids the labor intensive process of materializing paths thus is more efficient and convenient to use 2 it directly encodes all attribute triples in learning representation of entities which can capture rich semantic information and further improve the performance of knowledge graph embedding and 3 KANE can directly factors highorder 
relations and attribute information into the predictive model in an efficient explicit and unified manner thus all related parameters are tailored for optimizing the embedding objective In this study wo consider two kinds of triples existing in KGs relation triples and attribute triples Relation triples denote the relation between entities while attribute triples describe attributes of entities Both relation and attribute triples denotes important information about entity we will take both of them into consideration in the task of learning representation of entities We let I denote the set of IRIs Internationalized Resource Identifier B are the set of blank nodes and L are the set of literals denoted by quoted strings The relation triples and attribute triples can be formalized as follows Definition 1 Relation and Attribute Triples A set of Relation triples TR can be represented by TR subset E times R times E where E subset I cup B is set of entities R subset I is set of relations between entities Similarly TA subset E times R times A is the set of attribute triples where A subset I cup B cup L is the set of attribute values Definition 2 Knowledge Graph A KG consists of a combination of relation triples in the form of h r tin TR and attribute triples in form of h r ain TA Formally we represent a KG as GERATRTA where Elbrace hthrtin TR cup hrain TArbrace is set of entities R lbrace rhrtin TR cup hrain TArbrace is set of relations Albrace ahrain TArbrace respectively The purpose of this study is try to use embeddingbased model which can capture both highorder structural and attribute information of KGs that assigns a continuous representations for each element of triples in the form textbf h textbf r textbf t and textbf h textbf r textbf a where Boldfaced textbf hin mathbb Rk textbf rin mathbb Rk textbf tin mathbb Rk and textbf ain mathbb Rk denote the embedding vector of head entity h relation r tail entity t and attribute a respectively Next we detail our proposed model which models both highorder structural and attribute information of KGs in an efficient explicit and unified manner under the graph convolutional networks framework In this section we present the proposed model in detail We first introduce the overall framework of KANE then discuss the input embedding of entities relations and values in KGs the design of embedding propagation layers based on graph attention network and the loss functions for link predication and entity classification task respectively The process of KANE is illustrated in Figure FIGREF2 We introduce the architecture of KANE from left to right As shown in Figure FIGREF2 the whole triples of knowledge graph as input The task of attribute embedding lays is embedding every value in attribute triples into a continuous vector space while preserving the semantic information To capture both highorder structural information of KGs we used an attentionbased embedding propagation method This method can recursively propagate the embeddings of entities from an entitys neighbors and aggregate the neighbors with different weights The final embedding of entities relations and values are feed into two different deep neural network for two different tasks including link predication and entity classification The value in attribute triples usually is sentence or a word To encode the representation of value from its sentence or word we need to encode the variablelength sentences to a fixedlength vector In this study we adopt two different encoders to model the attribute value 
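As a concrete illustration of the step just described, compressing a variable-length attribute value into a fixed-length vector, the following is a minimal sketch of the two encoders introduced in the next paragraph: a bag-of-words sum of word embeddings and an LSTM whose final hidden state is kept. The PyTorch framework, class names, and toy dimensions are assumptions made for illustration; the paper does not prescribe a particular implementation.

```python
# Minimal sketch (PyTorch is an assumption) of the two attribute-value encoders
# described below: a bag-of-words encoder that sums word embeddings, and an LSTM
# encoder whose final hidden state represents the whole value.
import torch
import torch.nn as nn

class BagOfWordsEncoder(nn.Module):
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)

    def forward(self, word_ids):                 # word_ids: (batch, seq_len)
        return self.embed(word_ids).sum(dim=1)   # sum of word embeddings -> (batch, dim)

class LSTMEncoder(nn.Module):
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, word_ids):
        _, (h_n, _) = self.lstm(self.embed(word_ids))
        return h_n[-1]                           # final hidden state -> (batch, dim)

# Toy usage: a batch of two attribute values, each padded to 4 word ids.
ids = torch.tensor([[3, 7, 7, 0], [5, 2, 9, 1]])
print(BagOfWordsEncoder(100, 16)(ids).shape)     # torch.Size([2, 16])
print(LSTMEncoder(100, 16)(ids).shape)           # torch.Size([2, 16])
```

Both encoders map a padded batch of word ids of any length to one vector per attribute value, which is the form consumed by the embedding propagation layer described later in this section.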
BagofWords Encoder The representation of attribute value can be generated by a summation of all words embeddings of values We denote the attribute value a as a word sequence a w1wn where wi is the word at position i The embedding of textbf a can be defined as follows where textbf wiin mathbb Rk is the word embedding of wi BagofWords Encoder is a simple and intuitive method which can capture the relative importance of words But this method suffers in that two strings that contains the same words with different order will have the same representation LSTM Encoder In order to overcome the limitation of BagofWord encoder we consider using LSTM networks to encoder a sequence of words in attribute value into a single vector The final hidden state of the LSTM networks is selected as a representation of the attribute value where flstm is the LSTM network Next we describe the details of recursively embedding propagation method building upon the architecture of graph convolution network Moreover by exploiting the idea of graph attention network out method learn to assign varying levels of importance to entity in every entitys neighborhood and can generate attentive weights of cascaded embedding propagation In this study embedding propagation layer consists of two mainly components attentive embedding propagation and embedding aggregation Here we start by describing the attentive embedding propagation Attentive Embedding Propagation Considering an KG G the input to our layer is a set of entities relations and attribute values embedding We use textbf hin mathbb Rk to denote the embedding of entity h The neighborhood of entity h can be described by mathcal Nh lbrace tahrtin TR cup hrain TArbrace The purpose of attentive embedding propagation is encode mathcal Nh and output a vector vectextbf h as the new embedding of entity h In order to obtain sufficient expressive power one learnable linear transformation textbf Win mathbb Rkprime times k is adopted to transform the input embeddings into higher level feature space In this study we take a triple hrt as example and the output a vector vectextbf h can be formulated as follows where pi hrt is attention coefficients which indicates the importance of entitys t to entities h In this study the attention coefficients also control how many information being propagated from its neighborhood through the relation To make attention coefficients easily comparable between different entities the attention coefficient pi hrt can be computed using a softmax function over all the triples connected with h The softmax function can be formulated as follows Hereafter we implement the attention coefficients pi hrt through a singlelayer feedforward neural network which is formulated as follows where the leakyRelu is selected as activation function As shown in Equation DISPLAYFORM13 the attention coefficient score is depend on the distance head entity h and the tail entity t plus the relation r which follows the idea behind TransE that the embedding textbf t of head entity should be close to the tail entitys embedding textbf r plus the relation vector textbf t if h r t holds Embedding Aggregation To stabilize the learning process of attention we perform multihead attention on final layer Specifically we use m attention mechanism to execute the transformation of Equation DISPLAYFORM11 A aggregators is needed to combine all embeddings of multihead graph attention layer In this study we adapt two types of aggregators Concatenation Aggregator concatenates all embeddings of 
multihead graph attention followed by a nonlinear transformation where mathop Big Big represents concatenation pi hrti are normalized attention coefficient computed by the ith attentive embedding propagation and textbf Wi denotes the linear transformation of input embedding Averaging Aggregator sums all embeddings of multihead graph attention and the output embedding in the final is calculated applying averaging In order to encode the highorder connectivity information in KGs we use multiple embedding propagation layers to gathering the deep information propagated from the neighbors More formally the embedding of entity h in lth layers can be defined as follows After performing L embedding propagation layers we can get the final embedding of entities relations and attribute values which include both highorder structural and attribute information of KGs Next we discuss the loss functions of KANE for two different tasks and introduce the learning and optimization detail Here we introduce the learning and optimization details for our method Two different loss functions are carefully designed fro two different tasks of KG which include knowledge graph completion and entity classification Next details of these two loss functions are discussed knowledge graph completion This task is a classical task in knowledge graph representation learning community Specifically two subtasks are included in knowledge graph completion entity predication and link predication Entity predication aims to infer the impossible headtail entities in testing datasets when one of them is missing while the link predication focus on complete a triple when relation is missing In this study we borrow the idea of translational scoring function from TransE which the embedding textbf t of tail entity should be close to the head entitys embedding textbf r plus the relation vector textbf t if h r t holds which indicates dhrt textbf htextbf r textbf t Specifically we train our model using hingeloss function given formally as where gamma 0 is a margin hyperparameter x denotes the positive part of x TTR cup TA is the set of valid triples and Tprime is set of corrupted triples which can be formulated as Entity Classification For the task of entity classification we simple uses a fully connected layers and binary crossentropy loss BCE over sigmoid activation on the output of last layer We minimize the binary crossentropy on all labeled entities given formally as where ED is the set of entities indicates have labels C is the dimension of the output features which is equal to the number of classes yej is the label indicator of entity e for jth class and sigma x is sigmoid function sigma x frac11ex We optimize these two loss functions using minibatch stochastic gradient decent SGD over the possible textbf h textbf r textbf t with the chin rule that applying to update all parameters At each step we update the parameter textbf htau 1leftarrow textbf htau lambda nabla textbf hmathcal L where tau labels the iteration step and lambda is the learning rate In this study we evaluate our model on three real KG including two typical largescale knowledge graph Freebase BIBREF0 DBpedia BIBREF1 and a selfconstruction game knowledge graph First we adapt a dataset extracted from Freebase ie FB24K which used by BIBREF26 Then we collect extra entities and relations that from DBpedia which that they should have at least 100 mentions BIBREF7 and they could link to the entities in the FB24K by the sameAs triples Finally we build a datasets named as DBP24K 
In addition we build a game datasets from our game knowledge graph named as Game30K The statistics of datasets are listed in Table TABREF24 In evaluation we compare our method with three types of models 1 Typical Methods Three typical knowledge graph embedding methods includes TransE TransR and TransH are selected as baselines For TransE the dissimilarity measure is implemented with L1norm and relation as well as entity are replaced during negative sampling For TransR we directly use the source codes released in BIBREF9 In order for better performance the replacement of relation in negative sampling is utilized according to the suggestion of author 2 Pathbased Methods We compare our method with two typical pathbased model include PTransE and ALLPATHS BIBREF18 PTransE is the first method to model relation path in KG embedding task and ALLPATHS improve the PTransE through a dynamic programming algorithm which can incorporate all relation paths of bounded length 3 Attributeincorporated Methods Several stateofart attributeincorporated methods including RGCN BIBREF24 and KREAR BIBREF26 are used to compare with our methods on three real datasets In addition four variants of KANE which each of which correspondingly defines its specific way of computing the attribute value embedding and embedding aggregation are used as baseline in evaluation In this study we name four three variants as KANE BOWConcatenation KANE BOWAverage and KANE LSTMConcatenation KANE LSTMAverage Our method is learned with minibatch SGD As for hyperparameters we select batch size among 16 32 64 128 learning rate lambda for SGD among 01 001 0001 For a fair comparison we also set the vector dimensions of all entity and relation to the same k in 128 258 512 1024 the same dissimilarity measure l1 or l2 distance in loss function and the same number of negative examples n among 1 10 20 40 The training time on both data sets is limited to at most 400 epochs The best models are selected by a grid search and early stopping on validation sets In entity classification the aim is to predicate the type of entity For all baseline models we first get the entity embedding in different datasets through default parameter settings as in their original papers or implementationsThen Logistic Regression is used as classifier which regards the entitys embeddings as feature of classifier In evaluation we random selected 10 of training set as validation set and accuracy as evaluation metric Experimental results of entity classification on the test sets of all the datasets is shown in Table TABREF25 The results is clearly demonstrate that our proposed method significantly outperforms stateofart results on accuracy for three datasets For more indepth performance analysis we note 1 Among all baselines Pathbased methods and Attributeincorporated methods outperform three typical methods This indicates that incorporating extra information can improve the knowledge graph embedding performance 2 Four variants of KANE always outperform baseline methods The main reasons why KANE works well are two fold 1 KANE can capture highorder structural information of KGs in an efficient explicit manner and passe these information to their neighboring 2 KANE leverages rich information encoded in attribute triples These rich semantic information can further improve the performance of knowledge graph 3 The variant of KANE that use LSTM Encoder and Concatenation aggregator outperform other variants The main reasons is that LSTM encoder can distinguish the word order and 
concatenation aggregator combine all embedding of multihead attention in a higher leaver feature space which can obtain sufficient expressive power Figure FIGREF30 shows the test accuracy with increasing epoch on DBP24K and Game30K We can see that test accuracy first rapidly increased in the first ten iterations but reaches a stable stages when epoch is larger than 40 Figure FIGREF31 shows test accuracy with different embedding size and training data proportions We can note that too small embedding size or training data proportions can not generate sufficient global information In order to further analysis the embeddings learned by our method we use tSNE tool BIBREF27 to visualize the learned embedding Figure FIGREF32 shows the visualization of 256 dimensional entitys embedding on Game30K learned by KANE RGCN PransE and TransE We observe that our method can learn more discriminative entitys embedding than other other methods The purpose of knowledge graph completion is to complete a triple h r t when one of h r t is missing which is used many literature BIBREF7 Two measures are considered as our evaluation metrics 1 the mean rank of correct entities or relations Mean Rank 2 the proportion of correct entities or relations ranked in top1 Hits1 for relations or top 10 Hits10 for entities Following the setting in BIBREF7 we also adopt the two evaluation settings named raw and filter in order to avoid misleading behavior The results of entity and relation predication on FB24K are shown in the Table TABREF33 This results indicates that KANE still outperforms other baselines significantly and consistently This also verifies the necessity of modeling highorder structural and attribute information of KGs in Knowledge graph embedding models Many recent works have demonstrated the benefits of knowledge graph embedding in knowledge graph completion such as relation extraction However We argue that knowledge graph embedding method still have room for improvement First TransE and its most extensions only take direct relations between entities into consideration Second most existing knowledge graph embedding methods just leverage relation triples of KGs while ignoring a large number of attribute triples In order to overcome these limitation inspired by the recent developments of graph convolutional networks we propose a new knowledge graph embedding methods named KANE The key ideal of KANE is to aggregate all attribute triples with bias and perform embedding propagation based on relation triples when calculating the representations of given entity Empirical results on three datasets show that KANE significantly outperforms seven stateofarts methods </s>
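To make the knowledge graph completion protocol just reported more concrete, here is a small sketch of how Mean Rank and Hits@k can be computed for tail-entity prediction under the TransE-style plausibility score d(h, r, t) = ||h + r - t|| used in the hinge loss above. The NumPy code, random toy embeddings, and the omission of the raw/filter distinction are simplifying assumptions, not the authors' evaluation script.

```python
# Minimal sketch of link-prediction evaluation: rank the gold tail entity among all
# candidate entities by ||h + r - t|| and report Mean Rank and Hits@10. In the paper
# the embeddings would come from the trained model, and the "filter" setting would
# additionally discard corrupted candidates that form valid triples.
import numpy as np

def rank_of_true_tail(h, r, entity_matrix, true_tail_idx):
    scores = np.linalg.norm((h + r) - entity_matrix, axis=1)  # lower = more plausible
    order = np.argsort(scores)                                # candidate tails, best first
    return int(np.where(order == true_tail_idx)[0][0]) + 1    # 1-based rank of gold tail

def mean_rank_and_hits(test_triples, ent_emb, rel_emb, k=10):
    ranks = [rank_of_true_tail(ent_emb[h], rel_emb[r], ent_emb, t)
             for h, r, t in test_triples]
    return float(np.mean(ranks)), float(np.mean([rank <= k for rank in ranks]))

# Toy usage: 50 entities, 5 relations, 3 test triples (h, r, t).
rng = np.random.default_rng(0)
ent_emb, rel_emb = rng.normal(size=(50, 16)), rng.normal(size=(5, 16))
print(mean_rank_and_hits([(0, 1, 2), (3, 0, 4), (7, 2, 9)], ent_emb, rel_emb))
```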
<s>[INST] A Computational Approach to Automatic Prediction of Drunk Texting [/INST] The ubiquity of communication devices has made social media highly accessible The content on these media reflects a users daytoday activities This includes content created under the influence of alcohol In popular culture this has been referred to as drunktexting In this paper we introduce automatic drunktexting prediction as a computational task Given a tweet the goal is to automatically identify if it was written by a drunk user We refer to tweets written under the influence of alcohol as drunk tweets and the opposite as sober tweets A key challenge is to obtain an annotated dataset We use hashtagbased supervision so that the authors of the tweets mention if they were drunk at the time of posting a tweet We create three datasets by using different strategies that are related to the use of hashtags We then present SVMbased classifiers that use Ngram and stylistic features such as capitalisation spelling errors etc Through our experiments we make subtle points related to a the performance of our features b how our approach compares against human ability to detect drunktexting c most discriminative stylistic features and d an error analysis that points to future work To the best of our knowledge this is a first study that shows the feasibility of textbased analysis for drunktexting prediction Past studies show the relation between alcohol abuse and unsociable behaviour such as aggression BIBREF0 crime BIBREF1 suicide attempts BIBREF2 drunk driving BIBREF3 and risky sexual behaviour BIBREF4 suicide state that those responsible for assessing cases of attempted suicide should be adept at detecting alcohol misuse Thus a drunktexting prediction system can be used to identify individuals susceptible to these behaviours or for investigative purposes after an incident Drunktexting may also cause regret Mail Goggles prompts a user to solve math questions before sending an email on weekend evenings Some Android applications avoid drunktexting by blocking outgoing texts at the click of a button However to the best of our knowledge these tools require a user command to begin blocking An ongoing textbased analysis will be more helpful especially since it offers a more natural setting by monitoring stream of social media text and not explicitly seeking user input Thus automatic drunktexting prediction will improve systems aimed to avoid regrettable drunktexting To the best of our knowledge ours is the first study that does a quantitative analysis in terms of prediction of the drunk state by using textual clues Several studies have studied linguistic traits associated with emotion expression and mental health issues suicidal nature criminal status etc BIBREF5 BIBREF6 NLP techniques have been used in the past to address social safety and mental health issues BIBREF7 Drunktexting prediction is the task of classifying a text as drunk or sober For example a tweet Feeling buzzed Cant remember how the evening went must be predicted as drunk whereas Returned from work late today the traffic was bad must be predicted as sober The challenges are We use hashtagbased supervision to create our datasets similar to tasks like emotion classification BIBREF8 The tweets are downloaded using Twitter API httpsdevtwittercom We remove nonUnicode characters and eliminate tweets that contain hyperlinks and also tweets that are shorter than 6 words in length Finally hashtags used to indicate drunk or sober tweets are removed so that they provide 
labels but do not act as features The dataset is available on request As a result we create three datasets each using a different strategy for sober tweets as follows The drunk tweets for Datasets 1 and 2 are the same Figure FIGREF9 shows a wordcloud for these drunk tweets with stop words and forms of the word drunk removed created using WordItOut The size of a word indicates its frequency In addition to topical words such as bar bottle and wine the wordcloud shows sentiment words such as love or damn along with profane words Heuristics other than these hashtags could have been used for dataset creation For example timestamps were a good option to account for time at which a tweet was posted However this could not be used because users local times was not available since very few users had geolocation enabled The complete set of features is shown in Table TABREF7 There are two sets of features a Ngram features and b Stylistic features We use unigrams and bigrams as Ngram features considering both presence and count Table TABREF7 shows the complete set of stylistic features of our prediction system POS ratios are a set of features that record the proportion of each POS tag in the dataset for example the proportion of nounsadjectives etc The POS tags and named entity mentions are obtained from NLTK BIBREF9 Discourse connectors are identified based on a manually created list Spelling errors are identified using a spell checker by enchant The repeated characters feature captures a situation in which a word contains a letter that is repeated three or more times as in the case of happpy Since drunktexting is often associated with emotional expression we also incorporate a set of sentimentbased features These features include countpresence of emoticons and sentiment ratio Sentiment ratio is the proportion of positive and negative words in the tweet To determine positive and negative words we use the sentiment lexicon in mpqa To identify a more refined set of words that correspond to the two classes we also estimated 20 topics for the dataset by estimating an LDA model BIBREF10 We then consider top 10 words per topic for both classes This results in 400 LDAspecific unigrams that are then used as features Using the two sets of features we train SVM classifiers BIBREF11 We show the fivefold crossvalidation performance of our features on Datasets 1 and 2 in Section SECREF17 and on Dataset H in Section SECREF21 Section SECREF22 presents an error analysis Accuracy positivenegative precision and positivenegative recall are shown as A PPNP and PRNR respectively Drunk forms the positive class while Sober forms the negative class Table TABREF14 shows the performance for fivefold crossvalidation for Datasets 1 and 2 In case of Dataset 1 we observe that Ngram features achieve an accuracy of 855 We see that our stylistic features alone exhibit degraded performance with an accuracy of 756 in the case of Dataset 1 Table TABREF16 shows top stylistic features when trained on the two datasets Spelling errors POS ratios for nouns POSNOUN length and sentiment ratios appear in both lists in addition to LDAbased unigrams However negative recall reduces to a mere 32 This degradation implies that our features capture a subset of drunk tweets and that there are properties of drunk tweets that may be more subtle When both Ngram and stylistic features are used there is negligible improvement The accuracy for Dataset 2 increases from 779 to 781 PrecisionRecall metrics do not change significantly either The best accuracy of 
our classifier is 781 for all features and 756 for stylistic features This shows that textbased clues can indeed be used for drunktexting prediction Using heldout dataset H we evaluate how our system performs in comparison to humans Three annotators A1A3 mark each tweet in the Dataset H as drunk or sober Table TABREF19 shows a moderate agreement between our annotators for example it is 042 for A1 and A2 Table TABREF20 compares our classifier with humans Our human annotators perform the task with an average accuracy of 688 while our classifier with all features trained on Dataset 2 reaches 64 The classifier trained on Dataset 2 is better than which is trained on Dataset 1 Some categories of errors that occur are Incorrect hashtag supervision The tweet Cant believe I lost my bag last night literally had everything in Thanks god the bar man found it was marked withDrunk However this tweet is not likely to be a drunk tweet but describes a drunk episode in retrospective Our classifier predicts it as sober Seemingly sober tweets Human annotators as well as our classifier could not identify whether Will you take her on a date But really she does like you was drunk although the author of the tweet had marked it so This example also highlights the difficulty of drunktexting prediction Pragmatic difficulty The tweet National dress of Ireland is ones one vomit my family is lovely was correctly identified by our human annotators as a drunk tweet This tweet contains an element of humour and topic change but our classifier could not capture it In this paper we introduce automatic drunktexting prediction as the task of predicting a tweet as drunk or sober First we justify the need for drunktexting prediction as means of identifying risky social behavior arising out of alcohol abuse and the need to build tools that avoid privacy leaks due to drunktexting We then highlight the challenges of drunktexting prediction one of the challenges is selection of negative examples sober tweets Using hashtagbased supervision we create three datasets annotated with drunk or sober labels We then present SVMbased classifiers which use two sets of features Ngram and stylistic features Our drunk prediction system obtains a best accuracy of 781 We observe that our stylistic features add negligible value to Ngram features We use our heldout dataset to compare how our system performs against human annotators While human annotators achieve an accuracy of 688 our system reaches reasonably close and performs with a best accuracy of 64 Our analysis of the task and experimental findings make a case for drunktexting prediction as a useful and feasible NLP application </s>
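To ground the feature description in the entry above, the following is a small self-contained sketch of how a few of the stylistic features (hashtag and mention counts, repeated-character words such as happpy, and a crude spelling-error count) might be extracted from a tweet. The tiny in-code word list stands in for the enchant spell checker mentioned in the text, POS-ratio and LDA features are omitted for brevity, and all names are illustrative assumptions.

```python
# Minimal sketch of a few stylistic features from the feature table above. The small
# KNOWN_WORDS set is a stand-in for a real spell-check dictionary.
import re

KNOWN_WORDS = {"i", "am", "so", "happy", "tonight", "the", "bar", "was", "great"}

def stylistic_features(tweet):
    tokens = re.findall(r"[a-z']+", tweet.lower())
    return {
        "num_hashtags": tweet.count("#"),
        "num_mentions": tweet.count("@"),
        "num_repeated_chars": sum(bool(re.search(r"(.)\1\1", tok)) for tok in tokens),  # e.g. "happpy"
        "num_spelling_errors": sum(tok not in KNOWN_WORDS for tok in tokens),
        "num_exclamations": tweet.count("!"),
    }

print(stylistic_features("I am soooo happpy tonight!!! the bar was great #blessed @friend"))
```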
<s>[INST] Answering Complex Questions Using Open Information Extraction [/INST] Effective question answering QA systems have been a longstanding quest of AI research Structured curated KBs have been used successfully for this task BIBREF0 BIBREF1 However these KBs are expensive to build and typically domainspecific Automatically constructed open vocabulary subject predicate object style tuples have broader coverage but have only been used for simple questions where a single tuple suffices BIBREF2 BIBREF3 Our goal in this work is to develop a QA system that can perform reasoning with Open IE BIBREF4 tuples for complex multiplechoice questions that require tuples from multiple sentences Such a system can answer complex questions in resourcepoor domains where curated knowledge is unavailable Elementarylevel science exams is one such domain requiring complex reasoning BIBREF5 Due to the lack of a largescale structured KB stateoftheart systems for this task either rely on shallow reasoning with large text corpora BIBREF6 BIBREF7 or deeper structured reasoning with a small amount of automatically acquired BIBREF8 or manually curated BIBREF9 knowledge Consider the following question from an Alaska state 4th grade science test Which object in our solar system reflects light and is a satellite that orbits around one planet A Earth B Mercury C the Sun D the Moon This question is challenging for QA systems because of its complex structure and the need for multifact reasoning A natural way to answer it is by combining facts such as Moon is in the solar system Moon reflects light Moon is satellite and Moon orbits around one planet A candidate system for such reasoning and which we draw inspiration from is the TableILP system of BIBREF9 TableILP treats QA as a search for an optimal subgraph that connects terms in the question and answer via rows in a set of curated tables and solves the optimization problem using Integer Linear Programming ILP We similarly want to search for an optimal subgraph However a large automatically extracted tuple KB makes the reasoning context different on three fronts a unlike reasoning with tables chaining tuples is less important and reliable as join rules arent available b conjunctive evidence becomes paramount as unlike a long table row a single tuple is less likely to cover the entire question and c again unlike table rows tuples are noisy making combining redundant evidence essential Consequently a tableknowledge centered inference model isnt the best fit for noisy tuples To address this challenge we present a new ILPbased model of inference with tuples implemented in a reasoner called TupleInf We demonstrate that TupleInf significantly outperforms TableILP by 118 on a broad set of over 1300 science questions without requiring manually curated tables using a substantially simpler ILP formulation and generalizing well to higher grade levels The gains persist even when both solvers are provided identical knowledge This demonstrates for the first time how Open IE based QA can be extended from simple lookup questions to an effective system for complex questions We discuss two classes of related work retrievalbased web questionanswering simple reasoning with large scale KB and science questionanswering complex reasoning with small KB We first describe the tuples used by our solver We define a tuple as subject predicate objects with zero or more objects We refer to the subject predicate and objects as the fields of the tuple We use the text corpora S from BIBREF6 
aristo2016combining to build our tuple KB For each test set we use the corresponding training questions Qmathit tr to retrieve domainrelevant sentences from S Specifically for each multiplechoice question qA in Qmathit tr and each choice a in A we use all nonstopword tokens in q and a as an ElasticSearch query against S We take the top 200 hits run Open IE v4 and aggregate the resulting tuples over all a in A and over all questions in Qmathit tr to create the tuple KB T Given a multiplechoice question qa with question text q and answer choices A lbrace airbrace we select the most relevant tuples from T and S as follows Selecting from Tuple KB We use an inverted index to find the 1000 tuples that have the most overlapping tokens with question tokens tokqa We also filter out any tuples that overlap only with tokq as they do not support any answer We compute the normalized TFIDF score treating the question q as a query and each tuple t as a document
$\mathit{tf}(x, q) = 1$ if $x \in q$; $\quad \mathit{idf}(x) = \log(1 + N / n_x)$
$\mathit{tfidf}(t, q) = \sum_{x \in t \cap q} \mathit{idf}(x)$
where N is the number of tuples in the KB and nx are the number of tuples containing x We normalize the tfidf score by the number of tokens in t and q We finally take the 50 topscoring tuples Tqa Onthefly tuples from text To handle questions from new domains not covered by the training set we extract additional tuples on the fly from S similar to BIBREF17 knowlhunting We perform the same ElasticSearch query described earlier for building T We ignore sentences that cover none or all answer choices as they are not discriminative We also ignore long sentences 300 characters and sentences with negation as they tend to lead to noisy inference We then run Open IE on these sentences and rescore the resulting tuples using the Jaccard score due to the lossy nature of Open IE and finally take the 50 topscoring tuples Tprime qa Similar to TableILP we view the QA task as searching for a graph that best connects the terms in the question qterms with an answer choice via the knowledge see Figure 1 for a simple illustrative example Unlike standard alignment models used for tasks such as Recognizing Textual Entailment RTE BIBREF18 however we must score alignments between a set Tqa cup Tprime qa of structured tuples and a potentially multisentence multiplechoice question qa The qterms answer choices and tuples fields form the set of possible vertices mathcal V of the support graph Edges connecting qterms to tuple fields and tuple fields to answer choices form the set of possible edges mathcal E The support graph GV E is a subgraph of mathcal Gmathcal V mathcal E where V and E denote active nodes and edges resp We define the desired behavior of an optimal support graph via an ILP model as follows Similar to TableILP we score the support graph based on the weight of the active nodes and edges Each edge et h is weighted based on a wordoverlap score While TableILP used WordNet BIBREF19 paths to compute the weight this measure results in unreliable scores when faced with longer phrases found in Open IE tuples Compared to a curated KB it is easy to find Open IE tuples that match irrelevant parts of the questions To mitigate this issue we improve the scoring of qterms in our ILP objective to focus on important terms Since the later terms in a question tend to provide the most critical information we scale qterm coefficients based on their position Also qterms that appear in almost all of the selected tuples tend not to be discriminative as any tuple would support such a qterm Hence we scale the coefficients by the inverse frequency of the tokens in the selected tuples Since Open IE tuples do not come with schema and join rules we can define a substantially simpler model compared to TableILP This reduces the reasoning capability but also eliminates the reliance on handauthored join rules and regular expressions used in TableILP We discovered see empirical evaluation that this simple model can achieve the same score as TableILP on the Regents test target test set used by TableILP and generalizes better to different grade levels We define active vertices and edges using ILP constraints an active edge must connect two active vertices and an active vertex must have at least one active edge To avoid positive edge coefficients in the objective function resulting in spurious edges in the support graph we limit the number of active edges from an active tuple question choice tuple fields and qterms first group of constraints in Table 1 Our model is also capable of using multiple tuples to support different parts of the 
question as illustrated in Figure 1 To avoid spurious tuples that only connect with the question or choice or ignore the relation being expressed in the tuple we add constraints that require each tuple to connect a qterm with an answer choice second group of constraints in Table 1 We also define new constraints based on the Open IE tuple structure Since an Open IE tuple expresses a fact about the tuples subject we require the subject to be active in the support graph To avoid issues such as Planet orbit Sun matching the sample question in the introduction Which object ldots orbits around a planet we also add an ordering constraint third group in Table 1 Its worth mentioning that TupleInf only combines parallel evidence ie each tuple must connect words in the question to the answer choice For reliable multihop reasoning using OpenIE tuples we can add intertuple connections to the support graph search controlled by a small number of rules over the OpenIE predicates Learning such rules for the Science domain is an open problem and potential avenue of future work Comparing our method with two stateoftheart systems for 4th and 8th grade science exams we demonstrate that a TupleInf with only automatically extracted tuples significantly outperforms TableILP with its original curated knowledge as well as with additional tuples and b TupleInfs complementary approach to IR leads to an improved ensemble Numbers in bold indicate statistical significance based on the Binomial exact test BIBREF20 at p005 We consider two question sets 1 4th Grade set 1220 train 1304 test is a 10x larger superset of the NY Regents questions BIBREF6 and includes professionally written licensed questions 2 8th Grade set 293 train 282 test contains 8th grade questions from various states We consider two knowledge sources The Sentence corpus S consists of domaintargeted 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016combining This corpus is used by the IR solver and also used to create the tuple KB T and onthefly tuples Tprime qa Additionally TableILP uses sim 70 Curated tables C designed for 4th grade NY Regents exams We compare TupleInf with two stateoftheart baselines IR is a simple yet powerful informationretrieval baseline BIBREF6 that selects the answer option with the best matching sentence in a corpus TableILP is the stateoftheart structured inference baseline BIBREF9 developed for science questions Table 2 shows that TupleInf with no curated knowledge outperforms TableILP on both question sets by more than 11 The lower half of the table shows that even when both solvers are given the same knowledge CT the improved selection and simplified model of TupleInf results in a statistically significant improvement Our simple model TupleInfC T also achieves scores comparable to TableILP on the latters target Regents questions 614 vs TableILPs reported 615 without any specialized rules Table 3 shows that while TupleInf achieves similar scores as the IR solver the approaches are complementary structured lossy knowledge reasoning vs lossless sentence retrieval The two solvers in fact differ on 473 of the training questions To exploit this complementarity we train an ensemble system BIBREF6 which as shown in the table provides a substantial boost over the individual solvers Further IR TupleInf is consistently better than IR TableILP Finally in combination with IR and the statistical association based PMI solver that scores 541 by itself of BIBREF6 aristo2016combining TupleInf achieves a 
score of 582 as compared to TableILPs ensemble score of 567 on the 4th grade set again attesting to TupleInfs strength We describe four classes of failures that we observed and the future work they suggest Missing Important Words Which material will spread out to completely fill a larger container Aair Bice Csand Dwater In this question we have tuples that support water will spread out and fill a larger container but miss the critical word completely An approach capable of detecting salient question words could help avoid that Lossy IE Which action is the best method to separate a mixture of salt and water The IR solver correctly answers this question by using the sentence Separate the salt and water mixture by evaporating the water However TupleInf is not able to answer this question as Open IE is unable to extract tuples from this imperative sentence While the additional structure from Open IE is useful for more robust matching converting sentences to Open IE tuples may lose important bits of information Bad Alignment Which of the following gases is necessary for humans to breathe in order to liveA OxygenB Carbon dioxideC HeliumD Water vapor TupleInf returns Carbon dioxide as the answer because of the tuple humans breathe out carbon dioxide The chunk to breathe in the question has a high alignment score to the breathe out relation in the tuple even though they have completely different meanings Improving the phrase alignment can mitigate this issue Out of scope Deer live in forest for shelter If the forest was cut down which situation would most likely happen Such questions that require modeling a state presented in the question and reasoning over the state are out of scope of our solver We presented a new QA system TupleInf that can reason over a large potentially noisy tuple KB to answer complex questions Our results show that TupleInf is a new stateoftheart structured solver for elementarylevel science that does not rely on curated knowledge and generalizes to higher grades Errors due to lossy IE and misalignments suggest future work in incorporating context and distributional measures To build the ILP model we first need to get the questions terms qterm from the question by chunking the question using an inhouse chunker based on the postagger from FACTORIE We use the SCIP ILP optimization engine BIBREF21 to optimize our ILP model To get the score for each answer choice ai we force the active variable for that choice xai to be one and use the objective function value of the ILP model as the score For evaluations we use a 2core 25 GHz Amazon EC2 linux machine with 16 GB RAM To evaluate TableILP and TupleInf on curated tables and tuples we converted them into the expected format of each solver as follows For each question we select the 7 best matching tables using the tfidf score of the table wrt the question tokens and top 20 rows from each table using the Jaccard similarity of the row with the question same as BIBREF9 tableilp2016 We then convert the table rows into the tuple structure using the relations defined by TableILP For every pair of cells connected by a relation we create a tuple with the two cells as the subject and primary object with the relation as the predicate The other cells of the table are used as additional objects to provide context to the solver We pick topscoring 50 tuples using the Jaccard score We create an additional table in TableILP with all the tuples in T Since TableILP uses fixedlength subject predicate object triples we need to map tuples with multiple 
objects to this format For each object Oi in the input Open IE tuple S P O1 O2 ldots we add a triple S P Oi to this table </s>
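As a concrete illustration of the appendix steps above, the sketch below maps an Open IE tuple with multiple objects to fixed-length (subject, predicate, object) triples and ranks candidate tuples by Jaccard overlap with the question, mirroring the top-50 tuple selection described. The data structures, function names, and the top-k helper are illustrative assumptions, not the authors' actual code.

def tuple_to_triples(openie_tuple):
    """(S, P, O1, O2, ...) -> [(S, P, O1), (S, P, O2), ...]."""
    subject, predicate, *objects = openie_tuple
    return [(subject, predicate, obj) for obj in objects]

def jaccard(a_tokens, b_tokens):
    """Jaccard similarity between two token collections, used to rank rows/tuples."""
    a, b = set(a_tokens), set(b_tokens)
    return len(a & b) / len(a | b) if a | b else 0.0

def top_k_tuples(question, candidate_tuples, k=50):
    """Keep the k tuples whose tokens best overlap the question tokens."""
    q_tokens = question.lower().split()
    scored = sorted(
        candidate_tuples,
        key=lambda t: jaccard(q_tokens, " ".join(t).lower().split()),
        reverse=True,
    )
    return scored[:k]

if __name__ == "__main__":
    t = ("the sun", "is the source of", "energy", "light", "heat")
    print(tuple_to_triples(t))
    print(top_k_tuples("Which object orbits around a planet", [t], k=1))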
<s>[INST] An Unsupervised Word Sense Disambiguation System for Under-Resourced Languages [/INST] Word sense disambiguation WSD is a natural language processing task of identifying the particular word senses of polysemous words used in a sentence Recently a lot of attention was paid to the problem of WSD for the Russian language BIBREF0 BIBREF1 BIBREF2 This problem is especially difficult because of both linguistic issues namely the rich morphology of Russian and other Slavic languages in general and technical challenges like the lack of software and language resources required for addressing the problem To address these issues we present Watasense an unsupervised system for word sense disambiguation We describe its architecture and conduct an evaluation on three datasets for Russian The choice of an unsupervised system is motivated by the absence of resources that would enable a supervised system for underresourced languages Watasense is not strictly tied to the Russian language and can be applied to any language for which a tokenizer partofspeech tagger lemmatizer and a sense inventory are available The rest of the paper is organized as follows Section 2 reviews related work Section 3 presents the Watasense word sense disambiguation system presents its architecture and describes the unsupervised word sense disambiguation methods bundled with it Section 4 evaluates the system on a gold standard for Russian Section 5 concludes with final remarks Although the problem of WSD has been addressed in many SemEval campaigns BIBREF3 BIBREF4 BIBREF5 we focus here on word sense disambiguation systems rather than on the research methodologies Among the freely available systems IMS It Makes Sense is a supervised WSD system designed initially for the English language BIBREF6 The system uses a support vector machine classifier to infer the particular sense of a word in the sentence given its contextual sentencelevel features Pywsd is an implementation of several popular WSD algorithms implemented in a library for the Python programming language It offers both the classical Lesk algorithm for WSD and pathbased algorithms that heavily use the WordNet and similar lexical ontologies DKPro WSD BIBREF7 is a generalpurpose framework for WSD that uses a lexical ontology as the sense inventory and offers the variety of WordNetbased algorithms Babelfy BIBREF8 is a WSD system that uses BabelNet a largescale multilingual lexical ontology available for most natural languages Due to the broad coverage of BabelNet Babelfy offers entity linking as part of the WSD functionality Panchenko17emnlp present an unsupervised WSD system that is also knowledgefree its sense inventory is induced based on the JoBimText framework and disambiguation is performed by computing the semantic similarity between the context and the candidate senses BIBREF9 Pelevina16 proposed a similar approach to WSD but based on dense vector representations word embeddings called SenseGram Similarly to SenseGram our WSD system is based on averaging of word embeddings on the basis of an automatically induced sense inventory A crucial difference however is that we induce our sense inventory from synonymy dictionaries and not distributional word vectors While this requires more manually created resources a potential advantage of our approach is that the resulting inventory contains less noise Watasense is implemented in the Python programming language using the scikitlearn BIBREF10 and Gensim BIBREF11 libraries Watasense offers a Web interface Figure 
FIGREF2 a commandline tool and an application programming interface API for deployment within other applications A sentence is represented as a list of spans A span is a quadruple INLINEFORM0 where INLINEFORM1 is the word or the token INLINEFORM2 is the part of speech tag INLINEFORM3 is the lemma INLINEFORM4 is the position of the word in the sentence These data are provided by tokenizer partofspeech tagger and lemmatizer that are specific for the given language The WSD results are represented as a map of spans to the corresponding word sense identifiers The sense inventory is a list of synsets A synset is represented by three bag of words the synonyms the hypernyms and the union of two former the bag Due to the performance reasons on initialization an inverted index is constructed to map a word to the set of synsets it is included into Each word sense disambiguation method extends the BaseWSD class This class provides the end user with a generic interface for WSD and also encapsulates common routines for data preprocessing The inherited classes like SparseWSD and DenseWSD should implement the disambiguateword method that disambiguates the given word in the given sentence Both classes use the bag representation of synsets on the initialization As the result for WSD not just the synonyms are used but also the hypernyms corresponding to the synsets The UML class diagram is presented in Figure FIGREF4 Watasense supports two sources of word vectors it can either read the word vector dataset in the binary Word2Vec format or use Word2VecPyro4 a generalpurpose word vector server The use of a remote word vector server is recommended due to the reduction of memory footprint per each Watasense process FIGREF2 shows the Web interface of Watasense It is composed of two primary activities The first is the text input and the method selection FIGREF2 The second is the display of the disambiguation results with part of speech highlighting FIGREF7 Those words with resolved polysemy are underlined the tooltips with the details are raised on hover We use two different unsupervised approaches for word sense disambiguation The first called sparse model uses a straightforward sparse vector space model as widely used in Information Retrieval to represent contexts and synsets The second called dense model represents synsets and contexts in a dense lowdimensional space by averaging word embeddings In the vector space model approach we follow the sparse contextbased disambiguated method BIBREF12 BIBREF13 For estimating the sense of the word INLINEFORM0 in a sentence we search for such a synset INLINEFORM1 that maximizes the cosine similarity to the sentence vector DISPLAYFORM0 where INLINEFORM0 is the set of words forming the synset INLINEFORM1 is the set of words forming the sentence On initialization the synsets represented in the sense inventory are transformed into the INLINEFORM2 weighted wordsynset sparse matrix efficiently represented in the memory using the compressed sparse row format Given a sentence a similar transformation is done to obtain the sparse vector representation of the sentence in the same space as the wordsynset matrix Then for each word to disambiguate we retrieve the synset containing this word that maximizes the cosine similarity between the sparse sentence vector and the sparse synset vector Let INLINEFORM3 be the maximal number of synsets containing a word and INLINEFORM4 be the maximal size of a synset Therefore disambiguation of the whole sentence INLINEFORM5 requires INLINEFORM6 
operations using the efficient sparse matrix representation In the synset embeddings model approach we follow SenseGram BIBREF14 and apply it to the synsets induced from a graph of synonyms We transform every synset into its dense vector representation by averaging the word embeddings corresponding to each constituent word DISPLAYFORM0 where INLINEFORM0 denotes the word embedding of INLINEFORM1 We do the same transformation for the sentence vectors Then given a word INLINEFORM2 a sentence INLINEFORM3 we find the synset INLINEFORM4 that maximizes the cosine similarity to the sentence DISPLAYFORM0 On initialization we precompute the dense synset vectors by averaging the corresponding word embeddings Given a sentence we similarly compute the dense sentence vector by averaging the vectors of the words belonging to nonauxiliary parts of speech ie nouns adjectives adverbs verbs etc Then given a word to disambiguate we retrieve the synset that maximizes the cosine similarity between the dense sentence vector and the dense synset vector Thus given the number of dimensions INLINEFORM0 disambiguation of the whole sentence INLINEFORM1 requires INLINEFORM2 operations We conduct our experiments using the evaluation methodology of SemEval 2010 Task 14 Word Sense Induction Disambiguation BIBREF5 In the gold standard each word is provided with a set of instances ie the sentences containing the word Each instance is manually annotated with the single sense identifier according to a predefined sense inventory Each participating system estimates the sense labels for these ambiguous words which can be viewed as a clustering of instances according to sense labels The systems clustering is compared to the goldstandard clustering for evaluation The original SemEval 2010 Task 14 used the VMeasure external clustering measure BIBREF5 However this measure is maximized by clustering each sentence into his own distinct cluster ie a dummy singleton baseline This is achieved by the system deciding that every ambiguous word in every sentence corresponds to a different word sense To cope with this issue we follow a similar study BIBREF1 and use instead of the adjusted Rand index ARI proposed by Hubert85 as an evaluation measure In order to provide the overall value of ARI we follow the addition approach used in BIBREF1 Since the quality measure is computed for each lemma individually the total value is a weighted sum namely DISPLAYFORM0 where INLINEFORM0 is the lemma INLINEFORM1 is the set of the instances for the lemma INLINEFORM2 INLINEFORM3 is the adjusted Rand index computed for the lemma INLINEFORM4 Thus the contribution of each lemma to the total score is proportional to the number of instances of this lemma We evaluate the word sense disambiguation methods in Watasense against three baselines an unsupervised approach for learning multiprototype word embeddings called AdaGram BIBREF15 same sense for all the instances per lemma One and one sense per instance Singletons The AdaGram model is trained on the combination of RuWac LibRu and the Russian Wikipedia with the overall vocabulary size of 2 billion tokens BIBREF1 As the goldstandard dataset we use the WSD training dataset for Russian created during RUSSE2018 A Shared Task on Word Sense Induction and Disambiguation for the Russian Language BIBREF16 The dataset has 31 words covered by INLINEFORM0 instances in the btsrnc subset and 5 words covered by 439 instances in the wikiwiki subset The following different sense inventories have been used during the evaluation 
leftmargin4mm Watlink a word sense network constructed automatically It uses the synsets induced in an unsupervised way by the WatsetCWnolog MCL method BIBREF2 and the semantic relations from such dictionaries as Wiktionary referred as Joint INLINEFORM0 Exp INLINEFORM1 SWN in Ustalov17dialogue This is the only automatically built inventory we use in the evaluation RuThes a largescale lexical ontology for Russian created by a group of expert lexicographers BIBREF17 RuWordNet a semiautomatic conversion of the RuThes lexical ontology into a WordNetlike structure BIBREF18 Since the Dense model requires word embeddings we used the 500dimensional word vectors from the Russian Distributional Thesaurus BIBREF19 These vectors are obtained using the Skipgram approach trained on the librusec text corpus We compare the evaluation results obtained for the Sparse and Dense approaches with three baselines the AdaGram model AdaGram the same sense for all the instances per lemma One and one sense per instance Singletons The evaluation results are presented in Table TABREF25 The columns btsrnc and wikiwiki represent the overall value of ARI according to Equation EQREF15 The column Avg consists of the weighted average of the datasets wrt the number of instances We observe that the SenseGrambased approach for word sense disambiguation yields substantially better results in every case Table TABREF25 The primary reason for that is the implicit handling of similar words due to the averaging of dense word vectors for semantically related words Thus we recommend using the dense approach in further studies Although the AdaGram approach trained on a large text corpus showed better results according to the weighted average this result does not transfer to languages with less available corpus size In this paper we presented Watasense an open source unsupervised word sense disambiguation system that is parameterized only by a word sense inventory It supports both sparse and dense sense representations We were able to show that the dense approach substantially boosts the performance of the sparse approach on three different sense inventories for Russian We recommend using the dense approach in further studies due to its smoothing capabilities that reduce sparseness In further studies we will look at the problem of phrase neighbors that influence the sentence vector representations Finally we would like to emphasize the fact that Watasense has a simple API for integrating different algorithms for WSD At the same time it requires only a basic set of language processing tools to be available tokenizer a partofspeech tagger lemmatizer and a sense inventory which means that lowresourced language can benefit of its usage We acknowledge the support of the Deutsche Forschungsgemeinschaft DFG under the project Joining Ontologies and Semantics Induced from Text JOINT the RFBR under the projects no 163700203 mola and no 163700354 mola and the RFH under the project no 160412019 The research was supported by the Ministry of Education and Science of the Russian Federation Agreement no 02A03210006 The calculations were carried out using the supercomputer Uran at the Krasovskii Institute of Mathematics and Mechanics </s>
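To make the dense model concrete, the following sketch averages word embeddings for a synset and for the sentence and picks the synset with the highest cosine similarity to the sentence vector, as described in Section 3. The toy embeddings, the synset inventory format, and the function names are assumptions for illustration; in Watasense the vectors would come from a pretrained model such as the Russian Distributional Thesaurus or a word vector server.

import numpy as np

def avg_vector(words, embeddings, dim=100):
    # Dense representation: average of the available word embeddings.
    vectors = [embeddings[w] for w in words if w in embeddings]
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def disambiguate(word, sentence_lemmas, inventory, embeddings, dim=100):
    """Return the id of the synset containing `word` that best fits the sentence."""
    sent_vec = avg_vector(sentence_lemmas, embeddings, dim)
    candidates = [(sid, bag) for sid, bag in inventory.items() if word in bag]
    if not candidates:
        return None
    return max(candidates,
               key=lambda c: cosine(avg_vector(c[1], embeddings, dim), sent_vec))[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emb = {w: rng.normal(size=100) for w in
           ["bank", "river", "money", "deposit", "shore", "water"]}
    inventory = {"bank#finance": ["bank", "money", "deposit"],
                 "bank#river": ["bank", "shore", "river"]}
    print(disambiguate("bank", ["water", "river", "bank"], inventory, emb))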
<s>[INST] Quasar: Datasets for Question Answering by Search and Reading [/INST] Factoid Question Answering QA aims to extract answers from an underlying knowledge source to information seeking questions posed in natural language Depending on the knowledge source available there are two main approaches for factoid QA Structured sources including Knowledge Bases KBs such as Freebase BIBREF1 are easier to process automatically since the information is organized according to a fixed schema In this case the question is parsed into a logical form in order to query against the KB However even the largest KBs are often incomplete BIBREF2 BIBREF3 and hence can only answer a limited subset of all possible factoid questions For this reason the focus is now shifting towards unstructured sources such as Wikipedia articles which hold a vast quantity of information in textual form and in principle can be used to answer a much larger collection of questions Extracting the correct answer from unstructured text is however challenging and typical QA pipelines consist of the following two components 1 searching for the passages relevant to the given question and 2 reading the retrieved text in order to select a span of text which best answers the question BIBREF4 BIBREF5 Like most other language technologies the current research focus for both these steps is firmly on machine learning based approaches for which performance improves with the amount of data available Machine reading performance in particular has been significantly boosted in the last few years with the introduction of largescale reading comprehension datasets such as CNN DailyMail BIBREF6 and Squad BIBREF7 Stateoftheart systems for these datasets BIBREF8 BIBREF9 focus solely on step 2 above in effect assuming the relevant passage of text is already known In this paper we introduce two new datasets for QUestion Answering by Search And Reading Quasar The datasets each consist of factoid questionanswer pairs and a corresponding large background corpus to facilitate research into the combined problem of retrieval and comprehension QuasarS consists of 37362 clozestyle questions constructed from definitions of software entities available on the popular website Stack Overflow The answer to each question is restricted to be another software entity from an output vocabulary of 4874 entities QuasarT consists of 43013 trivia questions collected from various internet sources by a trivia enthusiast The answers to these questions are freeform spans of text though most are noun phrases While production quality QA systems may have access to the entire world wide web as a knowledge source for Quasar we restrict our search to specific background corpora This is necessary to avoid uninteresting solutions which directly extract answers from the sources from which the questions were constructed For QuasarS we construct the knowledge source by collecting top 50 threads tagged with each entity in the dataset on the Stack Overflow website For QuasarT we use ClueWeb09 BIBREF0 which contains about 1 billion web pages collected between January and February 2009 Figure 1 shows some examples Unlike existing reading comprehension tasks the Quasar tasks go beyond the ability to only understand a given passage and require the ability to answer questions given large corpora Prior datasets such as those used in BIBREF4 are constructed by first selecting a passage and then constructing questions about that passage This design intentionally ignores some of the subproblems 
required to answer opendomain questions from corpora namely searching for passages that may contain candidate answers and aggregating informationresolving conflicts between candidates from many passages The purpose of Quasar is to allow research into these subproblems and in particular whether the search step can benefit from integration and joint training with downstream reading systems Additionally QuasarS has the interesting feature of being a closeddomain dataset about computer programming and successful approaches to it must develop domainexpertise and a deep understanding of the background corpus To our knowledge it is one of the largest closeddomain QA datasets available QuasarT on the other hand consists of opendomain questions based on trivia which refers to bits of information often of little importance Unlike previous opendomain systems which rely heavily on the redundancy of information on the web to correctly answer questions we hypothesize that QuasarT requires a deeper reading of documents to answer correctly We evaluate Quasar against human testers as well as several baselines ranging from nave heuristics to stateoftheart machine readers The best performing baselines achieve 336 and 285 on QuasarS and QuasarT while human performance is 50 and 606 respectively For the automatic systems we see an interesting tension between searching and reading accuracies retrieving more documents in the search phase leads to a higher coverage of answers but makes the comprehension task more difficult We also collect annotations on a subset of the development set questions to allow researchers to analyze the categories in which their system performs well or falls short We plan to release these annotations along with the datasets and our retrieved documents for each question Each dataset consists of a collection of records with one QA problem per record For each record we include some question text a context document relevant to the question a set of candidate solutions and the correct solution In this section we describe how each of these fields was generated for each Quasar variant The software question set was built from the definitional excerpt entry for each tag entity on StackOverflow For example the excerpt for the java tag is Java is a generalpurpose objectoriented programming language designed to be used in conjunction with the Java Virtual Machine JVM Not every excerpt includes the tag being defined which we will call the head tag so we prepend the head tag to the front of the string to guarantee relevant results later on in the pipeline We then completed preprocessing of the software questions by downcasing and tokenizing the string using a custom tokenizer compatible with special characters in software terms eg net c Each preprocessed excerpt was then converted to a series of cloze questions using a simple heuristic first searching the string for mentions of other entities then repleacing each mention in turn with a placeholder string Figure 2 This heuristic is noisy since the software domain often overloads existing English words eg can may refer to a Controller Area Network bus swap may refer to the temporary storage of inactive pages of memory on disk using may refer to a namespacing keyword To improve precision we scored each cloze based on the relative incidence of the term in an English corpus versus in our StackOverflow one and discarded all clozes scoring below a threshold This means our dataset does not include any cloze questions for terms which are common in English 
such as can swap and using but also image service and packet A more sophisticated entity recognition system could make recall improvements here The trivia question set was built from a collection of just under 54000 trivia questions collected by Reddit user 007craft and released in December 2015 The raw dataset was noisy having been scraped from multiple sources with variable attention to detail in formatting spelling and accuracy We filtered the raw questions to remove unparseable entries as well as any TrueFalse or multiple choice questions for a total of 52000 freeresponse style questions remaining The questions range in difficulty from straightforward Who recorded the song Rocket Man Elton John to difficult What was Robin Williams paid for Disneys Aladdin in 1982 Scale 485 day Picasso Painting to debatable According to Earth Medicine whats the birth totem for march The Falcon The context document for each record consists of a list of ranked and scored pseudodocuments relevant to the question Context documents for each query were generated in a twophase fashion first collecting a large pool of semirelevant text then filling a temporary index with short or long pseudodocuments from the pool and finally selecting a set of N topranking pseudodocuments 100 short or 20 long from the temporary index For QuasarS the pool of text for each question was composed of 50 questionandanswer threads scraped from httpstackoverflowcom StackOverflow keeps a running tally of the topvoted questions for each tag in their knowledge base we used Scrapy to pull the top 50 question posts for each tag along with any answerpost responses and metadata tags authorship comments From each thread we pulled all text not marked as code and split it into sentences using the Stanford NLP sentence segmenter truncating sentences to 2048 characters Each sentence was marked with a thread identifier a post identifier and the tags for the thread Long pseudodocuments were either the full post in the case of question posts or the full post and its head question in the case of answer posts comments included Short pseudodocuments were individual sentences To build the context documents for QuasarS the pseudodocuments for the entire corpus were loaded into a diskbased lucene index each annotated with its thread ID and the tags for the thread This index was queried for each cloze using the following lucene syntax noitemsep SHOULDPHRASEquestion text SHOULDBOOLEANquestion text MUSTtagsheadtag where question text refers to the sequence of tokens in the cloze question with the placeholder removed The first SHOULD term indicates that an exact phrase match to the question text should score highly The second SHOULD term indicates that any partial match to tokens in the question text should also score highly roughly in proportion to the number of terms matched The MUST term indicates that only pseudodocuments annotated with the head tag of the cloze should be considered The top 100N pseudodocuments were retrieved and the top N unique pseudodocuments were added to the context document along with their lucene retrieval score Any questions showing zero results for this query were discarded For QuasarT the pool of text for each question was composed of 100 HTML documents retrieved from ClueWeb09 Each questionanswer pair was converted to a combine query in the Indri query language to comply with the ClueWeb09 batch query service using simple regular expression substitution rules to remove sg or replace s g illegal characters Any questions 
generating syntax errors after this step were discarded We then extracted the plaintext from each HTML document using Jericho For long pseudodocuments we used the full page text truncated to 2048 characters For short pseudodocuments we used individual sentences as extracted by the Stanford NLP sentence segmenter truncated to 200 characters To build the context documents for the trivia set the pseudodocuments from the pool were collected into an inmemory lucene index and queried using the question text only the answer text was not included for this step The structure of the query was identical to the query for QuasarS without the head tag filter noitemsep SHOULDPHRASEquestion text SHOULDBOOLEANquestion text The top 100N pseudodocuments were retrieved and the top N unique pseudodocuments were added to the context document along with their lucene retrieval score Any questions showing zero results for this query were discarded The list of candidate solutions provided with each record is guaranteed to contain the correct answer to the question QuasarS used a closed vocabulary of 4874 tags as its candidate list Since the questions in QuasarT are in freeresponse format we constructed a separate list of candidate solutions for each question Since most of the correct answers were noun phrases we took each sequence of NN tagged tokens in the context document as identified by the Stanford NLP Maxent POS tagger as the candidate list for each record If this list did not include the correct answer it was added to the list Once context documents had been built we extracted the subset of questions where the answer string excluded from the query for the twophase search was nonetheless present in the context document This subset allows us to evaluate the performance of the reading system independently from the search system while the full set allows us to evaluate the performance of Quasar as a whole We also split the full set into training validation and test sets The final size of each data subset after all discards is listed in Table 1 Evaluation is straightforward on QuasarS since each answer comes from a fixed output vocabulary of entities and we report the average accuracy of predictions as the evaluation metric For QuasarT the answers may be free form spans of text and the same answer may be expressed in different terms which makes evaluation difficult Here we pick the two metrics from BIBREF7 BIBREF19 In preprocessing the answer we remove punctuation whitespace and definite and indefinite articles from the strings Then exact match measures whether the two strings after preprocessing are equal or not For F1 match we first construct a bag of tokens for each string followed be preprocessing of each token and measure the F1 score of the overlap between the two bags of tokens These metrics are far from perfect for QuasarT for example our human testers were penalized for entering 0 as answer instead of zero However a comparison between systems may still be meaningful To put the difficulty of the introduced datasets into perspective we evaluated human performance on answering the questions For each dataset we recruited one domain expert a developer with several years of programming experience for QuasarS and an avid trivia enthusiast for QuasarT and 13 nonexperts Each volunteer was presented with randomly selected questions from the development set and asked to answer them via an online app The experts were evaluated in a closedbook setting ie they did not have access to any external resources The 
nonexperts were evaluated in an openbook setting where they had access to a search engine over the short pseudodocuments extracted for each dataset as described in Section Context Retrieval We decided to use short pseudodocuments for this exercise to reduce the burden of reading on the volunteers though we note that the long pseudodocuments have greater coverage of answers We also asked the volunteers to provide annotations to categorize the type of each question they were asked and a label for whether the question was ambiguous For QuasarS the annotators were asked to mark the relation between the head entity from whose definition the cloze was constructed and the answer entity For QuasarT the annotators were asked to mark the genre of the question eg Arts Literature and the entity type of the answer eg Person When multiple annotators marked the same question differently we took the majority vote when possible and discarded ties In total we collected 226 relation annotations for 136 questions in QuasarS out of which 27 were discarded due to conflicting ties leaving a total of 109 annotated questions For QuasarT we collected annotations for a total of 144 questions out of which 12 we marked as ambiguous In the remaining 132 a total of 214 genres were annotated a question could be annotated with multiple genres while 10 questions had conflicting entitytype annotations which we discarded leaving 122 total entitytype annotations Figure 3 shows the distribution of these annotations We evaluate several baselines on Quasar ranging from simple heuristics to deep neural networks Some predict a single token entity as the answer while others predict a span of tokens MFi Maximum Frequency counts the number of occurrences of each candidate answer in the retrieved context and returns the one with maximum frequency MFe is the same as MFi except it excludes the candidates present in the query WD Word Distance measures the sum of distances from a candidate to other nonstopword tokens in the passage which are also present in the query For the clozestyle QuasarS the distances are measured by first aligning the query placeholder to the candidate in the passage and then measuring the offsets between other tokens in the query and their mentions in the passage The maximum distance for any token is capped at a specified threshold which is tuned on the validation set For QuasarT we also test the Sliding Window SW and Sliding Window Distance SWD baselines proposed in BIBREF13 The scores were computed for the list of candidate solutions described in Section Context Retrieval For QuasarS since the answers come from a fixed vocabulary of entities we test language model baselines which predict the most likely entity to appear in a given context We train three ngram baselines using the SRILM toolkit BIBREF21 for n345 on the entire corpus of all Stack Overflow posts The output predictions are restricted to the output vocabulary of entities We also train a bidirectional Recurrent Neural Network RNN language model based on GRU units This model encodes both the left and right context of an entity using forward and backward GRUs and then concatenates the final states from both to predict the entity through a softmax layer Training is performed on the entire corpus of Stack Overflow posts with the loss computed only over mentions of entities in the output vocabulary This approach benefits from looking at both sides of the cloze in a query to predict the entity as compared to the singlesided ngram baselines Reading 
comprehension models are trained to extract the answer from the given passage We test two recent architectures on Quasar using publicly available code from the authors The GA Reader BIBREF8 is a multilayer neural network which extracts a single token from the passage to answer a given query At the time of writing it had stateoftheart performance on several clozestyle datasets for QA For QuasarS we train and test GA on all instances for which the correct answer is found within the retrieved context For QuasarT we train and test GA on all instances where the answer is in the context and is a single token The BiDAF model BIBREF9 is also a multilayer neural network which predicts a span of text from the passage as the answer to a given query At the time of writing it had stateoftheart performance among published models on the Squad dataset For QuasarT we train and test BiDAF on all instances where the answer is in the retrieved context Several baselines rely on the retrieved context to extract the answer to a question For these we refer to the fraction of instances for which the correct answer is present in the context as Search Accuracy The performance of the baseline among these instances is referred to as the Reading Accuracy and the overall performance which is a product of the two is referred to as the Overall Accuracy In Figure 4 we compare how these three vary as the number of context documents is varied Naturally the search accuracy increases as the context size increases however at the same time reading performance decreases since the task of extracting the answer becomes harder for longer documents Hence simply retrieving more documents is not sufficient finding the few most relevant ones will allow the reader to work best In Tables 2 and 3 we compare all baselines when the context size is tuned to maximize the overall accuracy on the validation set For QuasarS the best performing baseline is the BiRNN language model which achieves 336 accuracy The GA model achieves 483 accuracy on the set of instances for which the answer is in context however a search accuracy of only 65 means its overall performance is lower This can improve with improved retrieval For QuasarT both the neural models significantly outperform the heuristic models with BiDAF getting the highest F1 score of 285 The best performing baselines however lag behind human performance by 164 and 321 for QuasarS and QuasarT respectively indicating the strong potential for improvement Interestingly for human performance we observe that nonexperts are able to match or beat the performance of experts when given access to the background corpus for searching the answers We also emphasize that the human performance is limited by either the knowledge of the experts or the usefulness of the search engine for nonexperts it should not be viewed as an upper bound for automatic systems which can potentially use the entire background corpus Further analysis of the human and baseline performance in each category of annotated questions is provided in Appendix Performance Analysis We have presented the Quasar datasets for promoting research into two related tasks for QA searching a large corpus of text for relevant passages and reading the passages to extract answers We have also described baseline systems for the two tasks which perform reasonably but lag behind human performance While the searching performance improves as we retrieve more context the reading performance typically goes down Hence future work in addition to improving these 
components individually should also focus on joint approaches to optimizing the two on endtask performance The datasets including the documents retrieved by our system and the human annotations are available at httpsgithubcombdhingraquasar This work was funded by NSF under grants CCF1414030 and IIS1250956 and by grants from Google Table 4 includes the definition of all the annotated relations for QuasarS Figure 5 shows a comparison of the human performance with the best performing baseline for each category of annotated questions We see consistent differences between the two except in the following cases For QuasarS BiRNN performs comparably to humans for the developedwith and runson categories but much worse in the hascomponent and isa categories For QuasarT BiDAF performs comparably to humans in the sports category but much worse in history religion and language or when the answer type is a number or datetime </s>
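The answer-matching metrics described in the evaluation section can be sketched as follows, using the usual SQuAD-style normalization (lower-casing, stripping punctuation, articles, and extra whitespace) before computing exact match and bag-of-tokens F1. This follows the cited definition rather than the authors' exact evaluation script.

import re
import string
from collections import Counter

def normalize(text):
    # Lower-case, drop punctuation, drop articles, collapse whitespace.
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    return normalize(prediction) == normalize(gold)

def f1_match(prediction, gold):
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    print(exact_match("The Elton John", "elton john"))          # True
    print(round(f1_match("John Elton the pianist", "Elton John"), 3))  # 0.8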
<s>[INST] Error Analysis for Vietnamese Named Entity Recognition on Deep Neural Network Models [/INST] Named Entity Recognition NER is one of information extraction subtasks that is responsible for detecting entity elements from raw text and can determine the category in which the element belongs these categories include the names of persons organizations locations expressions of times quantities monetary values and percentages The problem of NER is described as follow Input A sentence S consists a sequence of n words S w1w2w3wn wi the ith word Output The sequence of n labels y1y2y3yn Each yi label represents the category which wi belongs to For example given a sentence Input vietnamGim c iu hnh Tim Cook ca Apple va gii thiu 2 in thoi iPhone ng h thng minh mi ln hn s kin Flint Center Cupertino Apple CEO Tim Cook introduces 2 new larger iPhones Smart Watch at Cupertino Flint Center event The algorithm will output Output vietnamOGim c iu hnhO PERTim CookPER OcaO ORGAppleORG Ova gii thiu 2 in thoi iPhone ng h thng minh mi ln hn s kinO ORGFlint CenterORG LOCCupertinoLOC With LOC PER ORG is Name of location person organization respectively Note that O means Other Not a Name entity We will not denote the O label in the following examples in this article because we only care about name of entities In this paper we analyze common errors of the previous stateoftheart techniques using Deep Neural Network DNN on VLSP Corpus This may contribute to the later researchers the common errors from the results of these stateoftheart models then they can rely on to improve the model Section 2 discusses the related works to this paper We will present a method for evaluating and analyzing the types of errors in Section 3 The data used for testing and analysis of errors will be introduced in Section 4 we also talk about deep neural network methods and pretrained word embeddings for experimentation in this section Section 5 will detail the errors and evaluations In the end is our contribution to improve the above errors Previously publicly available NER systems do not use DNN for example the MITRE Identification Scrubber Toolkit MIST BIBREF0 Stanford NER BIBREF1 BANNER BIBREF2 and NERsuite BIBREF3 NER systems for Vietnamese language processing used traditional machine learning methods such as Maximum Entropy Markov Model MEMM Support Vector Machine SVM and Conditional Random Field CRF In particular most of the toolkits for NER task attempted to use MEMM BIBREF4 and CRF BIBREF5 to solve this problem Nowadays because of the increase in data DNN methods are used a lot They have archived great results when it comes to NER tasks for example Guillaume Lample et al with BLSTMCRF in BIBREF6 report 9094 F1 score Chiu et al with BLSTMCNN in BIBREF7 got 9162 F1 score Xeuzhe Ma and Eduard Hovy with BLSTMCNNCRF in BIBREF8 achieved F1 score of 9121 ThaiHoang Pham and Phuong LeHong with BLSTMCNNCRF in BIBREF9 got 8859 F1 score These DNN models are also the stateoftheart models The results of our analysis experiments are reported in precision and recall over all labels name of person location organization and miscellaneous The process of analyzing errors has 2 steps Step 1 We use two stateoftheart models including BLSTMCNNCRF and BLSTMCRF to train and test on VLSPs NER corpus In our experiments we implement word embeddings as features to the two systems Step 2 Based on the best results BLSTMCNNCRF error analysis is performed based on five types of errors No extraction No annotation Wrong range Wrong tag Wrong range and tag in a 
way similar to BIBREF10 but we analyze on both gold labels and predicted labels more detail in figure 1 and 2 A token an entity name maybe contain more than one word will be extracted as a correct entity by the model if both of the followings are correct The length of it range is correct The word beginning and the end is the same as gold data annotator The label tag of it is correct The label is the same as in gold data If it is not meet two above requirements it will be the wrong entity an error Therefore we divide the errors into five different types which are described in detail as follows No extraction The error where the model did not extract tokens as a name entity NE though the tokens were annotated as a NE LSTMCNNCRF vietnam VitNam Annotator vietnamLOC VitNam LOC No annotation The error where the model extracted tokens as an NE though the tokens were not annotated as a NE LSTMCNNCRF vietnamPER Chu u PER Annotator vietnamChu u Wrong range The error where the model extracted tokens as an NE and only the range was wrong The extracted tokens were partially annotated or they were the part of the annotated tokens LSTMCNNCRF vietnamPER Cas Nguyn Vn A PER Annotator vietnamCas PER Nguyn Vn A PER Wrong tag The error where the model extracted tokens as an NE and only the tag type was wrong LSTMCNNCRF vietnamKhm ph PER Yangsuri PER Annotator vietnamKhm ph LOC Yangsuri LOC Wrong range and tag The error where the model extracted tokens as an NE and both the range and the tag type were wrong LSTMCNNCRF vietnamLOC gianhng Apple LOC Annotator vietnamgianhng ORG Apple ORG We compare the predicted NEs to the gold NEs Fig 1 if they have the same range the predicted NE is a correct or Wrong tag If it has different range with the gold NE we will see what type of wrong it is If it does not have any overlap it is a No extraction If it has an overlap and the tag is the same at gold NE it is a Wrong range Finally it is a Wrong range and tag if it has an overlap but the tag is different The steps in Fig 2 is the same at Fig 1 and the different only is we compare the gold NE to the predicted NE and No extraction type will be No annotation To conduct error analysis of the model we used the corpus which are provided by VLSP 2016 Named Entity Recognition The dataset contains four different types of label Location LOC Person PER Organization ORG and Miscellaneous Name of an entity that do not belong to 3 types above Table TABREF15 Although the corpus has more information about the POS and chunks but we do not use them as features in our model There are two folders with 267 text files of training data and 45 text files of test data They all have their own format We take 21 first text files and 22 last text files and 22 sentences of the 22th text file and 55 sentences of the 245th text file to be a development data The remaining files are going to be the training data The test file is the same at the file VSLP gave Finally we have 3 text files only based on the CoNLL 2003 format train dev and test We use the word embeddings for Vietnamese that created by Kyubyong Park and Edouard Grave at al Kyubyong Park In his project he uses two methods including fastText and word2vec to generate word embeddings from wikipedia database backup dumps His word embedding is the vector of 100 dimension and it has about 10k words Edouard Grave et al BIBREF11 They use fastText tool to generate word embeddings from Wikipedia The format is the same at Kyubyongs but their embedding is the vector of 300 dimension and they have about 200k 
words Based on stateoftheart methods for NER BLSTMCNNCRF is the endtoend deep neural network model that achieves the best result on Fscore BIBREF9 Therefore we decide to conduct the experiment on this model and analyze the errors We run experiment with the Ma and Hovy 2016 model BIBREF8 source code provided by Motoki Sato and analysis the errors from this result Before we decide to analysis on this result we have run some other methods but this one with Vietnamese pretrained word embeddings provided by Kyubyong Park obtains the best result Other results are shown in the Table 2 Table 2 shows our experiments on two models with and without different pretrained word embedding KP means the Kyubyong Parks pretrained word embeddings and EG means Edouard Graves pretrained word embeddings We compare the outputs of BLSTMCNNCRF model predicted to the annotated data gold and analyzed the errors Table 3 shows perfomance of the BLSTMCNNCRF model In our experiments we use three evaluation parameters precision recall and F1 score to access our experimental result They will be described as follow in Table 3 The correctNE the number of correct label for entity that the model can found The goldNE number of the real label annotated by annotator in the gold data The foundNE number of the label the model find out no matter if they are correct or not In Table 3 above we can see that recall score on ORG label is lowest The reason is almost all the ORG label on test file is name of some brands that do not appear on training data and pretrained word embedding On the other side the characters inside these brand names also inside the other names of person in the training data The context from both side of the sentence future and pastfeature also make the model think the name entity not as it should be Table 4 shows that the biggest number of errors is No extraction The errors were counted by using logical sum OR of the gold labels and predicted labels predicted by the model The second most frequent error was Wrong tag means the model extract its a NE but wrong tag First of all we will compare the predicted NEs to the gold NEs Fig 1 Table 4 shows the summary of errors by types based on the gold labels the correct is the number of gold tag that the model predicted correctly error is the number of gold tag that the model predicted incorrectly and total is sum of them Four columns next show the number of type errors on each label Table 5 shows that Person Location and Organization is the main reason why No extraction and Wrong tag are high After analyzing based on the gold NEs we figure out the reason is Almost all the NEs is wrong they do not appear on training data and pretrained embedding These NEs vector will be initial randomly therefore these vectors are poor which means have no semantic aspect The weird ORG NE in the sentence appear together with other words have context of PER so this weird ORG NE is going to be label at PER For example gold data vietnamVV c xem l utin k hpng qungco l vs PER Trn Quang H PER sau khi ot HCV taekwondo Asiad LOC Hiroshima LOC The athlete is considered the first to sign a contract of boxing Tran Quang Ha after winning the gold medal Asiad Hiroshima predicted data vietnaml vs PERTrn Quang HPER sau khi ot HCV taekwondo Asiad PERHiroshimaPER Some mistakes of the model are from training set for example anonymous person named P appears many times in the training set so when model meets P in context of P 3 vietnamQun 9 Ward 3 District 9 P stands for vietnamPhng Ward model will predict P as 
a PER Training data vietnamnu PERPPER c y If P were here Predicted data vietnamPERP 3PER Gvp Ward 3 Govap District Table 6 shows the summary of errors by types based on the predicted data After analyzing the errors on predicted and gold data we noticed that the difference of these errors are mainly in the No anotation and No extraction Therefore we only mention the main reasons for the No anotation Most of the wrong labels that model assigns are brand names Ex Charriol Dream Jupiter words are abbreviated vietnamXKLD xut khu lao ng labour export movie names All of these words do not appear in training data and word embedding Perhaps these reasons are the followings The vectors of these words are random so the semantic aspect is poor The hidden states of these words also rely on past feature forward pass and future feature backward pass of the sentence Therefore they are assigned wrongly because of their context These words are primarily capitalized or all capital letters so they are assigned as a name entity This error is caused by the CNN layer extract characters information of the word Table 7 shows the detail of errors on predicted data where we will see number kind of errors on each label After considering the training and test data we realized that this data has many problems need to be fixed in the next run experiments The annotators are not consistent between the training data and the test data more details are shown as follow The organizations are labeled in the train data but not labeled in the test data Training data vietnamORG S Yt ORG Department of Health Test data vietnamS Yt Department of Health Explanation vietnamS Yt in train and test are the same name of organization entity However the one in test data is not labeled The entity has the same meaning but is assigned differently between the train data and the test Training data vietnamMISC ngi Vit MISC Vietnamese people Test data vietnamdn LOC Vit LOC Vietnamese people Explanation vietnamBoth ngi Vit in train data and dn Vit in test data are the same meaning but they are assigned differently The range of entities are differently between the train data and the test data Training data vietnamLOC lng Atu LOC Atu village Test data vietnamlng LOC HnQuc LOC Korea village Explanation The two villages differ only in name but they are labeled differently in range Capitalization rules are not unified with a token is considered an entity Training data vietnamORG Cngty Inmasco ORG Inmasco Company Training data vietnamcngty con Subsidiaries Test data vietnamcngty ORG Yeon Young Entertainment ORG Yeon Young Entertainment company Explanation If it comes to a company with a specific name it should be labeled vietnamORG Cngty Yeon Young Entertainment ORG with C in capital letters In this paper we have presented a thorough study of distinctive error distributions produced by BiLSTMCNNCRF for the Vietnamese language This would be helpful for researchers to create better NER models Based on the analysis results we suggest some possible directions for improvement of model and for the improvement of datadriven NER for the Vietnamese language in future The word at the begin of the sentence is capitalized so if the name of person is at this position model will ignore them no extraction To improve this issue we can use the POS feature together with BIO format Inside Outside Beginning BIBREF6 at the top layer CRF If we can unify the labeling of the annotators between the train dev and test sets We will improve data quality and classifier It is better 
if there is a pretrained word embeddings that overlays the data and segmentation algorithm need to be more accurately </s>
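The five error types used in the analysis can be sketched as below, assuming gold and predicted entities are represented as (start, end, tag) token spans. The span representation and helper names are ours for illustration; the published analysis may differ in detail, and the symmetric check from the predicted side yields "no annotation" instead of "no extraction".

def overlaps(a, b):
    # Two half-open token spans overlap if neither ends before the other starts.
    return a[0] < b[1] and b[0] < a[1]

def classify_gold_entity(gold, predicted):
    """Return 'correct' or one of the error types for a single gold entity."""
    same_range = [p for p in predicted if (p[0], p[1]) == (gold[0], gold[1])]
    if same_range:
        return "correct" if same_range[0][2] == gold[2] else "wrong tag"
    overlapping = [p for p in predicted if overlaps(p, gold)]
    if not overlapping:
        return "no extraction"
    if any(p[2] == gold[2] for p in overlapping):
        return "wrong range"
    return "wrong range and tag"

if __name__ == "__main__":
    gold = [(0, 3, "PER"), (5, 6, "LOC")]
    pred = [(1, 3, "PER"), (5, 6, "ORG")]
    for g in gold:
        print(g, "->", classify_gold_entity(g, pred))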
<s>[INST] Recurrent Neural Network Encoder with Attention for Community Question Answering [/INST] Community question answering cQA is a paradigm that provides forums for users to ask or answer questions on any topic with barely any restrictions In the past decade these websites have attracted a great number of users and have accumulated a large collection of questioncomment threads generated by these users However the low restriction results in a high variation in answer quality which makes it timeconsuming to search for useful information from the existing content It would therefore be valuable to automate the procedure of ranking related questions and comments for users with a new question or when looking for solutions from comments of an existing question Automation of cQA forums can be divided into three tasks questioncomment relevance Task A questionquestion relevance Task B and questionexternal comment relevance Task C One might think that classic retrieval models like language models for information retrieval BIBREF0 could solve these tasks However a big challenge for cQA tasks is that users are used to expressing similar meanings with different words which creates gaps when matching questions based on common words Other challenges include informal usage of language highly diverse content of comments and variation in the length of both questions and comments To overcome these issues most previous work eg SemEval 2015 BIBREF1 relied heavily on additional features and reasoning capabilities In BIBREF2 a neural attentionbased model was proposed for automatically recognizing entailment relations between pairs of natural language sentences In this study we first modify this model for all three cQA tasks We also extend this framework into a jointly trained model when the external resources are available ie selecting an external comment when we know the question that the external comment answers Task C Our ultimate objective is to classify relevant questions and comments without complicated handcrafted features By applying RNNbased encoders we avoid heavily engineered features and learn the representation automatically In addition an attention mechanism augments encoders with the ability to attend to past outputs directly This becomes helpful when encoding longer sequences since we no longer need to compress all information into a fixedlength vector representation In our view existing annotated cQA corpora are generally too small to properly train an endtoend neural network To address this we investigate transfer learning by pretraining the recurrent systems on other corpora and also generating additional instances from existing cQA corpus Earlier work of community question answering relied heavily on feature engineering linguistic tools and external resource BIBREF3 and BIBREF4 utilized rich nontextual features such as answers profile BIBREF5 syntactically analyzed the question and extracted name entity features BIBREF6 demonstrated a textual entailment system can enhance cQA task by casting question answering to logical entailment More recent work incorporated word vector into their feature extraction system and based on it designed different distance metric for question and answer BIBREF7 BIBREF8 While these approaches showed effectiveness it is difficult to generalize them to common cQA tasks since linguistic tools and external resource may be restrictive in other languages and features are highly customized for each cQA task Very recent work on answer selection also involved the use 
of neural networks BIBREF9 used an LSTM to construct a joint vector based on both the question and the answer and then converted it into a learning to rank problem BIBREF10 proposed several convolutional neural network CNN architectures for cQA Our method differs in that an RNN encoder is applied here and by adding an attention mechanism we jointly learn which words in the question to focus on which also lets us conduct qualitative analysis During classification we feed the extracted vector into a feedforward neural network directly instead of using mean or max pooling over the time steps In this section we first discuss long shortterm memory LSTM units and an associated attention mechanism Next we explain how we can encode a pair of sentences into a dense vector for predicting relationships using an LSTM with an attention mechanism Finally we apply these models to predict questionquestion similarity questioncomment similarity and questionexternal comment similarity LSTMs have shown great success in many different fields An LSTM unit contains a memory cell with selfconnections as well as three multiplicative gates to control information flow Given input vector $x_t$ previous hidden output $h_{t-1}$ and previous cell state $c_{t-1}$ LSTM units operate as follows $$X = \begin{bmatrix} x_t \\[0.3em] h_{t-1} \end{bmatrix}$$
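Assuming the standard gate formulation (the gate equations themselves are not shown above), a minimal numpy sketch of one LSTM step that consumes the concatenation X = [x_t; h_{t-1}] is given below. The weight shapes, initialization, and helper names are illustrative assumptions, not the paper's exact parameterization.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """W maps the concatenated vector X to the four gate pre-activations."""
    X = np.concatenate([x_t, h_prev])        # X = [x_t; h_{t-1}]
    i, f, o, g = np.split(W @ X + b, 4)      # input, forget, output gates, candidate
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c_t = f * c_prev + i * g                 # new cell state
    h_t = o * np.tanh(c_t)                   # new hidden output
    return h_t, c_t

if __name__ == "__main__":
    d_in, d_hid = 4, 3
    rng = np.random.default_rng(0)
    W = rng.normal(size=(4 * d_hid, d_in + d_hid))
    b = np.zeros(4 * d_hid)
    h, c = np.zeros(d_hid), np.zeros(d_hid)
    h, c = lstm_step(rng.normal(size=d_in), h, c, W, b)
    print(h)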