<s>[INST] Minimally Supervised Learning of Affective Events Using Discourse Relations [/INST] Affective events BIBREF0 are events that typically affect people in positive or negative ways For example getting money and playing sports are usually positive to the experiencers catching cold and losing ones wallet are negative Understanding affective events is important to various natural language processing NLP applications such as dialogue systems BIBREF1 questionanswering systems BIBREF2 and humor recognition BIBREF3 In this paper we work on recognizing the polarity of an affective event that is represented by a score ranging from 1 negative to 1 positive Learning affective events is challenging because as the examples above suggest the polarity of an event is not necessarily predictable from its constituent words Combined with the unbounded combinatorial nature of language the noncompositionality of affective polarity entails the need for large amounts of world knowledge which can hardly be learned from small annotated data In this paper we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus As illustrated in Figure FIGREF1 our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report ones emotions eg to be glad is positive Suppose that events x1 are x2 are in the discourse relation of Cause ie x1 causes x2 If the seed lexicon suggests x2 is positive x1 is also likely to be positive because it triggers the positive emotion The fact that x2 is known to be negative indicates the negative polarity of x1 Similarly if x1 and x2 are in the discourse relation of Concession ie x2 in spite of x1 the reverse of x2s polarity can be propagated to x1 Even if x2s polarity is not known in advance we can exploit the tendency of x1 and x2 to be of the same polarity for Cause or of the reverse polarity for Concession although the heuristic is not exempt from counterexamples We transform this idea into objective functions and train neural network models that predict the polarity of a given event We trained the models using a Japanese web corpus Given the minimum amount of supervision they performed well In addition the combination of annotated and unannotated data yielded a gain over a purely supervised baseline when labeled data were small Learning affective events is closely related to sentiment analysis Whereas sentiment analysis usually focuses on the polarity of what are described eg movies we work on how people are typically affected by events In sentiment analysis much attention has been paid to compositionality Wordlevel polarity BIBREF5 BIBREF6 BIBREF7 and the roles of negation and intensification BIBREF8 BIBREF6 BIBREF9 are among the most important topics In contrast we are more interested in recognizing the sentiment polarity of an event that pertains to commonsense knowledge eg getting money and catching cold Label propagation from seed instances is a common approach to inducing sentiment polarities While BIBREF5 and BIBREF10 worked on word and phraselevel polarities BIBREF0 dealt with eventlevel polarities BIBREF5 and BIBREF10 linked instances using cooccurrence information andor phraselevel coordinations eg A and B and A but B We shift our scope to event pairs that are more complex than phrase pairs and consequently exploit discourse connectives as eventlevel counterparts of phraselevel conjunctions BIBREF0 constructed a network of 
events using word embeddingderived similarities Compared with this method our discourse relationbased linking of events is much simpler and more intuitive Some previous studies made use of document structure to understand the sentiment BIBREF11 proposed a sentimentspecific pretraining strategy using unlabeled dialog data tweetreply pairs BIBREF12 proposed a method of building a polaritytagged corpus ACP Corpus They automatically gathered sentences that had positive or negative opinions utilizing HTML layout structures in addition to linguistic patterns Our method depends only on raw texts and thus has wider applicability Our goal is to learn the polarity function px which predicts the sentiment polarity score of an event x We approximate px by a neural network with the following form rm Encoder outputs a vector representation of the event x rm Linear is a fullyconnected layer and transforms the representation into a scalar rm tanh is the hyperbolic tangent and transforms the scalar into a score ranging from 1 to 1 In Section SECREF21 we consider two specific implementations of rm Encoder Our method requires a very small seed lexicon and a large raw corpus We assume that we can automatically extract discoursetagged event pairs xi1 xi2 i1 cdots from the raw corpus We refer to xi1 and xi2 as former and latter events respectively As shown in Figure FIGREF1 we limit our scope to two discourse relations Cause and Concession The seed lexicon consists of positive and negative predicates If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation we assign the corresponding polarity score 1 for positive events and 1 for negative events to the event We expect the model to automatically learn complex phenomena through label propagation Based on the availability of scores and the types of discourse relations we classify the extracted event pairs into the following three types The seed lexicon matches 1 the latter event but 2 not the former event and 3 their discourse relation type is Cause or Concession If the discourse relation type is Cause the former event is given the same score as the latter Likewise if the discourse relation type is Concession the former event is given the opposite of the latters score They are used as reference scores during training The seed lexicon matches neither the former nor the latter event and their discourse relation type is Cause We assume the two events have the same polarities The seed lexicon matches neither the former nor the latter event and their discourse relation type is Concession We assume the two events have the reversed polarities Using AL CA and CO data we optimize the parameters of the polarity function px We define a loss function for each of the three types of event pairs and sum up the multiple loss functions We use mean squared error to construct loss functions For the AL data the loss function is defined as where xi1 and xi2 are the ith pair of the AL data ri1 and ri2 are the automaticallyassigned scores of xi1 and xi2 respectively Nrm AL is the total number of AL pairs and lambda rm AL is a hyperparameter For the CA data the loss function is defined as yi1 and yi2 are the ith pair of the CA pairs Nrm CA is the total number of CA pairs lambda rm CA and mu are hyperparameters The first term makes the scores of the two events closer while the second term prevents the scores from shrinking to zero The loss function for the CO data is defined analogously The difference is that the first term makes 
the scores of the two events distant from each other As a raw corpus we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13 To extract event pairs tagged with discourse relations we used the Japanese dependency parser KNP and inhouse postprocessing scripts BIBREF14 KNP used handwritten rules to segment each sentence into what we conventionally called clauses mostly consecutive text chunks each of which contained one main predicate KNP also identified the discourse relations of event pairs if explicit discourse connectives BIBREF4 such as because and in spite of were present We treated CauseReason and Condition in the original tagset BIBREF15 as Cause and Concession as Concession respectively Here is an example of event pair extraction Because I made a serious mistake I got fired From this sentence we extracted the event pair of I make a serious mistake and I get fired and tagged it with Cause We constructed our seed lexicon consisting of 15 positive words and 15 negative words as shown in Section SECREF27 From the corpus of about 100 million sentences we obtained 14 millions event pairs for AL 41 millions for CA and 6 millions for CO We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size We also sampled event pairs for each of CA and CO such that it was five times larger than AL The results are shown in Table TABREF16 We used the latest version of the ACP Corpus BIBREF12 for evaluation It was used for semisupervised training as well Extracted from Japanese websites using HTML layouts and linguistic patterns the dataset covered various genres For example the following two sentences were labeled positive and negative respectively The work is easy There is no parking lot Although the ACP corpus was originally constructed in the context of sentiment analysis we found that it could roughly be regarded as a collection of affective events We parsed each sentence and extracted the last clause in it The traindevtest split of the data is shown in Table TABREF19 The objective function for supervised training is where vi is the ith event Ri is the reference score of vi and Nrm ACP is the number of the events of the ACP Corpus To optimize the hyperparameters we used the dev set of the ACP Corpus For the evaluation we used the test set of the ACP Corpus The model output was classified as positive if px 0 and negative if px le 0 As for rm Encoder we compared two types of neural networks BiGRU and BERT GRU BIBREF16 is a recurrent neural network sequence encoder BiGRU reads an input sequence forward and backward and the output is the concatenation of the final forward and backward hidden states BERT BIBREF17 is a pretrained multilayer bidirectional Transformer BIBREF18 encoder Its output is the final hidden state corresponding to the special classification tag CLS For the details of rm Encoder see Sections SECREF30 We trained the model with the following four combinations of the datasets AL ALCACO two proposed models ACP supervised and ACPALCACO semisupervised The corresponding objective functions were mathcal Lrm AL mathcal Lrm AL mathcal Lrm CA mathcal Lrm CO mathcal Lrm ACP and mathcal Lrm ACP mathcal Lrm AL mathcal Lrm CA mathcal Lrm CO Table TABREF23 shows accuracy As the Random baseline suggests positive and negative labels were distributed evenly The RandomSeed baseline made use of the seed lexicon and output the corresponding label or the reverse of it for negation if the events predicate is in the seed 
lexicon We can see that the seed lexicon itself had practically no impact on prediction The models in the top block performed considerably better than the random baselines The performance gaps with their semisupervised counterparts shown in the middle block were less than 7 This demonstrates the effectiveness of discourse relationbased label propagation Comparing the model variants we obtained the highest score with the BiGRU encoder trained with the ALCACO dataset BERT was competitive but its performance went down if CA and CO were used in addition to AL We conjecture that BERT was more sensitive to noises found more frequently in CA and CO Contrary to our expectations supervised models ACP outperformed semisupervised models ACPALCACO This suggests that the training set of 06 million events is sufficiently large for training the models For comparison we trained the models with a subset 6000 events of the ACP dataset As the results shown in Table TABREF24 demonstrate our method is effective when labeled data are small The result of hyperparameter optimization for the BiGRU encoder was as follows As the CA and CO pairs were equal in size Table TABREF16 lambda rm CA and lambda rm CO were comparable values lambda rm CA was about onethird of lambda rm CO and this indicated that the CA pairs were noisier than the CO pairs A major type of CA pairs that violates our assumption was in the form of textit problemtextnegative causes textit solutiontextpositive there is a bad point I try to improve it The polarities of the two events were reversed in spite of the Cause relation and this lowered the value of lambda rm CA Some examples of model outputs are shown in Table TABREF26 The first two examples suggest that our model successfully learned negation without explicit supervision Similarly the next two examples differ only in voice but the model correctly recognized that they had opposite polarities The last two examples share the predicate drop and only the objects are different The second event lit drop ones shoulders is an idiom that expresses a disappointed feeling The examples demonstrate that our model correctly learned noncompositional expressions In this paper we proposed to use discourse relations to effectively propagate polarities of affective events from seeds Experiments show that even with a minimal amount of supervision the proposed method performed well Although event pairs linked by discourse analysis are shown to be useful they nevertheless contain noises Adding linguisticallymotivated filtering rules would help improve the performance We thank Nobuhiro Kaji for providing the ACP Corpus and Hirokazu Kiyomaru and Yudai Kishimoto for their help in extracting event pairs This work was partially supported by Yahoo Japan Corporation rejoice be glad be pleasant be happy be impressed be excited feel nostalgic like respect be relieved admire be calm be satisfied be healed and be refreshed get angry be sad be lonely be scared feel anxious be embarrassed hate feel down be bored feel hopeless have a hard time have trouble be depressed be worried and be sorry The dimension of the embedding layer was 256 The embedding layer was initialized with the word embeddings pretrained using the Web corpus The input sentences were segmented into words by the morphological analyzer Juman The vocabulary size was 100000 The number of hidden layers was 2 The dimension of hidden units was 256 The optimizer was Momentum SGD BIBREF21 The minibatch size was 1024 We ran 100 epochs and selected the snapshot that 
achieved the highest score for the dev set We used a Japanese BERT model pretrained with Japanese Wikipedia The input sentences were segmented into words by Juman and words were broken into subwords by applying BPE BIBREF20 The vocabulary size was 32000 The maximum length of an input sequence was 128 The number of hidden layers was 12 The dimension of hidden units was 768 The number of selfattention heads was 12 The optimizer was Adam BIBREF19 The minibatch size was 32 We ran 1 epoch </s>
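The polarity function and the discourse-based training objective described above can be sketched compactly. The following is a minimal sketch, not the authors' released code: the framework (PyTorch), the way the BiGRU's final states are pooled, and the exact form of the second term in the Cause/Concession losses (the one that "prevents the scores from shrinking to zero") are assumptions; only the overall shape p(x) = tanh(Linear(Encoder(x))) and the roles of the AL, CA, and CO terms follow the description.

```python
# Sketch of p(x) = tanh(Linear(Encoder(x))) with a BiGRU encoder, plus the
# three loss terms built from AL / CA / CO event pairs. Framework choice and
# the anti-shrinkage term are assumptions, not the paper's exact formulation.
import torch
import torch.nn as nn

class PolarityModel(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hidden=256, num_layers=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, num_layers=num_layers,
                          batch_first=True, bidirectional=True)
        self.linear = nn.Linear(2 * hidden, 1)

    def forward(self, token_ids):                        # (batch, seq_len)
        _, h_n = self.gru(self.emb(token_ids))           # (layers*2, batch, hidden)
        final = torch.cat([h_n[-2], h_n[-1]], dim=-1)    # top-layer fwd + bwd states
        return torch.tanh(self.linear(final)).squeeze(-1)  # scores in (-1, 1)

def loss_al(p1, p2, r1, r2, lam_al=1.0):
    # Automatically labeled pairs: squared error against the seed-derived
    # reference scores of both events.
    return lam_al * ((p1 - r1) ** 2 + (p2 - r2) ** 2).mean()

def loss_ca(p1, p2, lam_ca=1.0, mu=1.0):
    # Cause pairs (no labels): pull the two scores together; the second term
    # (assumed form) rewards large-magnitude scores so they do not collapse to 0.
    return lam_ca * ((p1 - p2) ** 2).mean() - mu * (p1 ** 2 + p2 ** 2).mean()

def loss_co(p1, p2, lam_co=1.0, mu=1.0):
    # Concession pairs (no labels): drive the two scores toward opposite values
    # (one reading of "distant from each other"); same anti-collapse term.
    return lam_co * ((p1 + p2) ** 2).mean() - mu * (p1 ** 2 + p2 ** 2).mean()

# The total objective sums whichever terms apply to the chosen data
# combination, e.g. L_AL + L_CA + L_CO for the AL+CA+CO setting.
```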
<s>[INST] PO-EMO: Conceptualization, Annotation, and Modeling of Aesthetic Emotions in German and English Poetry [/INST] 11em 111em 1111em Thomas Haider13 Steffen Eger2 Evgeny Kim3 Roman Klinger3 Winfried Menninghaus1 1Department of Language and Literature Max Planck Institute for Empirical Aesthetics 2NLLG Department of Computer Science Technische Universitat Darmstadt 3Institut fr Maschinelle Sprachverarbeitung University of Stuttgart thomashaider wmaempgde egeraiphestudarmstadtde romanklinger evgenykimimsunistuttgartde Most approaches to emotion analysis regarding social media literature news and other domains focus exclusively on basic emotion categories as defined by Ekman or Plutchik However art such as literature enables engagement in a broader range of more complex and subtle emotions that have been shown to also include mixed emotional responses We consider emotions as they are elicited in the reader rather than what is expressed in the text or intended by the author Thus we conceptualize a set of aesthetic emotions that are predictive of aesthetic appreciation in the reader and allow the annotation of multiple labels per line to capture mixed emotions within context We evaluate this novel setting in an annotation experiment both with carefully trained experts and via crowdsourcing Our annotation with experts leads to an acceptable agreement of kappa 70 resulting in a consistent dataset for future large scale analysis Finally we conduct first emotion classification experiments based on BERT showing that identifying aesthetic emotions is challenging in our data with up to 52 F1micro on the German subset Data and resources are available at httpsgithubcomtnhaiderpoetryemotion Emotion Aesthetic Emotions Literature Poetry Annotation Corpora Emotion Recognition MultiLabel Emotions are central to human experience creativity and behavior Models of affect and emotion both in psychology and natural language processing commonly operate on predefined categories designated either by continuous scales of eg Valence Arousal and Dominance BIBREF0 or discrete emotion labels which can also vary in intensity Discrete sets of emotions often have been motivated by theories of basic emotions as proposed by Ekman1992Anger Fear Joy Disgust Surprise Sadnessand Plutchik1991 who added Trust and Anticipation These categories are likely to have evolved as they motivate behavior that is directly relevant for survival However art reception typically presupposes a situation of safety and therefore offers special opportunities to engage in a broader range of more complex and subtle emotions These differences between reallife and art contexts have not been considered in natural language processing work so far To emotionally move readers is considered a prime goal of literature since Latin antiquity BIBREF1 BIBREF2 BIBREF3 Deeply moved readers shed tears or get chills and goosebumps even in lab settings BIBREF4 In cases like these the emotional response actually implies an aesthetic evaluation narratives that have the capacity to move readers are evaluated as good and powerful texts for this very reason Similarly feelings of suspense experienced in narratives not only respond to the trajectory of the plots content but are also directly predictive of aesthetic liking or disliking Emotions that exhibit this dual capacity have been defined as aesthetic emotions BIBREF2 Contrary to the negativity bias of classical emotion catalogues emotion terms used for aesthetic evaluation purposes include far more positive than 
negative emotions At the same time many overall positive aesthetic emotions encompass negative or mixed emotional ingredients BIBREF2 eg feelings of suspense include both hopeful and fearful anticipations For these reasons we argue that the analysis of literature with a focus on poetry should rely on specifically selected emotion items rather than on the narrow range of basic emotions only Our selection is based on previous research on this issue in psychological studies on art reception and specifically on poetry For instance knoop2016mapping found that Beauty is a major factor in poetry reception We primarily adopt and adapt emotion terms that schindler2017measuring have identified as aesthetic emotions in their study on how to measure and categorize such particular affective states Further we consider the aspect that when selecting specific emotion labels the perspective of annotators plays a major role Whether emotions are elicited in the reader expressed in the text or intended by the author largely changes the permissible labels For example feelings of Disgust or Love might be intended or expressed in the text but the text might still fail to elicit corresponding feelings as these concepts presume a strong reaction in the reader Our focus here was on the actual emotional experience of the readers rather than on hypothetical intentions of authors We opted for this reader perspective based on previous research in NLP BIBREF5 BIBREF6 and work in empirical aesthetics BIBREF7 that specifically measured the reception of poetry Our final set of emotion labels consists of BeautyJoy Sadness Uneasiness Vitality Suspense AweSublime Humor Annoyance and Nostalgia In addition to selecting an adapted set of emotions the annotation of poetry brings further challenges one of which is the choice of the appropriate unit of annotation Previous work considers words BIBREF8 BIBREF9 sentences BIBREF10 BIBREF11 utterances BIBREF12 sentence triples BIBREF13 or paragraphs BIBREF14 as the units of annotation For poetry reasonable units follow the logical document structure of poems ie verse line stanza and owing to its relative shortness the complete text The more coarsegrained the unit the more difficult the annotation is likely to be but the more it may also enable the annotation of emotions in context We find that annotating finegrained units lines that are hierarchically ordered within a larger context stanza poem caters to the specific structure of poems where emotions are regularly mixed and are more interpretable within the whole poem Consequently we allow the mixing of emotions already at line level through multilabel annotation The remainder of this paper includes 1 a report of the annotation process that takes these challenges into consideration 2 a description of our annotated corpora and 3 an implementation of baseline models for the novel task of aesthetic emotion annotation in poetry In a first study the annotators work on the annotations in a closely supervised fashion carefully reading each verse stanza and poem In a second study the annotations are performed via crowdsourcing within relatively short time periods with annotators not seeing the entire poem while reading the stanza Using these two settings we aim at obtaining a better understanding of the advantages and disadvantages of an expert vs crowdsourcing setting in this novel annotation task Particularly we are interested in estimating the potential of a crowdsourcing environment for the task of selfperceived emotion annotation in poetry 
given time and cost overhead associated with inhouse annotation process that usually involve training and close supervision of the annotators We provide the final datasets of German and English language poems annotated with reader emotions on verse level at httpsgithubcomtnhaiderpoetryemotion Natural language understanding research on poetry has investigated stylistic variation BIBREF15 BIBREF16 BIBREF17 with a focus on broadly accepted formal features such as meter BIBREF18 BIBREF19 BIBREF20 and rhyme BIBREF21 BIBREF22 as well as enjambement BIBREF23 BIBREF24 and metaphor BIBREF25 BIBREF26 Recent work has also explored the relationship of poetry and prose mainly on a syntactic level BIBREF27 BIBREF28 Furthermore poetry also lends itself well to semantic change analysis BIBREF29 BIBREF30 as linguistic invention BIBREF31 BIBREF32 and succinctness BIBREF33 are at the core of poetic production Corpusbased analysis of emotions in poetry has been considered but there is no work on German and little on English kao2015computational analyze English poems with word associations from the Harvard Inquirer and LIWC within the categories positivenegative outlook positivenegative emotion and physpsych wellbeing houfrank2015analyzing examine the binary sentiment polarity of Chinese poems with a weighted personalized PageRank algorithm barros2013automatic followed a tagging approach with a thesaurus to annotate words that are similar to the words Joy Anger Fear and Sadness moreover translating these from English to Spanish With these word lists they distinguish the categories Love Songs to Lisi Satire and PhilosophicalMoralReligious in Quevedos poetry Similarly alsharif2013emotion classify unique Arabic emotional text forms based on word unigrams Mohanty2018 create a corpus of 788 poems in the Indian Odia language annotate it on text poem level with binary negative and positive sentiment and are able to distinguish these with moderate success Sreeja2019 construct a corpus of 736 Indian language poems and annotate the texts on Ekmans six categories Love Courage They achieve a Fleiss Kappa of 48 In contrast to our work these studies focus on basic emotions and binary sentiment polarity only rather than addressing aesthetic emotions Moreover they annotate on the level of complete poems instead of finegrained verse and stanzalevel Emotion corpora have been created for different tasks and with different annotation strategies with different units of analysis and different foci of emotion perspective reader writer text Examples include the ISEAR dataset BIBREF34 documentlevel emotion annotation in children stories BIBREF10 and news headlines BIBREF35 sentencelevel and finegrained emotion annotation in literature by Kim2018 phrase and wordlevel We refer the interested reader to an overview paper on existing corpora BIBREF36 We are only aware of a limited number of publications which look in more depth into the emotion perspective buechelhahn2017emobank report on an annotation study that focuses both on writers and readers emotions associated with English sentences The results show that the reader perspective yields better interannotator agreement Yang2009 also study the difference between writer and reader emotions but not with a modeling perspective The authors find that positive reader emotions tend to be linked to positive writer emotions in online blogs The task of emotion classification has been tackled before using rulebased and machine learning approaches Rulebased emotion classification typically relies 
on lexical resources of emotionally charged words BIBREF9 BIBREF37 BIBREF8 and offers a straightforward and transparent way to detect emotions in text In contrast to rulebased approaches current models for emotion classification are often based on neural networks and commonly use word embeddings as features Schuff2017 applied models from the classes of CNN BiLSTM and LSTM and compare them to linear classifiers SVM and MaxEnt where the BiLSTM shows best results with the most balanced precision and recall AbdulMageed2017 claim the highest F1 with gated recurrent unit networks BIBREF38 for Plutchiks emotion model More recently shared tasks on emotion analysis BIBREF39 BIBREF40 triggered a set of more advanced deep learning approaches including BERT BIBREF41 and other transfer learning methods BIBREF42 For our annotation and modeling studies we build on top of two poetry corpora in English and German which we refer to as POEMO This collection represents important contributions to the literary canon over the last 400 years We make this resource available in TEI P5 XML and an easytouse tab separated format Table TABREF9 shows a size overview of these data sets Figure FIGREF8 shows the distribution of our data over time via density plots Note that both corpora show a relative underrepresentation before the onset of the romantic period around 1750 The German corpus contains poems available from the website lyrikantikoerperchende ANTIK which provides a platform for students to upload essays about poems The data is available in the Hypertext Markup Language with clean line and stanza segmentation ANTIK also has extensive metadata including author names years of publication numbers of sentences poetic genres and literary periods that enable us to gauge the distribution of poems according to periods The 158 poems we consider 731 stanzas are dispersed over 51 authors and the New High German timeline 15751936 AD This data has been annotated besides emotions for meter rhythm and rhyme in other studies BIBREF22 BIBREF43 The English corpus contains 64 poems of popular English writers It was partly collected from Project Gutenberg with the GutenTag tool and in addition includes a number of hand selected poems from the modern period and represents a cross section of popular English poets We took care to include a number of female authors who would have been underrepresented in a uniform sample Time stamps in the corpus are organized by the birth year of the author as assigned in Project Gutenberg In the following we will explain how we compiled and annotated three data subsets namely 1 48 German poems with gold annotation These were originally annotated by three annotators The labels were then aggregated with majority voting and based on discussions among the annotators Finally they were curated to only include one gold annotation 2 The remaining 110 German poems that are used to compute the agreement in table TABREF20 and 3 64 English poems contain the raw annotation from two annotators We report the genesis of our annotation guidelines including the emotion classes With the intention to provide a language resource for the computational analysis of emotion in poetry we aimed at maximizing the consistency of our annotation while doing justice to the diversity of poetry We iteratively improved the guidelines and the annotation workflow by annotating in batches cleaning the class set and the compilation of a gold standard The final overall cost of producing this expert annotated dataset amounts to 
approximately 3500 The annotation process was initially conducted by three female university students majoring in linguistics andor literary studies which we refer to as our expert annotators We used the INCePTION platform for annotation BIBREF44 Starting with the German poems we annotated in batches of about 16 and later in some cases 32 poems After each batch we computed agreement statistics including heatmaps and provided this feedback to the annotators For the first three batches the three annotators produced a gold standard using a majority vote for each line Where this was inconclusive they developed an adjudicated annotation based on discussion Where necessary we encouraged the annotators to aim for more consistency as most of the frequent switching of emotions within a stanza could not be reconstructed or justified In poems emotions are regularly mixed already on line level and are more interpretable within the whole poem We therefore annotate lines hierarchically within the larger context of stanzas and the whole poem Hence we instruct the annotators to read a complete stanza or full poem and then annotate each line in the context of its stanza To reflect on the emotional complexity of poetry we allow a maximum of two labels per line while avoiding heavy label fluctuations by encouraging annotators to reflect on their feelings to avoid empty annotations Rather they were advised to use fewer labels and more consistent annotation This additional constraint is necessary to avoid wild nonreconstructable or nonjustified annotations All subsequent batches all except the first three were only annotated by two out of the three initial annotators coincidentally those two who had the lowest initial agreement with each other We asked these two experts to use the generated gold standard 48 poems majority votes of 3 annotators plus manual curation as a reference if in doubt annotate according to the gold standard This eliminated some systematic differences between them and markedly improved the agreement levels roughly from 0305 Cohens kappa in the first three batches to around 0608 kappa for all subsequent batches This annotation procedure relaxes the reader perspective as we encourage annotators if in doubt to annotate how they think the other annotators would annotate However we found that this formulation improves the usability of the data and leads to a more consistent annotation We opt for measuring the reader perspective rather than the text surface or authors intent To closer define and support conceptualizing our labels we use particular items as they are used in psychological selfevaluations These items consist of adjectives verbs or short phrases We build on top of schindler2017measuring who proposed 43 items that were then grouped by a factor analysis based on selfevaluations of participants The resulting factors are shown in Table TABREF17 We attempt to cover all identified factors and supplement with basic emotions BIBREF46 BIBREF47 where possible We started with a larger set of labels to then delete and substitute tone down labels during the initial annotation process to avoid infrequent classes and inconsistencies Further we conflate labels if they show considerable confusion with each other These iterative improvements particularly affected Confusion Boredom and Other that were very infrequently annotated and had little agreement among annotators kappa 2 For German we also removed Nostalgia kappa 218 after gold standard creation but after consideration added it back for 
English then achieving agreement Nostalgia is still available in the gold standard then with a second label BeautyJoy or Sadness to keep consistency However Confusion Boredom and Other are not available in any subcorpus Our final set consists of nine classes ie in order of frequency BeautyJoy Sadness Uneasiness Vitality Suspense AweSublime Humor Annoyance and Nostalgia In the following we describe the labels and give further details on the aggregation process Annoyance annoys meangers mefelt frustrated Annoyance implies feeling annoyed frustrated or even angry while reading the linestanza We include the class Anger here as this was found to be too strong in intensity AweSublime found it overwhelmingsense of greatness AweSublime implies being overwhelmed by the linestanza ie if one gets the impression of facing something sublime or if the linestanza inspires one with awe or that the expression itself is sublime Such emotions are often associated with subjects like god death life truth etc The term Sublime originated with kant2000critique as one of the first aesthetic emotion terms Awe is a more common English term BeautyJoy found it beautifulpleasingmakes me happyjoyful kant2000critique already spoke of a feeling of beauty and it should be noted that it is not a merely pleasing emotion Therefore in our pilot annotations Beauty and Joy were separate labels However schindler2017measuring found that items for Beauty and Joy load into the same factors Furthermore our pilot annotations revealed while Beauty is the more dominant and frequent feeling both labels regularly accompany each other and they often get confused across annotators Therefore we add Joy to form an inclusive label BeautyJoy that increases annotation consistency Humor found it funnyamusing Implies feeling amused by the linestanza or if it makes one laugh Nostalgia makes me nostalgic Nostalgia is defined as a sentimental longing for things persons or situations in the past It often carries both positive and negative feelings However since this label is quite infrequent and not available in all subsets of the data we annotated it with an additional BeautyJoy or Sadness label to ensure annotation consistency Sadness makes me sadtouches me If the linestanza makes one feel sad It also includes a more general being touched moved Suspense found it grippingsparked my interest Choose Suspense if the linestanza keeps one in suspense if the linestanza excites one or triggers ones curiosity We further removed Anticipation from SuspenseAnticipation as Anticipation appeared to us as being a more cognitive prediction whereas Suspense is a far more straightforward emotion item Uneasiness found it uglyunsettlingdisturbing frighteningdistasteful This label covers situations when one feels discomfort about the linestanza if the linestanza feels distastefulugly unsettlingdisturbing or frightens one The labels Ugliness and Disgust were conflated into Uneasiness as both are seldom felt in poetry being inadequatetoo stronghigh in arousal and typically lead to Uneasiness Vitality found it invigoratingspurs me oninspires me This label is meant for a linestanza that has an inciting encouraging effect if the linestanza conveys a feeling of movement energy and vitality which animates to action Similar terms are Activation and Stimulation Table TABREF20 shows the Cohens kappa agreement scores among our two expert annotators for each emotion category e as follows We assign each instance a line in a poem a binary label indicating whether or not the annotator 
has annotated the emotion category e in question From this we obtain vectors vie for annotators i01 where each entry of vie holds the binary value for the corresponding line We then apply the kappa statistics to the two binary vectors vie Additionally to averaged kappa we report microF1 values in Table TABREF21 between the multilabel annotations of both expert annotators as well as the microF1 score of a random baseline as well as of the majority emotion baseline which labels each line as BeautyJoy We find that Cohen kappa agreement ranges from 84 for Uneasiness in the English data 81 for Humor and Nostalgia down to German Suspense 65 AweSublime 61 and Vitality for both languages 50 English 63 German Both annotators have a similar emotion frequency profile where the ranking is almost identical especially for German However for English Annotator 2 annotates more Vitality than Uneasiness Figure FIGREF18 shows the confusion matrices of labels between annotators as heatmaps Notably BeautyJoy and Sadness are confused across annotators more often than other labels This is topical for poetry and therefore not surprising One might argue that the beauty of beings and situations is only beautiful because it is not enduring and therefore not to divorce from the sadness of the vanishing of beauty BIBREF48 We also find considerable confusion of Sadness with AweSublime and Vitality while the latter is also regularly confused with BeautyJoy Furthermore as shown in Figure FIGREF23 we find that no single poem aggregates to more than six emotion labels while no stanza aggregates to more than four emotion labels However most lines and stanzas prefer one or two labels German poems seem more emotionally diverse where more poems have three labels than two labels while the majority of English poems have only two labels This is however attributable to the generally shorter English texts After concluding the expert annotation we performed a focused crowdsourcing experiment based on the final label set and items as they are listed in Table TABREF27 and Section SECREF19 With this experiment we aim to understand whether it is possible to collect reliable judgements for aesthetic perception of poetry from a crowdsourcing platform A second goal is to see whether we can replicate the expensive expert annotations with less costly crowd annotations We opted for a maximally simple annotation environment where we asked participants to annotate English 4line stanzas with selfperceived reader emotions We choose English due to the higher availability of English language annotators on crowdsourcing platforms Each annotator rates each stanza independently of surrounding context For consistency and to simplify the task for the annotators we opt for a tradeoff between completeness and granularity of the annotation Specifically we subselect stanzas composed of four verses from the corpus of 64 hand selected English poems The resulting selection of 59 stanzas is uploaded to Figure Eight for annotation The annotators are asked to answer the following questions for each instance Question 1 singlechoice Read the following stanza and decide for yourself which emotions it evokes Question 2 multiplechoice Which additional emotions does the stanza evoke The answers to both questions correspond to the emotion labels we defined to use in our annotation as described in Section SECREF19 We add an additional answer choice None to Question 2 to allow annotators to say that a stanza does not evoke any additional emotions Each instance is 
annotated by ten people We restrict the task geographically to the United Kingdom and Ireland and set the internal parameters on Figure Eight to only include the highest quality annotators to join the task We pay 009 per instance The final cost of the crowdsourcing experiment is 74 In the following we determine the best aggregation strategy regarding the 10 annotators with bootstrap resampling For instance one could assign the label of a specific emotion to an instance if just one annotators picks it or one could assign the label only if all annotators agree on this emotion To evaluate this we repeatedly pick two sets of 5 annotators each out of the 10 annotators for each of the 59 stanzas 1000 times overall ie 1000times 59 times bootstrap resampling For each of these repetitions we compare the agreement of these two groups of 5 annotators Each group gets assigned with an adjudicated emotion which is accepted if at least one annotator picks it at least two annotators pick it etc up to all five pick it We show the results in Table TABREF27 The kappa scores show the average agreement between the two groups of five annotators when the adjudicated class is picked based on the particular threshold of annotators with the same label choice We see that some emotions tend to have higher agreement scores than others namely Annoyance 66 Sadness up to 52 and AweSublime BeautyJoy Humor all 46 The maximum agreement is reached mostly with a threshold of 2 4 times or 3 3 times We further show in the same table the average numbers of labels from each strategy Obviously a lower threshold leads to higher numbers corresponding to a disjunction of annotations for each emotion The drop in label counts is comparably drastic with on average 18 labels per class Overall the best average kappa agreement 32 is less than half of what we saw for the expert annotators roughly 70 Crowds especially disagree on many more intricate emotion labels Uneasiness Vitality Nostalgia Suspense We visualize how often two emotions are used to label an instance in a confusion table in Figure FIGREF18 Sadness is used most often to annotate a stanza and it is often confused with Suspense Uneasiness and Nostalgia Further BeautyJoy partially overlaps with AweSublime Nostalgia and Sadness On average each crowd annotator uses two emotion labels per stanza 56 of cases only in 36 of the cases the annotators use one label and in 6 and 1 of the cases three and four labels respectively This contrasts with the expert annotators who use one label in about 70 of the cases and two labels in 30 of the cases for the same 59 fourliners Concerning frequency distribution for emotion labels both experts and crowds name Sadness and BeautyJoy as the most frequent emotions for the best threshold of 3 and Nostalgia as one of the least frequent emotions The Spearman rank correlation between experts and crowds is about 055 with respect to the label frequency distribution indicating that crowds could replace experts to a moderate degree when it comes to extracting eg emotion distributions for an author or time period Now we further compare crowds and experts in terms of whether crowds could replicate expert annotations also on a finer stanza level rather than only on a distributional level To gauge the quality of the crowd annotations in comparison with our experts we calculate agreement on the emotions between experts and an increasing group size from the crowd For each stanza instance s we pick N crowd workers where Nin lbrace 46810rbrace then pick their 
majority emotion for s and additionally pick their second ranked majority emotion if at least fracN21 workers have chosen it For the experts we aggregate their emotion labels on stanza level then perform the same strategy for selection of emotion labels Thus for s both crowds and experts have 1 or 2 emotions For each emotion we then compute Cohens kappa as before Note that compared to our previous experiments in Section SECREF26 with a threshold each stanza now receives an emotion annotation exactly one or two emotion labels both by the experts and the crowdworkers In Figure FIGREF30 we plot agreement between experts and crowds on stanza level as we vary the number N of crowd workers involved On average there is roughly a steady linear increase in agreement as N grows which may indicate that N20 or N30 would still lead to better agreement Concerning individual emotions Nostalgia is the emotion with the least agreement as opposed to Sadness in our sample of 59 fourliners the agreement for this emotion grows from 47 kappa with N4 to 65 kappa with N10 Sadness is also the most frequent emotion both according to experts and crowds Other emotions for which a reasonable agreement is achieved are Annoyance AweSublime BeautyJoy Humor kappa 02 Emotions with little agreement are Vitality Uneasiness Suspense Nostalgia kappa 02 By and large we note from Figure FIGREF18 that expert annotation is more restrictive with experts agreeing more often on particular emotion labels seen in the darker diagonal The results of the crowdsourcing experiment on the other hand are a mixed bag as evidenced by a much sparser distribution of emotion labels However we note that these differences can be caused by 1 the disparate training procedure for the experts and crowds and 2 the lack of opportunities for close supervision and ongoing training of the crowds as opposed to the inhouse expert annotators In general however we find that substituting experts with crowds is possible to a certain degree Even though the crowds labels look inconsistent at first view there appears to be a good signal in their aggregated annotations helping to approximate expert annotations to a certain degree The average kappa agreement with the experts we get from N10 crowd workers 024 is still considerably below the agreement among the experts 070 To estimate the difficulty of automatic classification of our data set we perform multilabel document classification of stanzas with BERT BIBREF41 For this experiment we aggregate all labels for a stanza and sort them by frequency both for the gold standard and the raw expert annotations As can be seen in Figure FIGREF23 a stanza bears a minimum of one and a maximum of four emotions Unfortunately the label Nostalgia is only available 16 times in the German data the gold standard as a second label as discussed in Section SECREF19 None of our models was able to learn this label for German Therefore we omit it leaving us with eight proper labels We use the code and the pretrained BERT models of Farm provided by deepsetai We test the multilingualuncased model Multiling the germanbasecased model Base the germandbmdzuncased model Dbmdz and we tune the Base model on 80k stanzas of the German Poetry Corpus DLK BIBREF30 for 2 epochs both on token masked words and sequence next line prediction Basetextsc Tuned We split the randomized German dataset so that each label is at least 10 times in the validation set 63 instances 113 labels and at least 10 times in the test set 56 instances 108 labels and leave the rest 
for training 617 instances 946 labels We train BERT for 10 epochs with a batch size of 8 optimize with entropy loss and report F1micro on the test set See Table TABREF36 for the results We find that the multilingual model cannot handle infrequent categories ie AweSublime Suspense and Humor However increasing the dataset with English data improves the results suggesting that the classification would largely benefit from more annotated data The best model overall is DBMDZ 520 showing a balanced response on both validation and test set See Table TABREF37 for a breakdown of all emotions as predicted by the this model Precision is mostly higher than recall The labels AweSublime Suspense and Humor are harder to predict than the other labels The BASE and BASEtextsc TUNED models perform slightly worse than DBMDZ The effect of tuning of the BASE model is questionable probably because of the restricted vocabulary 30k We found that tuning on poetry does not show obvious improvements Lastly we find that models that were trained on lines instead of stanzas do not achieve the same F1 42 for the German models In this paper we presented a dataset of German and English poetry annotated with reader response to reading poetry We argued that basic emotions as proposed by psychologists such as Ekman and Plutchik that are often used in emotion analysis from text are of little use for the annotation of poetry reception We instead conceptualized aesthetic emotion labels and showed that a closely supervised annotation task results in substantial agreementin terms of kappa scoreon the final dataset The task of collecting readerperceived emotion response to poetry in a crowdsourcing setting is not straightforward In contrast to expert annotators who were closely supervised and reflected upon the task the annotators on crowdsourcing platforms are difficult to control and may lack necessary background knowledge to perform the task at hand However using a larger number of crowd annotators may lead to finding an aggregation strategy with a better tradeoff between quality and quantity of adjudicated labels For future work we thus propose to repeat the experiment with larger number of crowdworkers and develop an improved training strategy that would suit the crowdsourcing environment The dataset presented in this paper can be of use for different application scenarios including multilabel emotion classification styleconditioned poetry generation investigating the influence of rhythmprosodic features on emotion or analysis of authors genres and diachronic variation eg how emotions are represented differently in certain periods Further though our modeling experiments are still rudimentary we propose that this data set can be used to investigate the intrapoem relations either through multitask learning BIBREF49 andor with the help of hierarchical sequence classification approaches A special thanks goes to Gesine Fuhrmann who created the guidelines and tirelessly documented the annotation progress Also thanks to Annika Palm and Debby Trzeciak who annotated and gave lively feedback For help with the conceptualization of labels we thank Ines Schindler This research has been partially conducted within the CRETA center httpwwwcretaunistuttgartde which is funded by the German Ministry for Education and Research BMBF and partially funded by the German Research Council DFG projects SEAT Structured MultiDomain Emotion Analysis from Text KL 286911 This work has also been supported by the German Research Foundation as part of the 
Research Training Group Adaptive Preparation of Information from Heterogeneous Sources AIPHES at the Technische Universitt Darmstadt under grant No GRK 19941 We illustrate two examples of our German gold standard annotation a poem each by Friedrich Hlderlin and Georg Trakl and an English poem by Walt Whitman Hlderlins text stands out because the mood changes starkly from the first stanza to the second from BeautyJoy to Sadness Trakls text is a bit more complex with bits of Nostalgia and most importantly a mixture of Uneasiness with AweSublime Whitmans poem is an example of Vitality and its mixing with Sadness The English annotation was unified by us for space constraints For the full annotation please see httpsgithubcomtnhaiderpoetryemotion </s>
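The per-emotion agreement computation described above (binarize each annotator's multi-label line annotations per emotion category, compute Cohen's kappa on the two resulting binary vectors, and micro-F1 between the full multi-label matrices) can be reproduced with standard tooling. The sketch below uses scikit-learn; the label set follows the paper, but the toy annotations are purely illustrative and not taken from the corpus.

```python
# Per-emotion Cohen's kappa and micro-F1 between two annotators' multi-label
# line annotations, as described in the agreement evaluation. Toy data only.
from sklearn.metrics import cohen_kappa_score, f1_score
from sklearn.preprocessing import MultiLabelBinarizer

EMOTIONS = ["Beauty/Joy", "Sadness", "Uneasiness", "Vitality", "Suspense",
            "Awe/Sublime", "Humor", "Annoyance", "Nostalgia"]

# One entry per poem line; each annotator may assign up to two labels.
annotator_1 = [{"Beauty/Joy"}, {"Sadness", "Nostalgia"}, {"Vitality"}]
annotator_2 = [{"Beauty/Joy", "Awe/Sublime"}, {"Sadness"}, {"Vitality"}]

mlb = MultiLabelBinarizer(classes=EMOTIONS)
v1 = mlb.fit_transform(annotator_1)   # shape (n_lines, n_emotions), binary
v2 = mlb.transform(annotator_2)

# kappa per emotion category e, computed on the two binary column vectors.
for i, emotion in enumerate(EMOTIONS):
    if v1[:, i].any() or v2[:, i].any():   # skip categories unused in this sample
        kappa = cohen_kappa_score(v1[:, i], v2[:, i])
        print(f"{emotion:12s} kappa = {kappa:.2f}")

# micro-F1 between the full multi-label matrices (annotator 2 as reference).
print("micro-F1 =", f1_score(v2, v1, average="micro"))
```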
<s>[INST] Community Identity and User Engagement in a Multi-Community Landscape [/INST] If each city is like a game of chess the day when I have learned the rules I shall finally possess my empire even if I shall never succeed in knowing all the cities it contains Italo Calvino Invisible Cities A communitys identitydefined through the common interests and shared experiences of its usersshapes various facets of the social dynamics within it BIBREF0 BIBREF1 BIBREF2 Numerous instances of this interplay between a communitys identity and social dynamics have been extensively studied in the context of individual online communities BIBREF3 BIBREF4 BIBREF5 However the sheer variety of online platforms complicates the task of generalizing insights beyond these isolated singlecommunity glimpses A new way to reason about the variation across multiple communities is needed in order to systematically characterize the relationship between properties of a community and the dynamics taking place within One especially important component of community dynamics is user engagement We can aim to understand why users join certain communities BIBREF6 what factors influence user retention BIBREF7 and how users react to innovation BIBREF5 While striking patterns of user engagement have been uncovered in prior case studies of individual communities BIBREF8 BIBREF9 BIBREF10 BIBREF11 BIBREF12 we do not know whether these observations hold beyond these cases or when we can draw analogies between different communities Are there certain types of communities where we can expect similar or contrasting engagement patterns To address such questions quantitatively we need to provide structure to the diverse and complex space of online communities Organizing the multicommunity landscape would allow us to both characterize individual points within this space and reason about systematic variations in patterns of user engagement across the space Present work Structuring the multicommunity space In order to systematically understand the relationship between community identityand user engagement we introduce a quantitative typology of online communities Our typology is based on two key aspects of community identity how distinctiveor nichea communitys interests are relative to other communities and how dynamicor volatilethese interests are over time These axes aim to capture the salience of a communitys identity and dynamics of its temporal evolution Our main insight in implementing this typology automatically and at scale is that the language used within a community can simultaneously capture how distinctive and dynamic its interests are This languagebased approach draws on a wealth of literature characterizing linguistic variation in online communities and its relationship to community and user identity BIBREF16 BIBREF5 BIBREF17 BIBREF18 BIBREF19 Basing our typology on language is also convenient since it renders our framework immediately applicable to a wide variety of online communities where communication is primarily recorded in a textual format Using our framework we map almost 300 Reddit communities onto the landscape defined by the two axes of our typology Section SECREF2 We find that this mapping induces conceptually sound categorizations that effectively capture key aspects of communitylevel social dynamics In particular we quantitatively validate the effectiveness of our mapping by showing that our twodimensional typology encodes signals that are predictive of communitylevel rates of user retention complementing strong 
activitybased features Engagement and community identity We apply our framework to understand how two important aspects of user engagement in a communitythe communitys propensity to retain its users Section SECREF3 and its permeability to new members Section SECREF4 vary according to the type of collective identity it fosters We find that communities that are characterized by specialized constantlyupdating content have higher user retention rates but also exhibit larger linguistic gaps that separate newcomers from established members More closely examining factors that could contribute to this linguistic gap we find that especially within distinctive communities established users have an increased propensity to engage with the communitys specialized content compared to newcomers Section SECREF5 Interestingly while established members of distinctive communities more avidly respond to temporal updates than newcomers in more generic communities it is the outsiders who engage more with volatile content perhaps suggesting that such content may serve as an entrypoint to the community but not necessarily a reason to stay Such insights into the relation between collective identity and user engagement can be informative to community maintainers seeking to better understand growth patterns within their online communities More generally our methodology stands as an example of how sociological questions can be addressed in a multicommunity setting In performing our analyses across a rich variety of communities we reveal both the diversity of phenomena that can occur as well as the systematic nature of this diversity A communitys identity derives from its members common interests and shared experiences BIBREF15 BIBREF20 In this work we structure the multicommunity landscape along these two key dimensions of community identity how distinctive a communitys interests are and how dynamic the community is over time We now proceed to outline our quantitative typology which maps communities along these two dimensions We start by providing an intuition through inspecting a few example communities We then introduce a generalizable languagebased methodology and use it to map a large set of Reddit communities onto the landscape defined by our typology of community identity In order to illustrate the diversity within the multicommunity space and to provide an intuition for the underlying structure captured by the proposed typology we first examine a few example communities and draw attention to some key social dynamics that occur within them We consider four communities from Reddit in Seahawks fans of the Seahawks football team gather to discuss games and players in BabyBumps expecting mothers trade advice and updates on their pregnancy Cooking consists of recipe ideas and general discussion about cooking while in pics users share various images of random things like eels and hornets We note that these communities are topically contrasting and foster fairly disjoint user bases Additionally these communities exhibit varied patterns of user engagement While Seahawks maintains a devoted set of users from month to month pics is dominated by transient users who post a few times and then depart Discussions within these communities also span varied sets of interests Some of these interests are more specific to the community than others risotto for example is seldom a discussion point beyond Cooking Additionally some interests consistently recur while others are specific to a particular time kitchens are a consistent 
focus point for cooking but mint is only in season during spring Coupling specificity and consistency we find interests such as easter which isnt particularly specific to BabyBumps but gains prominence in that community around Easter see Figure FIGREF3 A for further examples These specific interests provide a window into the nature of the communities interests as a whole and by extension their community identities Overall discussions in Cooking focus on topics which are highly distinctive and consistently recur like risotto In contrast discussions in Seahawks are highly dynamic rapidly shifting over time as new games occur and players are traded in and out In the remainder of this section we formally introduce a methodology for mapping communities in this space defined by their distinctiveness and dynamicity examples in Figure FIGREF3 B Our approach follows the intuition that a distinctive community will use language that is particularly specific or unique to that community Similarly a dynamic community will use volatile language that rapidly changes across successive windows of time To capture this intuition automatically we start by defining wordlevel measures of specificity and volatility We then extend these wordlevel primitives to characterize entire comments and the community itself Our characterizations of words in a community are motivated by methodology from prior literature that compares the frequency of a word in a particular setting to its frequency in some background distribution in order to identify instances of linguistic variation BIBREF21 BIBREF19 Our particular framework makes this comparison by way of pointwise mutual information PMI In the following we use INLINEFORM0 to denote one community within a set INLINEFORM1 of communities and INLINEFORM2 to denote one time period within the entire history INLINEFORM3 of INLINEFORM4 We account for temporal as well as intercommunity variation by computing wordlevel measures for each time period of each communitys history INLINEFORM5 Given a word INLINEFORM6 used within a particular community INLINEFORM7 at time INLINEFORM8 we define two wordlevel measures Specificity We quantify the specificity INLINEFORM0 of INLINEFORM1 to INLINEFORM2 by calculating the PMI of INLINEFORM3 and INLINEFORM4 relative to INLINEFORM5 INLINEFORM6 where INLINEFORM0 is INLINEFORM1 s frequency in INLINEFORM2 INLINEFORM3 is specific to INLINEFORM4 if it occurs more frequently in INLINEFORM5 than in the entire set INLINEFORM6 hence distinguishing this community from the rest A word INLINEFORM7 whose occurrence is decoupled from INLINEFORM8 and thus has INLINEFORM9 close to 0 is said to be generic We compute values of INLINEFORM0 for each time period INLINEFORM1 in INLINEFORM2 in the above description we drop the timebased subscripts for clarity Volatility We quantify the volatility INLINEFORM0 of INLINEFORM1 to INLINEFORM2 as the PMI of INLINEFORM3 and INLINEFORM4 relative to INLINEFORM5 the entire history of INLINEFORM6 INLINEFORM7 A word INLINEFORM0 is volatile at time INLINEFORM1 in INLINEFORM2 if it occurs more frequently at INLINEFORM3 than in the entire history INLINEFORM4 behaving as a fad within a small window of time A word that occurs with similar frequency across time and hence has INLINEFORM5 close to 0 is said to be stable Extending to utterances Using our wordlevel primitives we define the specificity of an utterance INLINEFORM0 in INLINEFORM1 INLINEFORM2 as the average specificity of each word in the utterance The volatility of utterances is 
defined analogously Having described these wordlevel measures we now proceed to establish the primary axes of our typology Distinctiveness A community with a very distinctive identity will tend to have distinctive interests expressed through specialized language Formally we define the distinctiveness of a community INLINEFORM0 as the average specificity of all utterances in INLINEFORM1 We refer to a community with a less distinctive identity as being generic Dynamicity A highly dynamic community constantly shifts interests from one time window to another and these temporal variations are reflected in its use of volatile language Formally we define the dynamicity of a community INLINEFORM0 as the average volatility of all utterances in INLINEFORM1 We refer to a community whose language is relatively consistent throughout time as being stable In our subsequent analyses we focus mostly on examing the average distinctiveness and dynamicity of a community over time denoted INLINEFORM0 and INLINEFORM1 We now explain how our typology can be applied to the particular setting of Reddit and describe the overall behaviour of our linguistic axes in this context Dataset description Reddit is a popular website where users form and participate in discussionbased communities called subreddits Within these communities users post contentsuch as images URLs or questionswhich often spark vibrant lengthy discussions in threadbased comment sections The website contains many highly active subreddits with thousands of active subscribers These communities span an extremely rich variety of topical interests as represented by the examples described earlier They also vary along a rich multitude of structural dimensions such as the number of users the amount of conversation and social interaction and the social norms determining which types of content become popular The diversity and scope of Reddits multicommunity ecosystem make it an ideal landscape in which to closely examine the relation between varying community identities and social dynamics Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014 for which there are at least 500 words in the vocabulary used to estimate our measures in at least 4 months of the subreddits history We compute our measures over the comments written by users in a community in time windows of months for each sufficiently active month and manually remove communities where the bulk of the contributions are in a foreign language This results in 283 communities INLINEFORM0 for a total of 4872 communitymonths INLINEFORM1 Estimating linguistic measures We estimate word frequencies INLINEFORM0 and by extension each downstream measure in a carefully controlled manner in order to ensure we capture robust and meaningful linguistic behaviour First we only consider toplevel comments which are initial responses to a post as the content of lowerlevel responses might reflect conventions of dialogue more than a communitys highlevel interests Next in order to prevent a few highly active users from dominating our frequency estimates we count each unique word once per user ignoring successive uses of the same word by the same user This ensures that our wordlevel characterizations are not skewed by a small subset of highly active contributors In our subsequent analyses we will only look at these measures computed over the nouns used in comments In principle our framework can be applied to any choice of vocabulary However in the case of Reddit using nouns provides a 
convenient degree of interpretability We can easily understand the implication of a community preferentially mentioning a noun such as gamer or feminist but interpreting the overuse of verbs or function words such as take or of is less straightforward Additionally in focusing on nouns we adopt the view emphasized in modern third wave accounts of sociolinguistic variation that stylistic variation is inseparable from topical content BIBREF23 In the case of online communities the choice of what people choose to talk about serves as a primary signal of social identity That said a typology based on more purely stylistic differences is an interesting avenue for future work Accounting for rare words One complication when using measures such as PMI which are based off of ratios of frequencies is that estimates for very infrequent words could be overemphasized BIBREF24 Words that only appear a few times in a community tend to score at the extreme ends of our measures eg as highly specific or highly generic obfuscating the impact of more frequent words in the community To address this issue we discard the long tail of infrequent words in our analyses using only the top 5th percentile of words by frequency within each INLINEFORM0 to score comments and communities Typology output on Reddit The distribution of INLINEFORM0 and INLINEFORM1 across Reddit communities is shown in Figure FIGREF3 B along with examples of communities at the extremes of our typology We find that interpretable groupings of communities emerge at various points within our axes For instance highly distinctive and dynamic communities tend to focus on rapidlyupdating interests like sports teams and games while generic and consistent communities tend to be large linksharing hubs where users generally post content with no clear dominating themes More examples of communities at the extremes of our typology are shown in Table TABREF9 We note that these groupings capture abstract properties of a communitys content that go beyond its topic For instance our typology relates topically contrasting communities such as yugioh which is about a popular trading card game and Seahawks through the shared trait that their content is particularly distinctive Additionally the axes can clarify differences between topically similar communities while startrek and thewalkingdead both focus on TV shows startrek is less dynamic than the median community while thewalkingdead is among the most dynamic communities as the show was still airing during the years considered We have seen that our typology produces qualitatively satisfying groupings of communities according to the nature of their collective identity This section shows that there is an informative and highly predictive relationship between a communitys position in this typology and its user engagement patterns We find that communities with distinctive and dynamic identities have higher rates of user engagement and further show that a communitys position in our identitybased landscape holds important predictive information that is complementary to a strong activity baseline In particular user retention is one of the most crucial aspects of engagement and is critical to community maintenance BIBREF2 We quantify how successful communities are at retaining users in terms of both short and longterm commitment Our results indicate that rates of user retention vary drastically yet systematically according to how distinctive and dynamic a community is Figure FIGREF3 We find a strong explanatory relationship 
between the temporal consistency of a communitys identity and rates of user engagement dynamic communities that continually update and renew their discussion content tend to have far higher rates of user engagement The relationship between distinctiveness and engagement is less universal but still highly informative niche communities tend to engender strong focused interest from users at one particular point in time though this does not necessarily translate into longterm retention We find that dynamic communities such as Seahawks or starcraft have substantially higher rates of monthly user retention than more stable communities Spearmans INLINEFORM0 070 INLINEFORM1 0001 computed with community points averaged over months Figure FIGREF11 A left Similarly more distinctive communities like Cooking and Naruto exhibit moderately higher monthly retention rates than more generic communities Spearmans INLINEFORM2 033 INLINEFORM3 0001 Figure FIGREF11 A right Monthly retention is formally defined as the proportion of users who contribute in month INLINEFORM0 and then return to contribute again in month INLINEFORM1 Each monthly datapoint is treated as unique and the trends in Figure FIGREF11 show 95 bootstrapped confidence intervals clusterresampled at the level of subreddit BIBREF25 to account for differences in the number of months each subreddit contributes to the data Importantly we find that in the task of predicting communitylevel user retention our identitybased typology holds additional predictive value on top of strong baseline features based on communitysize contributing users and activity levels mean contributions per user which are commonly used for churn prediction BIBREF7 We compared outofsample predictive performance via leaveonecommunityout cross validation using random forest regressors with ensembles of size 100 and otherwise default hyperparameters BIBREF26 A model predicting average monthly retention based on a communitys average distinctiveness and dynamicity achieves an average mean squared error INLINEFORM0 of INLINEFORM1 and INLINEFORM2 while an analogous model predicting based on a communitys size and average activity level both logtransformed achieves INLINEFORM4 and INLINEFORM5 The difference between the two models is not statistically significant INLINEFORM6 Wilcoxon signedrank test However combining features from both models results in a large and statistically significant improvement over each independent model INLINEFORM7 INLINEFORM8 INLINEFORM9 Bonferronicorrected pairwise Wilcoxon tests These results indicate that our typology can explain variance in communitylevel retention rates and provides information beyond what is present in standard activitybased features As with monthly retention we find a strong positive relationship between a communitys dynamicity and the average number of months that a user will stay in that community Spearmans INLINEFORM0 041 INLINEFORM1 0001 computed over all community points Figure FIGREF11 B left This verifies that the shortterm trend observed for monthly retention translates into longerterm engagement and suggests that longterm user retention might be strongly driven by the extent to which a community continually provides novel content Interestingly there is no significant relationship between distinctiveness and longterm engagement Spearmans INLINEFORM2 003 INLINEFORM3 077 Figure FIGREF11 B right Thus while highly distinctive communities like RandomActsOfMakeup may generate focused commitment from users over a short period of time 
such communities are unlikely to retain longterm users unless they also have sufficiently dynamic content To measure user tenures we focused on one slice of data May 2013 and measured how many months a user spends in each community on averagethe average number of months between a users first and last comment in each community We have activity data up until May 2015 so the maximum tenure is 24 months in this setup which is exceptionally long relative to the average community member throughout our entire data less than INLINEFORM0 of users have tenures of more than 24 months in any community The previous section shows that there is a strong connection between the nature of a communitys identity and its basic user engagement patterns In this section we probe the relationship between a communitys identity and how permeable or accessible it is to outsiders We measure this phenomenon using what we call the acculturation gap which compares the extent to which engaged vs nonengaged users employ communityspecific language While previous work has found this gap to be large and predictive of future user engagement in two beerreview communities BIBREF5 we find that the size of the acculturation gap depends strongly on the nature of a communitys identity with the gap being most pronounced in stable highly distinctive communities Figure FIGREF13 This finding has important implications for our understanding of online communities Though many works have analyzed the dynamics of linguistic belonging in online communities BIBREF16 BIBREF28 BIBREF5 BIBREF17 our results suggest that the process of linguistically fitting in is highly contingent on the nature of a communitys identity At one extreme in generic communities like pics or worldnews there is no distinctive linguistic identity for users to adopt To measure the acculturation gap for a community we follow DanescuNiculescuMizil et al danescuniculescumizilno2013 and build snapshot language models SLMs for each community which capture the linguistic state of a community at one point of time Using these language models we can capture how linguistically close a particular utterance is to the community by measuring the crossentropy of this utterance relative to the SLM DISPLAYFORM0 where INLINEFORM0 is the probability assigned to bigram INLINEFORM1 from comment INLINEFORM2 in communitymonth INLINEFORM3 We build the SLMs by randomly sampling 200 active usersdefined as users with at least 5 comments in the respective community and month For each of these 200 active users we select 5 random 10word spans from 5 unique comments To ensure robustness and maximize data efficiency we construct 100 SLMs for each communitymonth pair that has enough data bootstrapresampling from the set of active users We compute a basic measure of the acculturation gap for a communitymonth INLINEFORM0 as the relative difference of the crossentropy of comments by users active in INLINEFORM1 with that of singleton comments by outsidersie users who only ever commented once in INLINEFORM2 but who are still active in Reddit in general DISPLAYFORM0 INLINEFORM0 denotes the distribution over singleton comments INLINEFORM1 denotes the distribution over comments from users active in INLINEFORM2 and INLINEFORM3 the expected values of the crossentropy over these respective distributions For each bootstrapsampled SLM we compute the crossentropy of 50 comments by active users 10 comments from 5 randomly sampled active users who were not used to construct the SLM and 50 comments from randomlysampled 
outsiders Figure FIGREF13 A shows that the acculturation gap varies substantially with how distinctive and dynamic a community is Highly distinctive communities have far higher acculturation gaps while dynamicity exhibits a nonlinear relationship relatively stable communities have a higher linguistic entry barrier as do very dynamic ones Thus in communities like IAmA a general QA forum that are very generic with content that is highly but not extremely dynamic outsiders are at no disadvantage in matching the communitys language In contrast the acculturation gap is large in stable distinctive communities like Cooking that have consistent communityspecific language The gap is also large in extremely dynamic communities like Seahawks which perhaps require more attention or interest on the part of active users to keep uptodate with recent trends in content These results show that phenomena like the acculturation gap which were previously observed in individual communities BIBREF28 BIBREF5 cannot be easily generalized to a larger heterogeneous set of communities At the same time we see that structuring the space of possible communities enables us to observe systematic patterns in how such phenomena vary Through the acculturation gap we have shown that communities exhibit large yet systematic variations in their permeability to outsiders We now turn to understanding the divide in commenting behaviour between outsiders and active community members at a finer granularity by focusing on two particular ways in which such gaps might manifest among users through different levels of engagement with specific content and with temporally volatile content Echoing previous results we find that community type mediates the extent and nature of the divide in content affinity While in distinctive communities active members have a higher affinity for both communityspecific content and for highly volatile content the opposite is true for generic communities where it is the outsiders who engage more with volatile content We quantify these divides in content affinity by measuring differences in the language of the comments written by active users and outsiders Concretely for each community INLINEFORM0 we define the specificity gap INLINEFORM1 as the relative difference between the average specificity of comments authored by active members and by outsiders where these measures are macroaveraged over users Large positive INLINEFORM2 then occur in communities where active users tend to engage with substantially more communityspecific content than outsiders We analogously define the volatility gap INLINEFORM0 as the relative difference in volatilities of active member and outsider comments Large positive values of INLINEFORM1 characterize communities where active users tend to have more volatile interests than outsiders while negative values indicate communities where active users tend to have more stable interests We find that in 94 of communities INLINEFORM0 indicating somewhat unsurprisingly that in almost all communities active users tend to engage with more communityspecific content than outsiders However the magnitude of this divide can vary greatly for instance in Homebrewing which is dedicated to brewing beer the divide is very pronounced INLINEFORM1 033 compared to funny a large hub where users share humorous content INLINEFORM2 0011 The nature of the volatility gap is comparatively more varied In Homebrewing INLINEFORM0 016 as in 68 of communities active users tend to write more volatile comments than 
outsiders INLINEFORM1 0 However communities like funny INLINEFORM2 016 where active users contribute relatively stable comments compared to outsiders INLINEFORM3 0 are also wellrepresented on Reddit To understand whether these variations manifest systematically across communities we examine the relationship between divides in content affinity and community type In particular following the intuition that active users have a relatively high affinity for a communitys niche we expect that the distinctiveness of a community will be a salient mediator of specificity and volatility gaps Indeed we find a strong correlation between a communitys distinctiveness and its specificity gap Spearmans INLINEFORM0 034 INLINEFORM1 0001 We also find a strong correlation between distinctiveness and community volatility gaps Spearmans INLINEFORM0 053 INLINEFORM1 0001 In particular we see that among the most distinctive communities ie the top third of communities by distinctiveness active users tend to write more volatile comments than outsiders mean INLINEFORM2 0098 while across the most generic communities ie the bottom third active users tend to write more stable comments mean INLINEFORM3 0047 MannWhitney U test INLINEFORM4 0001 The relative affinity of outsiders for volatile content in these communities indicates that temporally ephemeral content might serve as an entry point into such a community without necessarily engaging users in the long term Our languagebased typology and analysis of user engagement draws on and contributes to several distinct research threads in addition to the many foundational studies cited in the previous sections Multicommunity studies Our investigation of user engagement in multicommunity settings follows prior literature which has examined differences in user and community dynamics across various online groups such as email listservs Such studies have primarily related variations in user behaviour to structural features such as group size and volume of content BIBREF30 BIBREF31 BIBREF32 BIBREF33 In focusing on the linguistic content of communities we extend this research by providing a contentbased framework through which user engagement can be examined Reddit has been a particularly useful setting for studying multiple communities in prior work Such studies have mostly focused on characterizing how individual users engage across a multicommunity platform BIBREF34 BIBREF35 or on specific user engagement patterns such as loyalty to particular communities BIBREF22 We complement these studies by seeking to understand how features of communities can mediate a broad array of user engagement patterns within them Typologies of online communities Prior attempts to typologize online communities have primarily been qualitative and based on handdesigned categories making them difficult to apply at scale These typologies often hinge on having some welldefined function the community serves such as supporting a business or nonprofit cause BIBREF36 which can be difficult or impossible to identify in massive anonymous multicommunity settings Other typologies emphasize differences in communication platforms and other functional requirements BIBREF37 BIBREF38 which are important but preclude analyzing differences between communities within the same multicommunity platform Similarly previous computational methods of characterizing multiple communities have relied on the presence of markers such as affixes in community names BIBREF35 or platformspecific affordances such as evaluation mechanisms 
BIBREF39 Our typology is also distinguished from community detection techniques that rely on structural or functional categorizations BIBREF40 BIBREF41 While the focus of those studies is to identify and characterize subcommunities within a larger social network our typology provides a characterization of predefined communities based on the nature of their identity Broader work on collective identity Our focus on community identity dovetails with a long line of research on collective identity and user engagement in both online and offline communities BIBREF42 BIBREF1 BIBREF2 These studies focus on individuallevel psychological manifestations of collective or social identity and their relationship to user engagement BIBREF42 BIBREF43 BIBREF44 BIBREF0 In contrast we seek to characterize community identities at an aggregate level and in an interpretable manner with the goal of systematically organizing the diverse space of online communities Typologies of this kind are critical to these broader socialpsychological studies of collective identity they allow researchers to systematically analyze how the psychological manifestations and implications of collective identity vary across diverse sets of communities Our current understanding of engagement patterns in online communities is patched up from glimpses offered by several disparate studies focusing on a few individual communities This work calls into attention the need for a method to systematically reason about similarities and differences across communities By proposing a way to structure the multicommunity space we find not only that radically contrasting engagement patterns emerge in different parts of this space but also that this variation can be at least partly explained by the type of identity each community fosters Our choice in this work is to structure the multicommunity space according to a typology based on community identity as reflected in language use We show that this effectively explains crosscommunity variation of three different user engagement measuresretention acculturation and content affinityand complements measures based on activity and size with additional interpretable information For example we find that in niche communities established members are more likely to engage with volatile content than outsiders while the opposite is true in generic communities Such insights can be useful for community maintainers seeking to understand engagement patterns in their own communities One main area of future research is to examine the temporal dynamics in the multicommunity landscape By averaging our measures of distinctiveness and dynamicity across time our present study treated community identity as a static property However as communities experience internal changes and respond to external events we can expect the nature of their identity to shift as well For instance the relative consistency of harrypotter may be disrupted by the release of a new novel while Seahawks may foster different identities during and between football seasons Conversely a communitys type may also mediate the impact of new events Moving beyond a static view of community identity could enable us to better understand how temporal phenomena such as linguistic change manifest across different communities and also provide a more nuanced view of user engagementfor instance are communities more welcoming to newcomers at certain points in their lifecycle Another important avenue of future work is to explore other ways of mapping the landscape of online 
communities For example combining structural properties of communities BIBREF40 with topical information BIBREF35 and with our identitybased measures could further characterize and explain variations in user engagement patterns Furthermore extending the present analyses to even more diverse communities supported by different platforms eg GitHub StackExchange Wikipedia could enable the characterization of more complex behavioral patterns such as collaboration and altruism which become salient in different multicommunity landscapes The authors thank Liye Fu Jack Hessel David Jurgens and Lillian Lee for their helpful comments This research has been supported in part by a Discovery and Innovation Research Seed Award from the Office of the Vice Provost for Research at Cornell NSF CNS1010921 IIS1149837 IIS1514268 NIH BD2K ARO MURI DARPA XDATA DARPA SIMPLEX DARPA NGS2 Stanford Data Science Initiative SAP Stanford Graduate Fellowship NSERC PGSD Boeing Lightspeed and Volkswagen </s>
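The word-level specificity and volatility measures defined in the community-identity study above reduce to two PMI computations over word counts. The sketch below is a minimal illustration in Python, assuming counts have already been aggregated per community and per community-month with each word counted once per user, as described in the text; the function names and data layout are illustrative and are not taken from the authors' code.

```python
import math

def pmi(count_in_context, total_in_context, count_in_background, total_in_background):
    # PMI of a word with a context, relative to a background distribution
    p_context = count_in_context / total_in_context
    p_background = count_in_background / total_in_background
    return math.log(p_context / p_background)

def specificity(word, community_counts, all_community_counts):
    # how specific the word is to this community relative to all communities
    return pmi(community_counts[word], sum(community_counts.values()),
               all_community_counts[word], sum(all_community_counts.values()))

def volatility(word, month_counts, history_counts):
    # how specific the word is to this month relative to the community's whole history
    return pmi(month_counts[word], sum(month_counts.values()),
               history_counts[word], sum(history_counts.values()))

def utterance_specificity(tokens, community_counts, all_community_counts):
    # average word-level specificity of an utterance; the distinctiveness of a
    # community is then the average of this quantity over all of its utterances
    scores = [specificity(w, community_counts, all_community_counts)
              for w in tokens if w in community_counts]
    return sum(scores) / len(scores) if scores else 0.0
```

Dynamicity would be computed analogously by averaging utterance-level volatility, and in practice the long tail of infrequent words would be discarded before scoring, as the study does with its frequency cutoff.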
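The acculturation gap measured in the same study compares how well a snapshot language model predicts comments by active members versus singleton comments by outsiders. The sketch below is a rough illustration assuming the snapshot model is given as a dictionary of smoothed bigram probabilities; the unseen-bigram floor and the particular normalization of the relative difference are assumptions made here for concreteness, since the exact formula appears only as a placeholder (DISPLAYFORM0) in the text.

```python
import math

def comment_cross_entropy(bigrams, slm_probs, floor=1e-6):
    # average negative log-probability of a comment's bigrams under the
    # snapshot language model; unseen bigrams fall back to a small floor (assumption)
    logs = [math.log(slm_probs.get(bg, floor)) for bg in bigrams]
    return -sum(logs) / len(logs)

def acculturation_gap(outsider_comments, active_comments, slm_probs):
    # relative difference between the mean cross-entropy of singleton outsider
    # comments and of comments written by active members
    h_out = sum(comment_cross_entropy(c, slm_probs) for c in outsider_comments) / len(outsider_comments)
    h_act = sum(comment_cross_entropy(c, slm_probs) for c in active_comments) / len(active_comments)
    return (h_out - h_act) / (h_out + h_act)   # assumed form of the relative difference
```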
<s>[INST] Question Answering based Clinical Text Structuring Using Pre-trained Language Model [/INST] Clinical text structuring CTS is a critical task for fetching medical research data from electronic health records EHRs where structural patient medical data such as whether the patient has specific symptoms diseases or what the tumor size is how far from the tumor is cut at during the surgery or what the specific laboratory test result is are obtained It is important to extract structured data from clinical text because biomedical systems or biomedical researches greatly rely on structured data but they cannot obtain them directly In addition clinical text often contains abundant healthcare information CTS is able to provide largescale extracted structured data for enormous downstream clinical researches However endtoend CTS is a very challenging task Different CTS tasks often have nonuniform output formats such as specificclass classifications eg tumor stage strings in the original text eg result for a laboratory test and inferred values from part of the original text eg calculated tumor size Researchers have to construct different models for it which is already costly and hence it calls for a lot of labeled data for each model Moreover labeling necessary amount of data for training neural network requires expensive labor cost To handle it researchers turn to some rulebased structuring methods which often have lower labor cost Traditionally CTS tasks can be addressed by rule and dictionary based methods BIBREF0 BIBREF1 BIBREF2 taskspecific endtoend methods BIBREF3 BIBREF4 BIBREF5 BIBREF6 and pipeline methods BIBREF7 BIBREF8 BIBREF9 Rule and dictionary based methods suffer from costly humandesigned extraction rules while taskspecific endtoend methods have nonuniform output formats and require taskspecific training dataset Pipeline methods break down the entire process into several pieces which improves the performance and generality However when the pipeline depth grows error propagation will have a greater impact on the performance To reduce the pipeline depth and break the barrier of nonuniform output formats we present a question answering based clinical text structuring QACTS task see Fig FIGREF1 Unlike the traditional CTS task our QACTS task aims to discover the most related text from original paragraph text For some cases it is already the final answer in deed eg extracting substring While for other cases it needs several steps to obtain the final answer such as entity names conversion and negative words recognition Our presented QACTS task unifies the output format of the traditional CTS task and make the training data shareable thus enriching the training data The main contributions of this work can be summarized as follows We first present a question answering based clinical text structuring QACTS task which unifies different specific tasks and make dataset shareable We also propose an effective model to integrate clinical named entity information into pretrained language model Experimental results show that QACTS task leads to significant improvement due to shared dataset Our proposed model also achieves significantly better performance than the strong baseline methods In addition we also show that twostage training mechanism has a great improvement on QACTS task The rest of the paper is organized as follows We briefly review the related work on clinical text structuring in Section SECREF2 Then we present question answer based clinical text structuring task in Section SECREF3 In 
Section SECREF4 we present an effective model for this task Section SECREF5 is devoted to computational studies and several investigations on the key issues of our proposed model Finally conclusions are given in Section SECREF6 Clinical text structuring is a final problem which is highly related to practical applications Most of existing studies are casebycase Few of them are developed for the general purpose structuring task These studies can be roughly divided into three categories rule and dictionary based methods taskspecific endtoend methods and pipeline methods Rule and dictionary based methods BIBREF0 BIBREF1 BIBREF2 rely extremely on heuristics and handcrafted extraction rules which is more of an art than a science and incurring extensive trialanderror experiments Fukuda et al BIBREF0 identified protein names from biological papers by dictionaries and several features of protein names Wang et al BIBREF1 developed some linguistic rules ie normalisedexpanded term matching and substring term matching to map specific terminology to SNOMED CT Song et al BIBREF2 proposed a hybrid dictionarybased bioentity extraction technique and expands the bioentity dictionary by combining different data sources and improves the recall rate through the shortest path edit distance algorithm This kind of approach features its interpretability and easy modifiability However with the increase of the rule amount supplementing new rules to existing system will turn to be a rule disaster Taskspecific endtoend methods BIBREF3 BIBREF4 use large amount of data to automatically model the specific task Topaz et al BIBREF3 constructed an automated wound information identification model with five output Tan et al BIBREF4 identified patients undergoing radical cystectomy for bladder cancer Although they achieved good performance none of their models could be used to another task due to output format difference This makes building a new model for a new task a costly job Pipeline methods BIBREF7 BIBREF8 BIBREF9 break down the entire task into several basic natural language processing tasks Bill et al BIBREF7 focused on attributes extraction which mainly relied on dependency parsing and named entity recognition BIBREF10 BIBREF11 BIBREF12 Meanwhile Fonferko et al BIBREF9 used more components like noun phrase chunking BIBREF13 BIBREF14 BIBREF15 partofspeech tagging BIBREF16 BIBREF17 BIBREF18 sentence splitter named entity linking BIBREF19 BIBREF20 BIBREF21 relation extraction BIBREF22 BIBREF23 This kind of method focus on language itself so it can handle tasks more general However as the depth of pipeline grows it is obvious that error propagation will be more and more serious In contrary using less components to decrease the pipeline depth will lead to a poor performance So the upper limit of this method depends mainly on the worst component Recently some works focused on pretrained language representation models to capture language information from text and then utilizing the information to improve the performance of specific natural language processing tasks BIBREF24 BIBREF25 BIBREF26 BIBREF27 which makes language model a shared model to all natural language processing tasks Radford et al BIBREF24 proposed a framework for finetuning pretrained language model Peters et al BIBREF25 proposed ELMo which concatenates forward and backward language models in a shallow manner Devlin et al BIBREF26 used bidirectional Transformers to model deep interactions between the two directions Yang et al BIBREF27 replaced the fixed 
forward or backward factorization order with all possible permutations of the factorization order and avoided using the MASK tag which causes pretrainfinetune discrepancy that BERT is subject to The main motivation of introducing pretrained language model is to solve the shortage of labeled data and polysemy problem Although polysemy problem is not a common phenomenon in biomedical domain shortage of labeled data is always a nontrivial problem Lee et al BIBREF28 applied BERT on largescale biomedical unannotated data and achieved improvement on biomedical named entity recognition relation extraction and question answering Kim et al BIBREF29 adapted BioBERT into multitype named entity recognition and discovered new entities Both of them demonstrates the usefulness of introducing pretrained language model into biomedical domain Given a sequence of paragraph text Xx1 x2 xn clinical text structuring CTS can be regarded to extract or generate a keyvalue pair where key Q is typically a query term such as proximal resection margin and value V is a result of query term Q according to the paragraph text X Generally researchers solve CTS problem in two steps Firstly the answerrelated text is pick out And then several steps such as entity names conversion and negative words recognition are deployed to generate the final answer While final answer varies from task to task which truly causes nonuniform output formats finding the answerrelated text is a common action among all tasks Traditional methods regard both the steps as a whole In this paper we focus on finding the answerrelated substring Xs Xi Xi1 Xi2 Xj 1 i j n from paragraph text X For example given sentence UTF8gkai115cm170cm60cm80cm Distal gastrectomy specimen measuring 115cm in length along the lesser curvature 170cm in length along the greater curvature 60cm from the proximal resection margin and 80cm from the distal resection margin and query UTF8gkaiproximal resection margin the answer should be 60cm which is located in original text from index 32 to 37 With such definition it unifies the output format of CTS tasks and therefore make the training data shareable in order to reduce the training data quantity requirement Since BERT BIBREF26 has already demonstrated the usefulness of shared model we suppose extracting commonality of this problem and unifying the output format will make the model more powerful than dedicated model and meanwhile for a specific clinical task use the data for other tasks to supplement the training data In this section we present an effective model for the question answering based clinical text structuring QACTS As shown in Fig FIGREF8 paragraph text X is first passed to a clinical named entity recognition CNER model BIBREF12 to capture named entity information and obtain onehot CNER output tagging sequence for query text Inq and paragraph text Int with BIEOS Begin Inside End Outside Single tag scheme Inq and Int are then integrated together into In Meanwhile the paragraph text X and query text Q are organized and passed to contextualized representation model which is pretrained language model BERT BIBREF26 here to obtain the contextualized representation vector Vs of both text and query Afterwards Vs and In are integrated together and fed into a feed forward network to calculate the start and end index of answerrelated text Here we define this calculation problem as a classification for each word to be the start or end word For any clinical freetext paragraph X and query Q contextualized representation is to 
generate the encoded vector of both of them Here we use pretrained language model BERTbase BIBREF26 model to capture contextual information The text input is constructed as CLS Q SEP X SEP For Chinese sentence each word in this input will be mapped to a pretrained embedding ei To tell the model Q and X is two different sentence a sentence type input is generated which is a binary label sequence to denote what sentence each character in the input belongs to Positional encoding and mask matrix is also constructed automatically to bring in absolute position information and eliminate the impact of zero padding respectively Then a hidden vector Vs which contains both query and text information is generated through BERTbase model Since BERT is trained on general corpus its performance on biomedical domain can be improved by introducing biomedical domainspecific features In this paper we introduce clinical named entity information into the model The CNER task aims to identify and classify important clinical terms such as diseases symptoms treatments exams and body parts from Chinese EHRs It can be regarded as a sequence labeling task A CNER model typically outputs a sequence of tags Each character of the original sentence will be tagged a label following a tag scheme In this paper we recognize the entities by the model of our previous work BIBREF12 but trained on another corpus which has 44 entity types including operations numbers unit words examinations symptoms negative words etc An illustrative example of named entity information sequence is demonstrated in Table TABREF2 In Table TABREF2 UTF8gkai is tagged as an operation 115 is a number word and cm is an unit word The named entity tag sequence is organized in onehot type We denote the sequence for clinical sentence and query term as Int and Inq respectively There are two ways to integrate two named entity information vectors Int and Inq or hidden contextualized representation Vs and named entity information In where In Int Inq The first one is to concatenate them together because they have sequence output with a common dimension The second one is to transform them into a new hidden representation For the concatenation method the integrated representation is described as follows While for the transformation method we use multihead attention BIBREF30 to encode the two vectors It can be defined as follows where h is the number of heads and Wo is used to projects back the dimension of concatenated matrix Attention denotes the traditional attention and it can be defined as follows where dk is the length of hidden vector The final step is to use integrated representation Hi to predict the start and end index of answerrelated text Here we define this calculation problem as a classification for each word to be the start or end word We use a feed forward network FFN to compress and calculate the score of each word Hf which makes the dimension to leftlangle ls 2rightrangle where ls denotes the length of sequence Then we permute the two dimensions for softmax calculation The calculation process of loss function can be defined as followed where Os softmaxpermuteHf0 denotes the probability score of each word to be the start word and similarly Oe softmaxpermuteHf1 denotes the end ys and ye denotes the true answer of the output for start word and end word respectively Twostage training mechanism is previously applied on bilinear model in finegrained visual recognition BIBREF31 BIBREF32 BIBREF33 Two CNNs are deployed in the model One is trained at first for 
coarsegraind features while freezing the parameter of the other Then unfreeze the other one and train the entire model in a low learning rate for fetching finegrained features Inspired by this and due to the large amount of parameters in BERT model to speed up the training process we fine tune the BERT model with new prediction layer first to achieve a better contextualized representation performance Then we deploy the proposed model and load the fine tuned BERT weights attach named entity information layers and retrain the model In this section we devote to experimentally evaluating our proposed task and approach The best results in tables are in bold Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery Ruijin Hospital It contains 17833 sentences 826987 characters and 2714 questionanswer pairs All questionanswer pairs are annotated and reviewed by four clinicians with three types of questions namely tumor size proximal resection margin and distal resection margin These annotated instances have been partitioned into 1899 training instances 12412 sentences and 815 test instances 5421 sentences Each instance has one or several sentences Detailed statistics of different types of entities are listed in Table TABREF20 In the following experiments two widelyused performance measures ie EMscore BIBREF34 and macroaveraged F1score BIBREF35 are used to evaluate the methods The Exact Match EMscore metric measures the percentage of predictions that match any one of the ground truth answers exactly The F1score metric is a looser metric measures the average overlap between the prediction and ground truth answer To implement deep neural network models we utilize the Keras library BIBREF36 with TensorFlow BIBREF37 backend Each model is run on a single NVIDIA GeForce GTX 1080 Ti GPU The models are trained by Adam optimization algorithm BIBREF38 whose parameters are the same as the default settings except for learning rate set to 5times 105 Batch size is set to 3 or 4 due to the lack of graphical memory We select BERTbase as the pretrained language model in this paper Due to the high cost of pretraining BERT language model we directly adopt parameters pretrained by Google in Chinese general corpus The named entity recognition is applied on both pathology report texts and query texts Since BERT has already achieved the stateoftheart performance of questionanswering in this section we compare our proposed model with stateoftheart question answering models ie QANet BIBREF39 and BERTBase BIBREF26 As BERT has two versions BERTBase and BERTLarge due to the lack of computational resource we can only compare with BERTBase model instead of BERTLarge Prediction layer is attached at the end of the original BERTBase model and we fine tune it on our dataset In this section the named entity integration method is chosen to pure concatenation Concatenate the named entity information on pathology report text and query text first and then concatenate contextualized representation and concatenated named entity information Comparative results are summarized in Table TABREF23 Table TABREF23 indicates that our proposed model achieved the best performance both in EMscore and F1score with EMscore of 9184 and F1score of 9375 QANet outperformed BERTBase with 356 score in F1score but underperformed it with 075 score in EMscore Compared with BERTBase our model led to a 564 performance improvement in EMscore and 369 in F1score Although our model didnt outperform much with 
QANet in F1score only 013 our model significantly outperformed it with 639 score in EMscore To further investigate the effects of named entity information and twostage training mechanism for our model we apply ablation analysis to see the improvement brought by each of them where times refers to removing that part from our model As demonstrated in Table TABREF25 with named entity information enabled twostage training mechanism improved the result by 436 in EMscore and 38 in F1score Without twostage training mechanism named entity information led to an improvement by 128 in EMscore but it also led to a weak deterioration by 012 in F1score With both of them enabled our proposed model achieved a 564 score improvement in EMscore and a 369 score improvement in F1score The experimental results show that both named entity information and twostage training mechanism are helpful to our model There are two methods to integrate named entity information into existing model we experimentally compare these two integration methods As named entity recognition has been applied on both pathology report text and query text there will be two integration here One is for two named entity information and the other is for contextualized representation and integrated named entity information For multihead attention BIBREF30 we set heads number h 16 with 256dimension hidden vector size for each head From Table TABREF27 we can observe that applying concatenation on both periods achieved the best performance on both EMscore and F1score Unfortunately applying multihead attention on both period one and period two can not reach convergence in our experiments This probably because it makes the model too complex to train The difference on other two methods are the order of concatenation and multihead attention Applying multihead attention on two named entity information Int and Inq first achieved a better performance with 8987 in EMscore and 9288 in F1score Applying Concatenation first can only achieve 8074 in EMscore and 8442 in F1score This is probably due to the processing depth of hidden vectors and dataset size BERTs output has been modified after many layers but named entity information representation is very close to input With big amount of parameters in multihead attention it requires massive training to find out the optimal parameters However our dataset is significantly smaller than what pretrained BERT uses This probably can also explain why applying multihead attention method on both periods can not converge Although Table TABREF27 shows the best integration method is concatenation multihead attention still has great potential Due to the lack of computational resources our experiment fixed the head number and hidden vector size However tuning these hyper parameters may have impact on the result Tuning integration method and try to utilize larger datasets may give help to improving the performance To investigate how shared task and shared model can benefit we split our dataset by query types train our proposed model with different datasets and demonstrate their performance on different datasets Firstly we investigate the performance on model without twostage training and named entity information As indicated in Table TABREF30 The model trained by mixed data outperforms 2 of the 3 original tasks in EMscore with 8155 for proximal resection margin and 8685 for distal resection margin The performance on tumor size declined by 157 score in EMscore and 314 score in F1score but they were still above 90 069 and 037 
score improvement in EMscore was brought by shared model for proximal and distal resection margin prediction Meanwhile F1score for those two tasks declined 311 and 077 score Then we investigate the performance on the model with twostage training and named entity information In this experiment the pretraining process only uses the specific dataset not the mixed data From Table TABREF31 we can observe that the performance on proximal and distal resection margin achieved the best results on both EMscore and F1score Compared with Table TABREF30 the best performance on proximal resection margin improved by 69 in EMscore and 794 in F1score Meanwhile the best performance on distal resection margin improved by 556 in EMscore and 632 in F1score Other performances also usually improved a lot This proves the usefulness of twostage training and named entity information as well Lastly we fine tune the model for each task with a pretrained parameter Table TABREF32 summarizes the result Comparing Table TABREF32 with Table TABREF31 using mixeddata pretrained parameters can significantly improve the model performance compared with the taskspecific data trained model Except tumor size the result was improved by 052 score in EMscore 139 score in F1score for proximal resection margin and 26 score in EMscore 296 score in F1score for distal resection margin This proves mixeddata pretrained parameters can lead to a great benefit for a specific task Meanwhile the model performance on other tasks which are not trained in the final stage was also improved from around 0 to 60 or 70 percent This proves that there is commonality between different tasks and our proposed QACTS task makes this learnable In conclusion to achieve the best performance for a specific dataset pretraining the model in multiple datasets and then fine tuning the model on the specific dataset is the best way In this paper we present a question answering based clinical text structuring QACTS task which unifies different clinical text structuring tasks and utilizes different datasets A novel model is also proposed to integrate named entity information into a pretrained language model and adapt it to QACTS task Initially sequential results of named entity recognition on both paragraph and query texts are integrated together Contextualized representation on both paragraph and query texts are transformed by a pretrained language model Then the integrated named entity information and contextualized representation are integrated together and fed into a feed forward network for final prediction Experimental results on a realworld dataset demonstrate that our proposed model competes favorably with strong baseline models in all three specific tasks The shared task and shared model introduced by QACTS task have also been proved to be useful for improving the performance on most of the taskspecific datasets In conclusion the best way to achieve the best performance for a specific dataset is to pretrain the model in multiple datasets and then fine tune it on the specific dataset We would like to thank Ting Li and Xizhou Hong Ruijin Hospital who have helped us very much in data fetching and data cleansing This work is supported by the National Key RD Program of China for Precision Medical Research No 2018YFC0910500 </s>
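The prediction layer of the QACTS model described above takes the contextualized representation together with the one-hot named entity sequences and scores every token as a possible start or end of the answer-related text. The sketch below shows the simple concatenation variant in PyTorch; it assumes the BERT output and the CNER tag vectors are precomputed, and the class name, default dimensions, and tag count are illustrative assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn

class SpanPredictionHead(nn.Module):
    # concatenate contextual vectors with one-hot entity tags and score each
    # token as the start or the end of the answer-related span
    def __init__(self, hidden_size=768, num_tag_dims=177):
        super().__init__()
        # 177 is a hypothetical BIEOS tag count for 44 entity types
        self.ffn = nn.Linear(hidden_size + num_tag_dims, 2)

    def forward(self, bert_out, entity_onehot, start_gold=None, end_gold=None):
        # bert_out: (batch, seq_len, hidden); entity_onehot: (batch, seq_len, tags)
        h = torch.cat([bert_out, entity_onehot], dim=-1)
        start_logits, end_logits = self.ffn(h).unbind(dim=-1)  # each (batch, seq_len)
        if start_gold is None:
            return start_logits.argmax(dim=-1), end_logits.argmax(dim=-1)
        # start/end prediction treated as classification over sequence positions
        loss = nn.functional.cross_entropy(start_logits, start_gold) + \
               nn.functional.cross_entropy(end_logits, end_gold)
        return loss
```

Under the two-stage mechanism described above, the BERT encoder would first be fine-tuned with only a prediction layer of this kind attached, and the named entity inputs would be added in a second round of training.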
<s>[INST] Progress and Tradeoffs in Neural Language Models [/INST] Deep learning has unquestionably advanced the state of the art in many natural language processing tasks from syntactic dependency parsing BIBREF0 to namedentity recognition BIBREF1 to machine translation BIBREF2 The same certainly applies to language modeling where recent advances in neural language models NLMs have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 BIBREF4 Specifically focused on language modeling this paper examines an issue that to our knowledge has not been explored advances in neural language models have come at a significant cost in terms of increased computational complexity Computing the probability of a token sequence using nonneural techniques requires a number of phrase lookups and perhaps a few arithmetic operations whereas model inference with NLMs require large matrix multiplications consuming perhaps millions of floating point operations FLOPs These performance tradeoffs are worth discussing In truth language models exist in a qualityperformance tradeoff space As model quality increases eg lower perplexity performance as measured in terms of energy consumption query latency etc tends to decrease For applications primarily running in the cloudsay machine translationpractitioners often solely optimize for the lowest perplexity This is because such applications are embarrassingly parallel and hence trivial to scale in a data center environment There are however applications of NLMs that require less onesided optimizations On mobile devices such as smartphones and tablets for example NLMs may be integrated into software keyboards for nextword prediction allowing much faster text entry Popular Android apps that enthusiastically tout this technology include SwiftKey and Swype The greater computational costs of NLMs lead to higher energy usage in model inference translating into shorter battery life In this paper we examine the qualityperformance tradeoff in the shift from nonneural to neural language models In particular we compare KneserNey smoothing widely accepted as the state of the art prior to NLMs to the best NLMs today The decrease in perplexity on standard datasets has been well documented BIBREF3 but to our knowledge no one has examined the performances tradeoffs With deployment on a mobile device in mind we evaluate energy usage and inference latency on a Raspberry Pi which shares the same ARM architecture as nearly all smartphones today We find that a 25 times reduction in perplexity on PTB comes at a staggering cost in terms of performance inference with NLMs takes 49 times longer and requires 32 times more energy Furthermore we find that impressive reductions in perplexity translate into at best modest improvements in nextword prediction which is arguable a better metric for evaluating software keyboards on a smartphone The contribution of this paper is the first known elucidation of this qualityperformance tradeoff Note that we refrain from prescriptive recommendations whether or not a tradeoff is worthwhile depends on the application Nevertheless NLP engineers should arguably keep these tradeoffs in mind when selecting a particular operating point BIBREF3 evaluate recent neural language models however their focus is not on the computational footprint of each model but rather the perplexity To further reduce perplexity many neural language model extensions exist such as continuous cache pointer BIBREF5 and mixture of softmaxes BIBREF6 
Since our focus is on comparing core neural and nonneural approaches, we disregard these extra optimization techniques in all of our models. Other work focuses on designing lightweight models for resource-efficient inference on mobile devices: BIBREF7 explore LSTMs BIBREF8 with binary weights for language modeling, and BIBREF9 examine shallow feedforward neural networks for natural language processing. AWDLSTM: BIBREF4 show that a simple three-layer LSTM, with proper regularization and optimization techniques, can achieve state of the art on various language modeling datasets, surpassing more complex models. Specifically, BIBREF4 apply randomized backpropagation through time, variational dropout, activation regularization, embedding dropout, and temporal activation regularization. A novel scheduler for optimization, non-monotonically triggered ASGD (NTASGD), is also introduced. BIBREF4 name their three-layer LSTM model trained with such tricks AWDLSTM. Quasi-Recurrent Neural Networks: Quasi-recurrent neural networks (QRNNs) BIBREF10 achieve current state of the art in word-level language modeling BIBREF11. A quasi-recurrent layer comprises two separate parts: a convolution layer with three weights and a recurrent pooling layer. Given an input $\mathbf{X} \in \mathbb{R}^{k \times n}$, the convolution layer is
$$\mathbf{Z} = \tanh(\mathbf{W}_z \cdot \mathbf{X})$$
$$\mathbf{F} = \sigma(\mathbf{W}_f \cdot \mathbf{X})$$
$$\mathbf{O} = \sigma(\mathbf{W}_o \cdot \mathbf{X})$$
where $\sigma$ denotes the sigmoid function, $\cdot$ represents masked convolution across time, and $\mathbf{W}_{\{z, f, o\}} \in \mathbb{R}^{m \times k \times r}$ are convolution weights with $k$ input channels, $m$ output channels, and a window size of $r$. In the recurrent pooling layer, the convolution outputs are combined sequentially:
$$\mathbf{c}_t = \mathbf{f}_t \odot \mathbf{c}_{t-1} + (1 - \mathbf{f}_t) \odot \mathbf{z}_t$$
$$\mathbf{h}_t = \mathbf{o}_t \odot \mathbf{c}_t$$
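As an illustration, here is a minimal NumPy sketch of a single quasi-recurrent layer with fo-pooling. It mirrors the equations above but simplifies the masked convolution to a window size of r = 1 (a per-timestep projection), so it is a stand-in for the idea rather than the authors' implementation; the toy shapes at the bottom are arbitrary.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def qrnn_layer(X, Wz, Wf, Wo):
    """Minimal fo-pooling QRNN layer (window size r = 1 for simplicity).

    X:          input sequence of shape (k, n) -- k channels, n time steps
    Wz, Wf, Wo: weight matrices of shape (m, k)
    Returns hidden states of shape (m, n).
    """
    # With r = 1 the masked convolution degenerates to a linear map per step.
    Z = np.tanh(Wz @ X)   # candidate vectors
    F = sigmoid(Wf @ X)   # forget gates
    O = sigmoid(Wo @ X)   # output gates

    m, n = Z.shape
    c = np.zeros(m)
    H = np.zeros((m, n))
    for t in range(n):    # the only sequential part: recurrent pooling
        c = F[:, t] * c + (1.0 - F[:, t]) * Z[:, t]
        H[:, t] = O[:, t] * c
    return H

# Toy usage with assumed sizes:
rng = np.random.default_rng(0)
k, m, n = 4, 8, 10
H = qrnn_layer(rng.normal(size=(k, n)),
               rng.normal(size=(m, k)),
               rng.normal(size=(m, k)),
               rng.normal(size=(m, k)))
print(H.shape)  # (8, 10)
```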
Multiple QRNN layers can be stacked for deeper hierarchical representation with the output mathbf h1t being fed as the input into the subsequent layer In language modeling a fourlayer QRNN is a standard architecture BIBREF11 PerplexityRecall Scale Wordlevel perplexity does not have a strictly monotonic relationship with recallat k the fraction of top k predictions that contain the correct word A given R k imposes a weak minimum perplexity constraintthere are many free parameters that allow for large variability in the perplexity given a certain R k Consider the corpus choo choo train with an associated unigram model Ptextchoo 01 Ptexttrain 09 resulting in an R1 of 13 and perplexity of 48 Clearly R1 13 for all Ptextchoo le 05 thus perplexity can drop as low as 2 without affecting recall We conducted our experiments on Penn Treebank PTB BIBREF12 and WikiText103 WT103 BIBREF13 Preprocessed by BIBREF14 PTB contains 887K tokens for training 70K for validation and 78K for test with a vocabulary size of 10000 On the other hand WT103 comprises 103 million tokens for training 217K for validation and 245K for test spanning a vocabulary of 267K unique tokens For the neural language model we used a fourlayer QRNN BIBREF10 which achieves stateoftheart results on a variety of datasets such as WT103 BIBREF11 and PTB To compare against more common LSTM architectures we also evaluated AWDLSTM BIBREF4 on PTB For the nonneural approach we used a standard fivegram model with modified KneserNey smoothing BIBREF15 as explored in BIBREF16 on PTB We denote the QRNN models for PTB and WT103 as ptbqrnn and wt103qrnn respectively For each model we examined wordlevel perplexity R3 in nextword prediction latency msq and energy usage mJq To explore the perplexityrecall relationship we collected individual perplexity and recall statistics for each sentence in the test set The QRNN models followed the exact training procedure and architecture delineated in the official codebase from BIBREF11 For ptbqrnn we trained the model for 550 epochs using NTASGD BIBREF4 then finetuned for 300 epochs using ASGD BIBREF17 all with a learning rate of 30 throughout For wt103qrnn we followed BIBREF11 and trained the QRNN for 14 epochs using the Adam optimizer with a learning rate of 103 We also applied regularization techniques from BIBREF4 all the specific hyperparameters are the same as those in the repository Our model architecture consists of 400dimensional tied embedding weights BIBREF18 and four QRNN layers with 1550 hidden units per layer on PTB and 2500 per layer on WT103 Both QRNN models have window sizes of r2 for the first layer and r1 for the rest For the KN5 model we trained an offtheshelf fivegram model using the popular SRILM toolkit BIBREF19 We did not specify any special hyperparameters We trained the QRNNs with PyTorch 040 commit 1807bac on a Titan V GPU To evaluate the models under a resourceconstrained environment we deployed them on a Raspberry Pi 3 Model B running Raspbian Stretch 4941v7 The Raspberry Pi RPi is not only a standard platform but also a close surrogate to mobile phones using the same CortexA7 in many phones We then transferred the trained models to the RPi using the same frameworks for evaluation We plugged the RPi into a Watts Up Pro meter a power meter that can be read programatically over USB at a frequency of 1 Hz For the QRNNs we used the first 350 words of the test set and averaged the msquery and mJquery For KN5 we used the entire test set for evaluation since the latency was much lower To 
adjust for the base power load we subtracted idle power draw from energy usage For a different perspective we further evaluated all the models under a desktop environment using an i74790k CPU and Titan V GPU Because the base power load for powering a desktop is much higher than running neural language models we collected only latency statistics We used the entire test set since the QRNN runs quickly In addition to energy and latency another consideration for the NLP developer selecting an operating point is the cost of underlying hardware For our setup the RPi costs 35 USD the CPU costs 350 USD and the GPU costs 3000 USD To demonstrate the effectiveness of the QRNN models we present the results of past and current stateoftheart neural language models in Table 1 we report the Skip and AWDLSTM results as seen in the original papers while we report our QRNN results Skip LSTM denotes the fourlayer Skip LSTM in BIBREF3 BIBREF20 focus on Hebbian softmax a model extension techniqueRaeLSTM refers to their base LSTM model without any extensions In our results KN5 refers to the traditional fivegram model with modified KneserNey smoothing and AWD is shorthand for AWDLSTM Perplexityrecall scale In Figure 1 using KN5 as the model we plot the log perplexity cross entropy and R3 error 1 textR3 for every sentence in PTB and WT103 The horizontal clusters arise from multiple perplexity points representing the same R3 value as explained in Section Infrastructure We also observe that the perplexityrecall scale is nonlinearinstead log perplexity appears to have a moderate linear relationship with R3 error on PTB r085 and an even stronger relationship on WT103 r094 This is partially explained by WT103 having much longer sentences and thus less noisy statistics From Figure 1 we find that QRNN models yield strongly linear log perplexityrecall plots as well where r088 and r093 for PTB and WT103 respectively Note that due to the improved model quality over KN5 the point clouds are shifted downward compared to Figure 1 We conclude that log perplexity or cross entropy provides a more humanunderstandable indicator of R3 than perplexity does Overall these findings agree with those from BIBREF21 which explores the log perplexityword error rate scale in language modeling for speech recognition Qualityperformance tradeoff In Table 2 from left to right we report perplexity results on the validation and test sets R3 on test and finally perquery latency and energy usage On the RPi KN5 is both fast and powerefficient to run using only about 7 msquery and 6 mJquery for PTB Table 2 row 1 and 264 msq and 229 mJq on WT103 row 5 Taking 220 msquery and consuming 300 mJquery AWDLSTM and ptbqrnn are still viable for mobile phones The modern smartphone holds upwards of 10000 joules BIBREF22 and the latency is within usability standards BIBREF23 Nevertheless the models are still 49 times slower and 32 times more powerhungry than KN5 The wt103qrnn model is completely unusable on phones taking over 12 seconds per nextword prediction Neural models achieve perplexity drops of 6080 and R3 increases of 2234 but these improvements come at a much higher cost in latency and energy usage In Table 2 last two columns the desktop yields very different results the neural models on PTB rows 23 are 9 times slower than KN5 but the absolute latency is only 8 msq which is still much faster than what humans perceive as instantaneous BIBREF23 If a highend commodity GPU is available then the models are only twice as slow as KN5 is From row 5 even better 
results are observed with wt103qrnn: on the CPU, the QRNN is only about 60% slower than KN5, while on a GPU the model is 11 times faster. These results suggest that, if only latency is considered under a commodity desktop environment, the QRNN model is humanly indistinguishable from the KN5 model, even without GPU acceleration. In the present work, we describe and examine the tradeoff space between quality and performance for the task of language modeling. Specifically, we explore the quality-performance tradeoffs between KN5, a nonneural approach, and AWDLSTM and QRNN, two neural language models. We find that with decreased perplexity comes vastly increased computational requirements: in one of the NLMs, a perplexity reduction by 2.5 times results in a 49 times rise in latency and a 32 times increase in energy usage when compared to KN5 </s>
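Looking back at the perplexity-recall discussion in the infrastructure section, the short script below (a sketch added for illustration, not part of the original experiments) reproduces the "choo choo train" example: R@1 stays fixed at 1/3 while perplexity varies freely with the assumed P(choo).

```python
import math

corpus = ["choo", "choo", "train"]

def perplexity(p_choo):
    probs = {"choo": p_choo, "train": 1.0 - p_choo}
    nll = -sum(math.log(probs[w]) for w in corpus) / len(corpus)
    return math.exp(nll)

def recall_at_1(p_choo):
    probs = {"choo": p_choo, "train": 1.0 - p_choo}
    top1 = max(probs, key=probs.get)          # the model's single prediction
    return sum(w == top1 for w in corpus) / len(corpus)

for p in (0.1, 0.3, 0.49):
    print(f"P(choo)={p:.2f}  ppl={perplexity(p):.2f}  R@1={recall_at_1(p):.2f}")
# P(choo)=0.10  ppl=4.81  R@1=0.33
# P(choo)=0.30  ppl=2.51  R@1=0.33
# P(choo)=0.49  ppl=2.01  R@1=0.33  (ppl approaches 2 while recall is unchanged)
```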
<s>[INST] Stay On-Topic: Generating Context-specific Fake Restaurant Reviews [/INST] Automatically generated fake reviews have only recently become natural enough to fool human readers Yao et al BIBREF0 use a deep neural network a socalled 2layer LSTM BIBREF1 to generate fake reviews and concluded that these fake reviews look sufficiently genuine to fool native English speakers They train their model using real restaurant reviews from yelpcom BIBREF2 Once trained the model is used to generate reviews characterbycharacter Due to the generation methodology it cannot be easily targeted for a specific context meaningful side information Consequently the review generation process may stray offtopic For instance when generating a review for a Japanese restaurant in Las Vegas the review generation process may include references to an Italian restaurant in Baltimore The authors of BIBREF0 apply a postprocessing step customization which replaces foodrelated words with more suitable ones sampled from the targeted restaurant The word replacement strategy has drawbacks it can miss certain words and replace others independent of their surrounding words which may alert savvy readers As an example when we applied the customization technique described in BIBREF0 to a review for a Japanese restaurant it changed the snippet garlic knots for breakfast with garlic knots for sushi We propose a methodology based on neural machine translation NMT that improves the generation process by defining a context for the each generated fake review Our context is a cleartext sequence of the review rating restaurant name city state and food tags eg Japanese Italian We show that our technique generates review that stay on topic We can instantiate our basic technique into several variants We vet them on Amazon Mechanical Turk and find that native English speakers are very poor at recognizing our fake generated reviews For one variant the participants performance is close to random the classaveraged Fscore of detection is INLINEFORM0 whereas random would be INLINEFORM1 given the 16 imbalance in the test Via a user study with experienced highly educated participants we compare this variant which we will henceforth refer to as NMTFake reviews with fake reviews generated using the charLSTMbased technique from BIBREF0 We demonstrate that NMTFake reviews constitute a new category of fake reviews that cannot be detected by classifiers trained only using previously known categories of fake reviews BIBREF0 BIBREF3 BIBREF4 Therefore NMTFake reviews may go undetected in existing online review sites To meet this challenge we develop an effective classifier that detects NMTFake reviews effectively 97 Fscore Our main contributions are Fake reviews Usergenerated content BIBREF5 is an integral part of the contemporary user experience on the web Sites like tripadvisorcom yelpcom and Google Play use userwritten reviews to provide rich information that helps other users choose where to spend money and time User reviews are used for rating services or products and for providing qualitative opinions User reviews and ratings may be used to rank services in recommendations Ratings have an affect on the outwards appearance Already 8 years ago researchers estimated that a onestar rating increase affects the business revenue by 5 9 on yelpcom BIBREF6 Due to monetary impact of usergenerated content some businesses have relied on socalled crowdturfing agents BIBREF7 that promise to deliver positive ratings written by workers to a customer in exchange 
for a monetary compensation Crowdturfing ethics are complicated For example Amazon community guidelines prohibit buying content relating to promotions but the act of writing fabricated content is not considered illegal nor is matching workers to customers BIBREF8 Year 2015 approximately 20 of online reviews on yelpcom were suspected of being fake BIBREF9 Nowadays usergenerated review sites like yelpcom use filters and fraudulent review detection techniques These factors have resulted in an increase in the requirements of crowdturfed reviews provided to review sites which in turn has led to an increase in the cost of highquality review Due to the cost increase researchers hypothesize the existence of neural networkgenerated fake reviews These neuralnetworkbased fake reviews are statistically different from humanwritten fake reviews and are not caught by classifiers trained on these BIBREF0 Detecting fake reviews can either be done on an individual level or as a systemwide detection tool ie regulation Detecting fake online content on a personal level requires knowledge and skills in critical reading In 2017 the National Literacy Trust assessed that young people in the UK do not have the skillset to differentiate fake news from real news BIBREF10 For example 20 of children that use online news sites in age group 1215 believe that all information on news sites are true Neural Networks Neural networks are function compositions that map input data through INLINEFORM0 subsequent layers DISPLAYFORM0 where the functions INLINEFORM0 are typically nonlinear and chosen by experts partly for known good performance on datasets and partly for simplicity of computational evaluation Language models LMs BIBREF11 are generative probability distributions that assign probabilities to sequences of tokens INLINEFORM1 DISPLAYFORM0 such that the language model can be used to predict how likely a specific token at time step INLINEFORM0 is based on the INLINEFORM1 previous tokens Tokens are typically either words or characters For decades deep neural networks were thought to be computationally too difficult to train However advances in optimization hardware and the availability of frameworks have shown otherwise BIBREF1 BIBREF12 Neural language models NLMs have been one of the promising application areas NLMs are typically various forms of recurrent neural networks RNNs which pass through the data sequentially and maintain a memory representation of the past tokens with a hidden context vector There are many RNN architectures that focus on different ways of updating and maintaining context vectors Long ShortTerm Memory units LSTM and Gated Recurrent Units GRUs are perhaps most popular Neural LMs have been used for freeform text generation In certain application areas the quality has been high enough to sometimes fool human readers BIBREF0 Encoderdecoder seq2seq models BIBREF13 are architectures of stacked RNNs which have the ability to generate output sequences based on input sequences The encoder network reads in a sequence of tokens and passes it to a decoder network a LM In contrast to simpler NLMs encoderdecoder networks have the ability to use additional context for generating text which enables more accurate generation of text Encoderdecoder models are integral in Neural Machine Translation NMT BIBREF14 where the task is to translate a source text from one language to another language NMT models additionally use beam search strategies to heuristically search the set of possible translations Training datasets 
are parallel corpora large sets of paired sentences in the source and target languages The application of NMT techniques for online machine translation has significantly improved the quality of translations bringing it closer to human performance BIBREF15 Neural machine translation models are efficient at mapping one expression to another onetoone mapping Researchers have evaluated these models for conversation generation BIBREF16 with mixed results Some researchers attribute poor performance to the use of the negative log likelihood cost function during training which emphasizes generation of highconfidence phrases rather than diverse phrases BIBREF17 The results are often generic text which lacks variation Li et al have suggested various augmentations to this among others suppressing typical responses in the decoder language model to promote response diversity BIBREF17 We discuss the attack model our generative machine learning method and controlling the generative process in this section Wang et al BIBREF7 described a model of crowdturfing attacks consisting of three entities customers who desire to have fake reviews for a particular target eg their restaurant on a particular platform eg Yelp agents who offer fake review services to customers and workers who are orchestrated by the agent to compose and post fake reviews Automated crowdturfing attacks ACA replace workers by a generative model This has several benefits including better economy and scalability human workers are more expensive and slower and reduced detectability agent can better control the rate at which fake reviews are generated and posted We assume that the agent has access to public reviews on the review platform by which it can train its generative model We also assume that it is easy for the agent to create a large number of accounts on the review platform so that accountbased detection or ratelimiting techniques are ineffective against fake reviews The quality of the generative model plays a crucial role in the attack Yao et al BIBREF0 propose the use of a characterbased LSTM as base for generative model LSTMs are not conditioned to generate reviews for a specific target BIBREF1 and may mixup concepts from different contexts during freeform generation Mixing contextually separate words is one of the key criteria that humans use to identify fake reviews These may result in violations of known indicators for fake content BIBREF18 For example the review content may not match prior expectations nor the information need that the reader has We improve the attack model by considering a more capable generative model that produces more appropriate reviews a neural machine translation NMT model We propose the use of NMT models for fake review generation The method has several benefits 1 the ability to learn how to associate context keywords to reviews 2 fast training time and 3 a highdegree of customization during production time eg introduction of specific waiter or food items names into reviews NMT models are constructions of stacked recurrent neural networks RNNs They include an encoder network and a decoder network which are jointly optimized to produce a translation of one sequence to another The encoder rolls over the input data in sequence and produces one INLINEFORM0 dimensional context vector representation for the sentence The decoder then generates output sequences based on the embedding vector and an attention module which is taught to associate output words with certain input words The generation typically 
continues until a specific EOS end of sentence token is encountered The review length can be controlled in many ways eg by setting the probability of generating the EOS token to zero until the required length is reached NMT models often also include a beam search BIBREF14 which generates several hypotheses and chooses the best ones amongst them In our work we use the greedy beam search technique We forgo the use of additional beam searches as we found that the quality of the output was already adequate and the translation phase time consumption increases linearly for each beam used We use the Yelp Challenge dataset BIBREF2 for our fake review generation The dataset Aug 2017 contains 29 million 1 5 star restaurant reviews We treat all reviews as genuine humanwritten reviews for the purpose of this work since widescale deployment of machinegenerated review attacks are not yet reported Sep 2017 BIBREF19 As preprocessing we remove nonprintable nonASCII characters and excessive whitespace We separate punctuation from words We reserve 15000 reviews for validation and 3000 for testing and the rest we use for training NMT models require a parallel corpus of source and target sentences ie a large set of source targetpairs We set up a parallel corpus by constructing context reviewpairs from the dataset Next we describe how we created our input context The Yelp Challenge dataset includes metadata about restaurants including their names food tags cities and states these restaurants are located in For each restaurant review we fetch this metadata and use it as our input context in the NMT model The corresponding restaurant review is similarly set as the target sentence This method produced 29 million pairs of sentences in our parallel corpus We show one example of the parallel training corpus in Example 1 below 5 Public House Las Vegas NV Gastropubs Restaurants Excellent food and service Pricey but well worth it I would recommend the bone marrow and sampler platter for appetizers endverbatim noindent The order textbfrating name city state tags is kept constant Training the model conditions it to associate certain sequences of words in the input sentence with others in the output subsubsectionTraining Settings We train our NMT model on a commodity PC with a i74790k CPU 400GHz with 32GB RAM and one NVidia GeForce GTX 980 GPU Our system can process approximately 1300 textendash 1500 source tokenss and approximately 5730 textendash 5830 output tokenss Training one epoch takes in average 72 minutes The model is trained for 8 epochs ie over night We call fake review generated by this model emphNMTFake reviews We only need to train one model to produce reviews of different ratings We use the training settings adam optimizer citekingma2014adam with the suggested learning rate 0001 citeklein2017opennmt For most parts parameters are at their default values Notably the maximum sentence length of input and output is 50 tokens by default We leverage the framework openNMTpy citeklein2017opennmt to teach the our NMT model We list used openNMTpy commands in Appendix TablereftableopenNMTpycommands beginfiguret begincenter begintabular l hline Example 2 Greedy NMT Great food underlinegreat service underlinegreat textittextitbeer selection I had the textitGastropubs burger and it was delicious The underlinetextitbeer selection was also underlinegreat Example 3 NMTFake I love this restaurant Great food great service Its textita little pricy but worth it for the textitquality of the textitbeer and atmosphere you can see in 
textitVegas hline endtabular labeltableoutputcomparison endcenter captionNaive text generation with NMT vs generation using our NTM model Repetitive patterns are underlineunderlined Contextual words are emphitalicized Both examples here are generated based on the context given in Example1 labelfigcomparison endfigure subsectionControlling generation of fake reviews labelsecgenerating Greedy NMT beam searches are practical in many NMT cases However the results are simply repetitive when naively applied to fake review generation See Example2 in Figurereffigcomparison The NMT model produces many emphhighconfidence word predictions which are repetitive and obviously fake We calculated that in fact 43 of the generated sentences started with the phrase Great food The lack of diversity in greedy use of NMTs for text generation is clear beginalgorithmb KwDataDesired review context Cmathrminput given as cleartext NMT model KwResultGenerated review out for input context Cmathrminput set b03 lambda5 alphafrac23 pmathrmtypo pmathrmspell log p leftarrow textNMTdecodeNMTencodeCmathrminputtext out leftarrow i leftarrow 0 log p leftarrow textAugmentlog p b lambda 1 0 random penalty Whilei0 or oi not EOS log Tildep leftarrow textAugmentlog p b lambda alpha oi i start memory penalty oi leftarrow textNMTbeamlog Tildep out outappendoi i leftarrow i1 textreturntextObfuscateoutpmathrmtypopmathrmspell captionGeneration of NMTFake reviews labelalgbase endalgorithm In this work we describe how we succeeded in creating more diverse and less repetitive generated reviews such as Example 3 in Figurereffigcomparison We outline pseudocode for our methodology of generating fake reviews in Algorithmrefalgbase There are several parameters in our algorithm The details of the algorithm will be shown later We modify the openNMTpy translation phase by changing logprobabilities before passing them to the beam search We notice that reviews generated with openNMTpy contain almost no language errors As an optional postprocessing step we obfuscate reviews by introducing natural typosmisspellings randomly In the next sections we describe how we succeeded in generating more natural sentences from our NMT model ie generating reviews like Example3 instead of reviews like Example2 subsubsectionVariation in word content Example 2 in Figurereffigcomparison repeats commonly occurring words given for a specific context eg textitgreat food service beer selection burger for Example1 Generic review generation can be avoided by decreasing probabilities loglikelihoods citemurphy2012machine of the generators LM the decoder We constrain the generation of sentences by randomly emphimposing penalties to words We tried several forms of added randomness and found that adding constant penalties to a emphrandom subset of the target words resulted in the most natural sentence flow We call these penalties emphBernoulli penalties since the random variables are chosen as either 1 or 0 on or off paragraphBernoulli penalties to language model To avoid generic sentences components we augment the default language model pcdot of the decoder by beginequation log Tildeptk log ptk ti dots t1 lambda q endequation where q in RV is a vector of Bernoullidistributed random values that obtain values 1 with probability b and value 0 with probability 1bi and lambda 0 Parameter b controls how much of the vocabulary is forgotten and lambda is a soft penalty of including forgotten words in a review lambda qk emphasizes sentence forming with nonpenalized words The randomness 
is reset at the start of generating a new review Using Bernoulli penalties in the language model we can forget a certain proportion of words and essentially force the creation of less typical sentences We will test the effect of these two parameters the Bernoulli probability b and loglikelihood penalty of including forgotten words lambda with a user study in Sectionrefsecvarying paragraphStart penalty We introduce start penalties to avoid generic sentence starts eg Great food great service Inspired by citeli2016diversity we add a random start penalty lambda smathrmi to our language model which decreases monotonically for each generated token We set alpha leftarrow 066 as its effect decreases by 90 every 5 words generated paragraphPenalty for reusing words Bernoulli penalties do not prevent excessive use of certain words in a sentence such as textitgreat in Example2 To avoid excessive reuse of words we included a memory penalty for previously used words in each translation Concretely we add the penalty lambda to each word that has been generated by the greedy search subsubsectionImproving sentence coherence labelsecgrammar We visually analyzed reviews after applying these penalties to our NMT model While the models were clearly diverse they were emphincoherent the introduction of random penalties had degraded the grammaticality of the sentences Amongst others the use of punctuation was erratic and pronouns were used semantically wrongly eg emphhe emphshe might be replaced as could andbut To improve the authenticity of our reviews we added several emphgrammarbased rules English language has several classes of words which are important for the natural flow of sentences We built a list of common pronouns eg I them our conjunctions eg and thus if punctuation eg and apply only half memory penalties for these words We found that this change made the reviews more coherent The pseudocode for this and the previous step is shown in Algorithmrefalgaug The combined effect of grammarbased rules and LM augmentation is visible in Example3 Figurereffigcomparison beginalgorithmt KwDataInitial log LM log p Bernoulli probability b softpenalty lambda monotonic factor alpha last generated token oi grammar rules set G KwResultAugmented log LM log Tildep beginalgorithmic1 Procedure Augmentlog p b lambda alpha oi i generate Pmathrm1N leftarrow BernoullibtextOne value in 01textper token I leftarrow P0 Select positive indices log Tildep leftarrow textDiscountlog p I lambda cdot alphaiG start penalty log Tildep leftarrow textDiscountlog Tildep oi lambdaG memory penalty textbfreturnlog Tildep EndProcedure Procedure Discountlog p I lambda G StateFori in I eIfoi in G log pi leftarrow log pi lambda2 log pi leftarrow log pi lambda textbfreturnlog p EndProcedure endalgorithmic captionPseudocode for augmenting language model labelalgaug endalgorithm subsubsectionHumanlike errors labelsecobfuscation We notice that our NMT model produces reviews without grammar mistakes This is unlike real human writers whose sentences contain two types of language mistakes 1 emphtypos that are caused by mistakes in the human motoric input and 2 emphcommon spelling mistakes We scraped a list of common English language spelling mistakes from Oxford dictionaryfootnoteurlhttpsenoxforddictionariescomspellingcommonmisspellings and created 80 rules for randomly emphreintroducing spelling mistakes Similarly typos are randomly reintroduced based on the weighted edit distancefootnoteurlhttpspypipythonorgpypiweightedlevenshtein01 such that typos 
resulting in real English words with small perturbations are emphasized We use autocorrection toolsfootnoteurlhttpspypipythonorgpypiautocorrect010 for finding these words We call these augmentations emphobfuscations since they aim to confound the reader to think a human has written them We omit the pseudocode description for brevity subsectionExperiment Varying generation parameters in our NMT model labelsecvarying Parameters b and lambda control different aspects in fake reviews We show six different examples of generated fake reviews in Tablereftablecategories Here the largest differences occur with increasing values of b visibly the restaurant reviews become more extreme This occurs because a large portion of vocabulary is forgotten Reviews with b geq 07 contain more rare word combinations eg as punctuation and they occasionally break grammaticality experience was awesome Reviews with lower b are more generic they contain safe word combinations like Great place good service that occur in many reviews Parameter lambdas is more subtle it affects how random review starts are and to a degree the discontinuation between statements within the review We conducted an Amazon Mechanical Turk MTurk survey in order to determine what kind of NMTFake reviews are convincing to native English speakers We describe the survey and results in the next section begintableb captionSix different parametrizations of our NMT reviews and one example for each The context is 5 PFChang s Scottsdale AZ in all examples begincenter begintabular l l hline b lambda Example review for context hline hline 03 3 I love this location Great service great food and the best drinks in Scottsdale The staff is very friendly and always remembers u when we come inhline 03 5 Love love the food here I always go for lunch They have a great menu and they make it fresh to order Great place good service and nice staffhline 05 4 I love their chicken lettuce wraps and fried rice The service is good they are always so polite They have great happy hour specials and they have a lot of optionshline 07 3 Great place to go with friends They always make sure your dining experience was awesome hline 07 5 Still havent ordered an entree before but today we tried them once both of us love this restauranthline 09 4 AMAZING Food was awesome with excellent service Loved the lettuce wraps Great drinks and wine Cant wait to go back so soon hline endtabular labeltablecategories endcenter endtable subsubsectionMTurk study labelsecamt We created 20 jobs each with 100 questions and requested master workers in MTurk to complete the jobs We randomly generated each survey for the participants Each review had a 50 chance to be real or fake The fake ones further were chosen among six 6 categories of fake reviews Tablereftablecategories The restaurant and the city was given as contextual information to the participants Our aim was to use this survey to understand how well Englishspeakers react to different parametrizations of NMTFake reviews Tablereftableamtpop in Appendix summarizes the statistics for respondents in the survey All participants were native English speakers from America The base rate 50 was revealed to the participants prior to the study We first investigated overall detection of any NMTFake reviews 1006 fake reviews and 994 real reviews We found that the participants had big difficulties in detecting our fake reviews In average the reviews were detected with classaveraged emphFscore of only 56 with 53 Fscore for fake review detection and 59 Fscore 
for real review detection The results are very close to emphrandom detection where precision recall and Fscore would each be 50 Results are recorded in TablereftableMTurksuper Overall the fake review generation is very successful since human detection rate across categories is close to random begintablet captionEffectiveness of Mechanical Turkers in distinguishing humanwritten reviews from fake reviews generated by our NMT model all variants begincenter begintabular c c c c c hline multicolumn5cClassification report hline Review Type Precision Recall Fscore Support hline hline Human 55 63 59 994 NMTFake 57 50 53 1006 hline endtabular labeltableMTurksuper endcenter endtable We noticed some variation in the detection of different fake review categories The respondents in our MTurk survey had most difficulties recognizing reviews of category b03 lambda5 where true positive rate was 404 while the true negative rate of the real class was 627 The precision were 16 and 86 respectively The classaveraged Fscore is 476 which is close to random Detailed classification reports are shown in TablereftableMTurksub in Appendix Our MTurkstudy shows that emphour NMTFake reviews pose a significant threat to review systems since emphordinary native Englishspeakers have very big difficulties in separating real reviews from fake reviews We use the review category b03 lambda5 for future user tests in this paper since MTurk participants had most difficulties detecting these reviews We refer to this category as NMTFake in this paper sectionEvaluation graphicspath figures We evaluate our fake reviews by first comparing them statistically to previously proposed types of fake reviews and proceed with a user study with experienced participants We demonstrate the statistical difference to existing fake review types citeyao2017automatedmukherjee2013yelprayana2015collective by training classifiers to detect previous types and investigate classification performance subsectionReplication of stateoftheart model LSTM labelsecrepl Yao et al citeyao2017automated presented the current stateoftheart generative model for fake reviews The model is trained over the Yelp Challenge dataset using a twolayer characterbased LSTM model We requested the authors of citeyao2017automated for access to their LSTM model or a fake review dataset generated by their model Unfortunately they were not able to share either of these with us We therefore replicated their model as closely as we could based on their paper and email correspondencefootnoteWe are committed to sharing our code with bonafide researchers for the sake of reproducibility We used the same graphics card GeForce GTX and trained using the same framework torchRNN in lua We downloaded the reviews from Yelp Challenge and preprocessed the data to only contain printable ASCII characters and filtered out nonrestaurant reviews We trained the model for approximately 72 hours We postprocessed the reviews using the customization methodology described in citeyao2017automated and email correspondence We call fake reviews generated by this model LSTMFake reviews subsectionSimilarity to existing fake reviews labelsecautomated We now want to understand how NMTFake reviews compare to a LSTM fake reviews and b humangenerated fake reviews We do this by comparing the statistical similarity between these classes For a Figurereffiglstm we use the Yelp Challenge dataset We trained a classifier using 5000 random reviews from the Yelp Challenge dataset human and 5000 fake reviews generated by LSTMFake Yao 
et al citeyao2017automated found that character features are essential in identifying LSTMFake reviews Consequently we use character features ngrams up to 3 For b Figurereffigshillwe the Yelp Shills dataset combination of YelpZip citemukherjee2013yelp YelpNYC citemukherjee2013yelp YelpChi citerayana2015collective This dataset labels entries that are identified as fraudulent by Yelps filtering mechanism shill reviewsfootnoteNote that shill reviews are probably generated by human shills citezhao2017news The rest are treated as genuine reviews from human users genuine We use 100000 reviews from each category to train a classifier We use features from the commercial psychometric tool LIWC2015 citepennebaker2015development to generated features In both cases we use AdaBoost with 200 shallow decision trees for training For testing each classifier we use a held out test set of 1000 reviews from both classes in each case In addition we test 1000 NMTFake reviews Figuresreffiglstm andreffigshill show the results The classification threshold of 50 is marked with a dashed line beginfigure beginsubfigureb05columnwidth includegraphicswidthcolumnwidthfigureslstmpng captionHumanLSTM reviews labelfiglstm endsubfigure beginsubfigureb05columnwidth includegraphicswidthcolumnwidthfiguresdistributionshillpng captionGenuineShill reviews labelfigshill endsubfigure caption Histogram comparison of NMTFake reviews with LSTMFake reviews and humangenerated emphgenuine and emphshill reviews Figurereffiglstm shows that a classifier trained to distinguish human vs LSTMFake cannot distinguish human vs NMTFake reviews Figurereffigshill shows NMTFake reviews are more similar to emphgenuine reviews than emphshill reviews labelfigstatisticalsimilarity endfigure We can see that our new generated reviews do not share strong attributes with previous known categories of fake reviews If anything our fake reviews are more similar to genuine reviews than previous fake reviews We thus conjecture that our NMTFake fake reviews present a category of fake reviews that may go undetected on online review sites subsectionComparative user study labelseccomparison We wanted to evaluate the effectiveness of fake reviews againsttechsavvy users who understand and know to expect machinegenerated fake reviews We conducted a user study with 20 participants all with computer science education and at least one university degree Participant demographics are shown in Tablereftableamtpop in the Appendix Each participant first attended a training session where they were asked to label reviews fake and genuine and could later compare them to the correct answers we call these participants emphexperienced participants No personal data was collected during the user study Each person was given two randomly selected sets of 30 of reviews a total of 60 reviews per person with reviews containing 10 textendash 50 words each Each set contained 26 87 real reviews from Yelp and 4 13 machinegenerated reviews numbers chosen based on suspicious review prevalence on Yelpcitemukherjee2013yelprayana2015collective One set contained machinegenerated reviews from one of the two models NMT b03 lambda5 or LSTM and the other set reviews from the other in randomized order The number of fake reviews was revealed to each participant in the study description Each participant was requested to mark four 4 reviews as fake Each review targeted a real restaurant A screenshot of that restaurants Yelp page was shown to each participant prior to the study Each participant evaluated reviews 
for one specific randomly selected restaurant An example of the first page of the user study is shown in Figurereffigscreenshot in Appendix beginfigureht centering includegraphicswidth7columnwidthdetection2png captionViolin plots of detection rate in comparative study Mean and standard deviations for number of detected fakes are 08pm07 for NMTFake and 25pm10 for LSTMFake n20 A sample of random detection is shown as comparison labelfigaalto endfigure Figurereffigaalto shows the distribution of detected reviews of both types A hypothetical random detector is shown for comparison NMTFake reviews are significantly more difficult to detect for our experienced participants In average detection rate recall is 20 for NMTFake reviews compared to 61 for LSTMbased reviews The precision and Fscore is the same as the recall in our study since participants labeled 4 fakes in each set of 30 reviews citemurphy2012machine The distribution of the detection across participants is shown in Figurereffigaalto emphThe difference is statistically significant with confidence level 99 Welchs ttest We compared the detection rate of NMTFake reviews to a random detector and find that emphour participants detection rate of NMTFake reviews is not statistically different from random predictions with 95 confidence level Welchs ttest sectionDefenses labelsecdetection We developed an AdaBoostbased classifier to detect our new fake reviews consisting of 200 shallow decision trees depth 2 The features we used are recorded in Tablereftablefeaturesadaboost Appendix We used wordlevel features based on spaCytokenization citehonnibaljohnson2015EMNLP and constructed ngram representation of POStags and dependency tree tags We added readability features from NLTKcitebird2004nltk beginfigureht centering includegraphicswidth7columnwidthobfscorefair2png caption Adaboostbased classification of NMTFake and humanwritten reviews Effect of varying b and lambda in fake review generation The variant native speakers had most difficulties detecting is well detectable by AdaBoost 97 labelfigadaboostmatrixblambda endfigure Figurereffigadaboostmatrixblambda shows our AdaBoost classifiers classaveraged Fscore at detecting different kind of fake reviews The classifier is very effective in detecting reviews that humans have difficulties detecting For example the fake reviews MTurk users had most difficulty detecting b03 lambda5 are detected with an excellent 97 Fscore The most important features for the classification were counts for frequently occurring words in fake reviews such as punctuation pronouns articles as well as the readability feature Automated Readability Index We thus conclude that while NMTFake reviews are difficult to detect for humans they can be well detected with the right tools sectionRelated Work Kumar and Shahcitekumar2018false survey and categorize false information research Automatically generated fake reviews are a form of emphopinionbased false information where the creator of the review may influence readers opinions or decisions Yao et al citeyao2017automated presented their study on machinegenerated fake reviews Contrary to us they investigated characterlevel language models without specifying a specific context before generation We leverage existing NMT tools to encode a specific context to the restaurant before generating reviews Supporting our study Everett et alciteEverett2016Automated found that security researchers were less likely to be fooled by Markov chaingenerated Reddit comments compared to ordinary Internet 
users Diversification of NMT model outputs has been studied in citeli2016diversity The authors proposed the use of a penalty to commonly occurring sentences emphngrams in order to emphasize maximum mutual informationbased generation The authors investigated the use of NMT models in chatbot systems We found that unigram penalties to random tokens Algorithmrefalgaug was easy to implement and produced sufficiently diverse responses section Discussion and Future Work paragraphWhat makes NMTFake reviews difficult to detect First NMT models allow the encoding of a relevant context for each review which narrows down the possible choices of words that the model has to choose from Our NMT model had a perplexity of approximately 25 while the model of citeyao2017automated had a perplexity of approximately 90 footnotePersonal communication with the authors Second the beam search in NMT models narrows down choices to naturallooking sentences Third we observed that the NMT model produced emphbetter structure in the generated sentences ie a more coherent story paragraphCost of generating reviews With our setup generating one review took less than one second The cost of generation stems mainly from the overnight training Assuming an electricity cost of 16 cents kWh California and 8 hours of training training the NMT model requires approximately 130 USD This is a 90 reduction in time compared to the stateoftheart citeyao2017automated Furthermore it is possible to generate both positive and negative reviews with the same model paragraphEase of customization We experimented with inserting specific words into the text by increasing their log likelihoods in the beam search We noticed that the success depended on the prevalence of the word in the training set For example adding a 5 to emphMike in the loglikelihood resulted in approximately 10 prevalence of this word in the reviews An attacker can therefore easily insert specific keywords to reviews which can increase evasion probability paragraphEase of testing Our diversification scheme is applicable during emphgeneration phase and does not affect the training setup of the network in any way Once the NMT model is obtained it is easy to obtain several different variants of NMTFake reviews by varying parameters b and lambda paragraphLanguages The generation methodology is not perse languagedependent The requirement for successful generation is that sufficiently much data exists in the targeted language However our language model modifications require some knowledge of that target languages grammar to produce highquality reviews paragraphGeneralizability of detection techniques Currently fake reviews are not universally detectable Our results highlight that it is difficult to claim detection performance on unseen types of fake reviews Sectionrefsecautomated We see this an open problem that deserves more attention in fake reviews research paragraphGeneralizability to other types of datasets Our technique can be applied to any dataset as long as there is sufficient training data for the NMT model We used approximately 29 million reviews for this work sectionConclusion In this paper we showed that neural machine translation models can be used to generate fake reviews that are very effective in deceiving even experienced techsavvy users This supports anecdotal evidence citenational2017commission Our technique is more effective than stateoftheart citeyao2017automated We conclude that machineaided fake review detection is necessary since human users are ineffective in 
identifying fake reviews We also showed that detectors trained using one type of fake reviews are not effective in identifying other types of fake reviews Robust detection of fake reviews is thus still an open problem sectionAcknowledgments We thank Tommi Grondahl for assistance in planning user studies and the participants of the user study for their time and feedback We also thank Luiza Sayfullina for comments that improved the manuscript We thank the authors of citeyao2017automated for answering questions about their work bibliographystylesplncs beginthebibliography10 bibitemyao2017automated Yao Y Viswanath B Cryan J Zheng H Zhao BY newblock Automated crowdturfing attacks and defenses in online review systems newblock In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security ACM 2017 bibitemmurphy2012machine Murphy K newblock Machine learning a probabilistic approach newblock Massachusetts Institute of Technology 2012 bibitemchallenge2013yelp Yelp newblock Yelp Challenge Dataset 2013 bibitemmukherjee2013yelp Mukherjee A Venkataraman V Liu B Glance N newblock What yelp fake review filter might be doing newblock In Seventh International AAAI Conference on Weblogs and Social Media ICWSM 2013 bibitemrayana2015collective Rayana S Akoglu L newblock Collective opinion spam detection Bridging review networks and metadata newblock In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining bibitemo2008user OConnor P newblock Usergenerated content and travel A case study on Tripadvisorcom newblock Information and communication technologies in tourism 2008 2008 bibitemluca2010reviews Luca M newblock Reviews Reputation and Revenue The Case of Yelp com newblock Harvard Business School 2010 bibitemwang2012serf Wang G Wilson C Zhao X Zhu Y Mohanlal M Zheng H Zhao BY newblock Serf and turf crowdturfing for fun and profit newblock In Proceedings of the 21st international conference on World Wide Web WWW ACM 2012 bibitemrinta2017understanding RintaKahila T Soliman W newblock Understanding crowdturfing The different ethical logics behind the clandestine industry of deception newblock In ECIS 2017 Proceedings of the 25th European Conference on Information Systems 2017 bibitemluca2016fake Luca M Zervas G newblock Fake it till you make it Reputation competition and yelp review fraud newblock Management Science 2016 bibitemnational2017commission National Literacy Trust newblock Commission on fake news and the teaching of critical literacy skills in schools URL urlhttpsliteracytrustorgukpolicyandcampaignsallpartyparliamentarygroupliteracyfakenews bibitemjurafsky2014speech Jurafsky D Martin JH newblock Speech and language processing Volume3 newblock Pearson London 2014 bibitemkingma2014adam Kingma DP Ba J newblock Adam A method for stochastic optimization newblock arXiv preprint arXiv14126980 2014 bibitemcho2014learning Cho K van Merrienboer B Gulcehre C Bahdanau D Bougares F Schwenk H Bengio Y newblock Learning phrase representations using rnn encoderdecoder for statistical machine translation newblock In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing EMNLP 2014 bibitemklein2017opennmt Klein G Kim Y Deng Y Senellart J Rush A newblock Opennmt Opensource toolkit for neural machine translation newblock Proceedings of ACL System Demonstrations 2017 bibitemwu2016google Wu Y Schuster M Chen Z Le QV Norouzi M Macherey W Krikun M Cao Y Gao Q Macherey K etal newblock Googles neural machine translation system 
Bridging the gap between human and machine translation newblock arXiv preprint arXiv160908144 2016 bibitemmei2017coherent Mei H Bansal M Walter MR newblock Coherent dialogue with attentionbased language models newblock In AAAI 2017 32523258 bibitemli2016diversity Li J Galley M Brockett C Gao J Dolan B newblock A diversitypromoting objective function for neural conversation models newblock In Proceedings of NAACLHLT 2016 bibitemrubin2006assessing Rubin VL Liddy ED newblock Assessing credibility of weblogs newblock In AAAI Spring Symposium Computational Approaches to Analyzing Weblogs 2006 bibitemzhao2017news newscomau newblock The potential of AI generated crowdturfing could undermine online reviews and dramatically erode public trust URL urlhttpwwwnewscomautechnologyonlinesecuritythepotentialofaigeneratedcrowdturfingcouldundermineonlinereviewsanddramaticallyerodepublictrustnewsstorye1c84ad909b586f8a08238d5f80b6982 bibitempennebaker2015development Pennebaker JW Boyd RL Jordan K Blackburn K newblock The development and psychometric properties of LIWC2015 newblock Technical report 2015 bibitemhonnibaljohnson2015EMNLP Honnibal M Johnson M newblock An improved nonmonotonic transition system for dependency parsing newblock In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing EMNLP ACM 2015 bibitembird2004nltk Bird S Loper E newblock NLTK the natural language toolkit newblock In Proceedings of the ACL 2004 on Interactive poster and demonstration sessions Association for Computational Linguistics 2004 bibitemkumar2018false Kumar S Shah N newblock False information on web and social media A survey newblock arXiv preprint arXiv180408559 2018 bibitemEverett2016Automated Everett RM Nurse JRC Erola A newblock The anatomy of online deception What makes automated text convincing newblock In Proceedings of the 31st Annual ACM Symposium on Applied Computing SAC 16 ACM 2016 endthebibliography sectionAppendix We present basic demographics of our MTurk study and the comparative study with experienced users in Tablereftableamtpop begintable captionUser study statistics begincenter begintabular l c c hline Quality Mechanical Turk users Experienced users hline Native English Speaker Yes 20 Yes 1 No 19 Fluent in English Yes 20 Yes 20 Age 2140 17 4160 3 2125 8 2630 7 3135 4 4145 1 Gender Male 14 Female 6 Male 17 Female 3 Highest Education High School 10 Bachelor 10 Bachelor 9 Master 6 PhD 5 hline endtabular labeltableamtpop endcenter endtable TablereftableopenNMTpycommands shows a listing of the openNMTpy commands we used to create our NMT model and to generate fake reviews begintablet captionListing of used openNMTpy commands begincenter begintabular l l hline Phase Bash command hline Preprocessing beginlstlistinglanguagebash python preprocesspy trainsrc contexttraintxt traintgt reviewstraintxt validsrc contextvaltxt validtgt reviewsvaltxt savedata model lower tgtwordsminfrequency 10 endlstlisting Training beginlstlistinglanguagebash python trainpy data model savemodel model epochs 8 gpuid 0 learningratedecay 05 optim adam learningrate 0001 startdecayat 3endlstlisting Generation beginlstlistinglanguagebash python translatepy model modelacc3554ppl2568e8pt src contexttsttxt output prede8txt replaceunk verbose maxlength 50 gpu 0 endlstlisting hline endtabular labeltableopenNMTpycommands endcenter endtable TablereftableMTurksub shows the classification performance of Amazon Mechanical Turkers separated across different categories of NMTFake reviews The category with best 
performance b03 lambda5 is denoted as NMTFake begintableb captionMTurk study subclass classification reports Classes are imbalanced in ratio 16 Random predictions are pmathrmhuman 86 and pmathrmmachine 14 with rmathrmhuman rmathrmmachine 50 Classaveraged Fscores for random predictions are 42 begincenter begintabular c c c c c hline b03 lambda 3 Precision Recall Fscore Support hline Human 89 63 73 994 NMTFake 15 45 22 146 hline hline b03 lambda 5 Precision Recall Fscore Support hline Human 86 63 73 994 NMTFake 16 40 23 171 hline hline b05 lambda 4 Precision Recall Fscore Support hline Human 88 63 73 994 NMTFake 21 55 30 181 hline hline b07 lambda 3 Precision Recall Fscore Support hline Human 88 63 73 994 NMTFake 19 50 27 170 hline hline b07 lambda 5 Precision Recall Fscore Support hline Human 89 63 74 994 NMTFake 21 57 31 174 hline hline b09 lambda 4 Precision Recall Fscore Support hline Human 88 63 73 994 NMTFake 18 50 27 164 hline endtabular labeltableMTurksub endcenter endtable Figurereffigscreenshot shows screenshots of the first two pages of our user study with experienced participants beginfigureht centering includegraphicswidth1columnwidthfiguresscreenshot73png caption Screenshots of the first two pages in the user study Example 1 is a NMTFake review the rest are humanwritten labelfigscreenshot endfigure Tablereftablefeaturesadaboost shows the features used to detect NMTFake reviews using the AdaBoost classifier begintable captionFeatures used in NMTFake review detector begincenter begintabular l c hline Feature type Number of features hline hline Readability features 13 hline Unique POS tags 20 hline Word unigrams 22831 hline 1234grams of simple partofspeech tags 54240 hline 123grams of detailed partofspeech tags 112944 hline 123grams of syntactic dependency tags 93195 hline endtabular labeltablefeaturesadaboost endcenter endtable enddocument </s>
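As a concrete, simplified illustration of the log-likelihood augmentation described in the section on controlling NMT-Fake generation (the Bernoulli forgetting penalty, decaying start penalty, memory penalty for reused words, and half penalties for grammar words), here is a minimal Python sketch. The vocabulary, penalty values, and the point at which the Bernoulli mask is redrawn are illustrative assumptions; this is a loose reading of the pseudocode, not the authors' openNMT-py modification.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(log_p, b, lam, alpha, step, generated, grammar_idx):
    """Toy version of the Augment procedure for one decoding step.

    log_p:       (V,) log-probabilities from the decoder for the next token
    b:           Bernoulli probability of "forgetting" a vocabulary item
    lam:         soft penalty applied to forgotten / reused words
    alpha:       decay factor for the start penalty (~2/3 in the paper)
    step:        index of the token being generated (0-based)
    generated:   indices of tokens already generated in this review
    grammar_idx: indices of pronouns/conjunctions/punctuation (half penalty)
    """
    log_p = log_p.copy()
    half = np.isin(np.arange(len(log_p)), list(grammar_idx))

    # Bernoulli "forgetting" penalty, decayed over the sentence (start penalty).
    mask = rng.binomial(1, b, size=len(log_p)).astype(bool)
    penalty = lam * (alpha ** step)
    log_p[mask & ~half] -= penalty
    log_p[mask & half] -= penalty / 2.0

    # Memory penalty for words already used in this review.
    for idx in generated:
        log_p[idx] -= lam / 2.0 if half[idx] else lam

    return log_p

# Toy usage: 10-word vocabulary, third decoding step, one word already reused.
V = 10
logits = rng.normal(size=V)
log_p = logits - np.log(np.exp(logits).sum())   # log-softmax
aug = augment(log_p, b=0.3, lam=5.0, alpha=2 / 3, step=2,
              generated=[4], grammar_idx={0, 1})
print(int(np.argmax(log_p)), int(np.argmax(aug)))  # the greedy choice may change
```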
<s>[INST] Saliency Maps Generation for Automatic Text Summarization [/INST] Ever since the LIME algorithm BIBREF0, explanation techniques that focus on finding the importance of input features with regard to a specific prediction have soared, and we now have many ways of producing saliency maps, also called heatmaps because of the way we like to visualize them. In this paper we are interested in the use of such a technique on an extreme task that highlights questions about the validity and the evaluation of the approach. We would first like to set the vocabulary we will use. We agree that saliency maps are not explanations in themselves and that they are closer to attribution, which is only one part of the human explanation process BIBREF1. We therefore prefer to call this importance mapping of the input an attribution rather than an explanation. We will speak of the importance of the input (a relevance score) with regard to the model's computation, and make no allusion to any human understanding of the model as a result. There exist multiple ways to generate saliency maps over the input for non-linear classifiers BIBREF2, BIBREF3, BIBREF4; we refer the reader to BIBREF5 for a survey of explainable AI in general. In this paper we use Layer-Wise Relevance Propagation (LRP) BIBREF2, which redistributes the value of the classifying function onto the input to obtain the importance attribution. It was first created to explain the classification of neural networks on image recognition tasks, and was later successfully applied to text, first with convolutional neural networks (CNNs) BIBREF6 and then with Long Short-Term Memory (LSTM) networks for sentiment analysis BIBREF7. Our goal is to test the limits of such a technique on more complex tasks, where the notion of input importance might not be as simple as in topic classification or sentiment analysis. We move from a classification task to a generative task, and choose one more complex than text translation, a task in which one can easily find a word-to-word correspondence (and importance) between input and output: we choose text summarization. We consider abstractive and informative text summarization, meaning that the summary is written in the model's own words and retains the important information of the original text; we refer the reader to BIBREF8 for more details on the task and its different variants. Since the success of deep sequence-to-sequence models for text translation BIBREF9, the same approaches have been applied to text summarization BIBREF10, BIBREF11, BIBREF12, and they use architectures to which we can apply LRP. We obtain one saliency map for each word in the generated summary, supposed to represent the use of the input features for that element of the output sequence. We observe that all the saliency maps for a text are nearly identical and are decorrelated from the attention distribution. We propose a way to check their validity by creating what can be seen as a counterfactual experiment from a synthesis of the saliency maps, using the same technique as Arras et al. (2017). We show that in some, but not all, cases they help identify the important input features, and that we need to rigorously check importance attributions before trusting them, regardless of whether or not the mapping makes sense to us. We finally argue that, in the process of identifying the important input features, verifying the saliency maps is as important as generating them, if not more. We now present the baseline model from See et al. (2017), trained on the CNN/Daily Mail dataset.
We reproduce the results from See et al. (2017) in order to then apply LRP to the model. The CNN/Daily Mail dataset BIBREF12 is a text summarization dataset adapted from the DeepMind question-answering dataset BIBREF13. It contains around three hundred thousand news articles coupled with summaries of about three sentences; these summaries are in fact highlights of the articles provided by the media themselves. Articles have an average length of 780 words and the summaries of 50 words. We had 287,000 training pairs and 11,500 test pairs. Similarly to See et al. (2017), during training and prediction we limit the input text to 400 words and generate summaries of 200 words. We pad the shorter texts using an UNKNOWN token and truncate the longer ones, and we embed the texts and summaries using a vocabulary of size 50,000, thus recreating the same parameters as See et al. (2017). The baseline model is a deep sequence-to-sequence encoder-decoder model with attention. The encoder is a bidirectional Long Short-Term Memory (LSTM) cell BIBREF14 and the decoder a single LSTM cell with an attention mechanism. The attention mechanism is computed as in BIBREF9, and we use greedy search for decoding. We train end-to-end, including the word embeddings. The embedding size is 128 and the hidden state size of the LSTM cells is 254. We train the 21,350,992 parameters of the network for about 60 epochs, until we achieve results that are qualitatively equivalent to those of See et al. (2017). We obtain summaries that are broadly relevant to the text but do not match the target summaries very well, and we observe the same problems, such as the wrong reproduction of factual details, the replacement of rare words with more common alternatives, or nonsensical repetitions after the third sentence. Figure 1 shows an example of a generated summary compared to the target one. The summaries we generate are far from being valid summaries of the information in the texts, but they are sufficient for looking at the attributions that LRP will give us: they pick up the general subject of the original text.
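To make the architecture concrete, the following is a minimal sketch of such an encoder-decoder with Bahdanau-style attention in PyTorch. It is not the implementation used in this work; the class and variable names are ours, details such as the initialization of the decoder state from the encoder are omitted, and only the dimensions (vocabulary of 50,000, embeddings of size 128, LSTM hidden size 254) follow the description above.

import torch
import torch.nn as nn

VOCAB, EMB, HID = 50_000, 128, 254

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        # Bidirectional LSTM over the (truncated) input article.
        self.lstm = nn.LSTM(EMB, HID, bidirectional=True, batch_first=True)

    def forward(self, src):                       # src: (batch, src_len) token ids
        enc_out, _ = self.lstm(self.embed(src))   # (batch, src_len, 2 * HID)
        return enc_out

class BahdanauAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.w_enc = nn.Linear(2 * HID, HID, bias=False)
        self.w_dec = nn.Linear(HID, HID, bias=False)
        self.v = nn.Linear(HID, 1, bias=False)

    def forward(self, dec_h, enc_out):            # dec_h: (batch, HID)
        scores = self.v(torch.tanh(self.w_enc(enc_out) + self.w_dec(dec_h).unsqueeze(1)))
        attn = torch.softmax(scores, dim=1)        # attention over source positions
        context = (attn * enc_out).sum(dim=1)      # (batch, 2 * HID)
        return context, attn.squeeze(-1)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.cell = nn.LSTMCell(EMB + 2 * HID, HID)
        self.attn = BahdanauAttention()
        self.out = nn.Linear(HID + 2 * HID, VOCAB)

    def forward(self, prev_token, state, enc_out):  # one greedy decoding step
        h, c = state
        context, attn = self.attn(h, enc_out)
        h, c = self.cell(torch.cat([self.embed(prev_token), context], dim=-1), (h, c))
        logits = self.out(torch.cat([h, context], dim=-1))  # scores before softmax
        return logits, (h, c), attn

Greedy decoding then simply feeds the argmax of the logits back in as prev_token at each step; it is this pre-softmax score of the predicted word that LRP redistributes in what follows.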
We now present the Layer-Wise Relevance Propagation (LRP) BIBREF2 technique that we use to attribute importance to the input features, together with how we adapted it to our model and how we generated the saliency maps. LRP redistributes the output of the model from the output layer back to the input by transmitting information backwards through the layers; we call this backwards-propagated importance the relevance. A particularity of LRP is that it attributes both negative and positive relevance: positive relevance is supposed to represent evidence that led to the classifier's result, while negative relevance represents evidence that weighed against the prediction. We initialize the relevance of the output layer to the value of the predicted class before the softmax, and then describe locally the backwards propagation of the relevance from layer to layer. For normal neural network layers we use the form of LRP with the epsilon stabilizer BIBREF2. We write $R_{i \leftarrow j}^{(l,\,l+1)}$ for the relevance received by neuron $i$ of layer $l$ from neuron $j$ of layer $l+1$:

$$R_{i \leftarrow j}^{(l,\,l+1)} = \frac{w_{i \rightarrow j}^{(l,\,l+1)}\,\mathbf{z}_i^{l} + \dfrac{\epsilon\,\operatorname{sign}(\mathbf{z}_j^{l+1}) + \mathbf{b}_j^{l+1}}{D_l}}{\mathbf{z}_j^{l+1} + \epsilon\,\operatorname{sign}(\mathbf{z}_j^{l+1})}\; R_j^{l+1}$$
where $w_{i \rightarrow j}^{(l,\,l+1)}$ is the network's weight parameter set during training, $\mathbf{b}_j^{l+1}$ is the bias of neuron $j$ of layer $l+1$, $\mathbf{z}_i^{l}$ is the activation of neuron $i$ on layer $l$, $\epsilon$ is the stabilizing term, set to 0.00001, and $D_l$ is the dimension of the $l$-th layer. The relevance of a neuron is then computed as the sum of the relevance it receives from the layers above. For LSTM cells we use the method of Arras et al. (2017) to solve the problem posed by the element-wise multiplications of vectors. Arras et al. noted that when such a computation happens inside an LSTM cell, it always involves a gate vector and another vector containing information; the gate vector, whose values lie between 0 and 1, essentially filters the second vector to let the relevant information pass. Accordingly, when we propagate relevance through an element-wise multiplication operation, we give all of the upper layer's relevance to the information vector and none to the gate vector. We use the same method to transmit relevance through the attention mechanism back to the encoder, because Bahdanau's attention BIBREF9 uses element-wise multiplications as well. Figure 2 depicts the end-to-end transmission from the output layer to the input, through the decoder and its attention mechanism and then through the bidirectional encoder. We then sum up the relevance on the word embeddings to obtain token-level relevance, as in Arras et al. (2017). The way we generate saliency maps differs somewhat from the usual LRP setting, as we essentially do not have one classification but 200, one for each word of the summary. We generate a relevance attribution for the first 50 words of the generated summary, since after this point the summaries often repeat themselves. This means that for each text we obtain 50 different saliency maps, each one supposed to represent the relevance of the input for a specific generated word of the summary.
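As an illustration, the two propagation rules above (the epsilon rule for fully connected layers and the gate rule for element-wise multiplications) can be written compactly in NumPy. This is a schematic sketch in our own notation, not the exact code used in this work; it assumes a single fully connected layer with weight matrix W of shape (D_l, D_{l+1}), bias b, lower-layer activations z, upper-layer values z_upper (the z_j^{l+1} of the formula), and upper-layer relevance R_upper.

import numpy as np

EPS = 1e-5  # epsilon stabilizer, as above

def lrp_linear(z, W, b, z_upper, R_upper):
    # Messages R_{i<-j} of the epsilon rule, summed over the upper-layer neurons j.
    sign = np.where(z_upper >= 0, 1.0, -1.0)
    numer = W * z[:, None] + (EPS * sign + b) / z.shape[0]   # shape (D_l, D_{l+1})
    denom = z_upper + EPS * sign                             # stabilized denominator
    messages = numer / denom * R_upper
    return messages.sum(axis=1)                              # relevance of layer l

def lrp_elementwise_gate(R_upper):
    # Element-wise product of a gate vector and an information vector inside an LSTM cell
    # (or in the attention mechanism): all relevance goes to the information vector.
    relevance_information, relevance_gate = R_upper, np.zeros_like(R_upper)
    return relevance_information, relevance_gate

The token-level relevance of an input word is then obtained, as described above, by summing the propagated relevance over its embedding dimensions.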
We now present our results from extracting attributions from the sequence-to-sequence model trained for abstractive text summarization. We first discuss the differences between the 50 saliency maps obtained for each text, and then propose a protocol to validate the mappings. The first observation is that, for one text, the 50 saliency maps are almost identical: each mapping highlights mainly the same input words, with only slight variations in importance. Figure 3 shows an example of two nearly identical attributions for two distant and unrelated words of the summary. The saliency map generated using LRP is also uncorrelated with the attention distribution that participated in the generation of the output word. The attention distribution changes drastically between the words of the generated summary, while the attribution over the input text is not significantly impacted. In one experiment we deleted the relevance propagated through the attention mechanism to the encoder and did not observe much change in the saliency maps. This can be seen as evidence that using the attention distribution as an explanation of the prediction can be misleading: it is not the only information received by the decoder, and the importance the decoder allocates to the attention state might be very low. What seems to happen in this application is that most of the information used is transmitted from the encoder to the decoder, and the attention mechanism at each decoding step only changes marginally how it is used. Quantifying the difference between the attention distribution and the saliency map across multiple tasks is possible future work. The second observation is that the saliency maps do not seem to highlight the right parts of the input for the summary that is generated. The saliency maps in Figure 3 correspond to the summary from Figure 1, and we do not see the word "video" highlighted in the input text, even though it seems important for the output. This leads us to question how good the saliency maps are, in the sense of how well they actually represent the network's use of the input features. We will call this the truthfulness of the attribution with regard to the computation: an attribution is truthful with regard to the computation if it actually highlights the important input features that the network attended to during prediction. We proceed to measure this truthfulness by validating the attributions quantitatively. We propose to validate the saliency maps in a similar way to Arras et al. (2017), by incrementally deleting important words from the input text and observing the change in the resulting generated summaries. We first define what important and unimportant input words mean across the 50 saliency maps of a text. Since the relevance transmitted by LRP can be positive or negative, we average the absolute value of the relevance across the saliency maps to obtain a single ranking of the most relevant words. The idea is that an input word with negative relevance still has an impact on the resulting generated word, even if it does not participate positively, while a word with relevance close to zero should not be important at all. We also tried other aggregations, such as averaging the raw relevance or averaging a scaled absolute value in which negative relevance is scaled down by a constant factor, but averaging the absolute value delivered the best results. We then incrementally delete the most important words (those with the highest average) from the input and compare against a control experiment that deletes the least important words, measuring in both cases the degradation of the resulting summaries.
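Schematically, the ranking and deletion protocol just described could look as follows. This is a hypothetical sketch rather than the exact scripts we used: saliency_maps stands for the (50, input_length) array of relevance scores of one text, and generate_summary for the trained model's decoding function.

import numpy as np

def rank_input_words(saliency_maps):
    # Average the absolute relevance over the 50 per-word saliency maps and
    # return input positions sorted from most to least important.
    importance = np.abs(saliency_maps).mean(axis=0)
    return np.argsort(importance)[::-1]

def deletion_experiment(tokens, saliency_maps, generate_summary, k, important=True):
    # Delete the k most (or, for the control run, least) important input words
    # and regenerate the summary; repeating this for increasing k and comparing
    # the degradation in both settings probes the truthfulness of the attribution.
    order = rank_input_words(saliency_maps)
    to_delete = {int(i) for i in (order[:k] if important else order[-k:])}
    reduced = [t for i, t in enumerate(tokens) if i not in to_delete]
    return generate_summary(reduced)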
We obtain mitigated results: for some texts we observe a quick degradation when deleting important words that is not observed when deleting unimportant words (see Figure 4), but for other test examples we do not observe a significant difference between the two settings (see Figure 5). One might argue that the second summary in Figure 5 is better than the first one, as it forms better sentences, but since the model generates inaccurate summaries we do not wish to make such a statement. These results do allow us to say that the attribution generated for the text at the origin of the summaries in Figure 4 is truthful with regard to the network's computation, and we may use it for further study of that example, whereas for the text at the origin of Figure 5 we should not draw any further conclusions from the generated attribution. One interesting point is that neither saliency map looked better than the other, meaning that there is no apparent way of determining their truthfulness with regard to the computation without a quantitative validation. This leads us to believe that even in simpler tasks a saliency map might make sense to us (for example by highlighting the animal in an image classification task) without actually representing what the network really attended to, or in what way. We implicitly defined the counterfactual case in our experiment: had the important words in the input been deleted, we would have obtained a different summary. Such counterfactuals are, however, more difficult to define for image classification, where they could consist of applying a mask over an image or filtering out a colour or a pattern. We believe that defining a counterfactual and testing it allows us to measure and evaluate the truthfulness of the attributions, and thus to weigh how much we can trust them. In this work we have implemented and applied LRP to a sequence-to-sequence model trained on a task more complex than those usually studied: text summarization. We used previous work to solve the difficulties posed by LRP in LSTM cells and adapted the same technique to the attention mechanism of Bahdanau et al. (2014). We observed a peculiar behaviour of the saliency maps for the words in the output summary: they are almost all identical and seem uncorrelated with the attention distribution. We then proceeded to validate our attributions by averaging the absolute value of the relevance across the saliency maps, obtaining a ranking of the words from most to least important, and deleting either the most or the least important ones. We showed that in some cases the saliency maps are truthful to the network's computation, meaning that they do highlight the input features that the network focused on, but we also showed that in other cases the saliency maps fail to capture the important input features. This brought us to discuss the fact that such attributions are not sufficient by themselves, and that we need to define the counterfactual case and test it in order to measure how truthful the saliency maps are. Future work will look into the saliency maps generated by applying LRP to pointer-generator networks and compare them to our current results, as well as mathematically justifying the averaging performed when validating the saliency maps. Additional work is also needed on the validation of saliency maps with counterfactual tests. The exploitation and evaluation of saliency maps is a very important step and should not be overlooked. </s>