$$f_{w}^{\textrm{MWRW}}\left(\mathbf{W}_{\phi(e)}, \mathbf{W}_{\psi(r)}\right)_{[i]} = \max_{j}\ \cos\left(\mathbf{W}_{\phi(e)[i,:]},\ \mathbf{W}_{\psi(r)[j,:]}\right) \qquad \text{(Eq. 14)}$$

where the weight of the $i$-th word in $\phi(e)$ is the largest cosine similarity score between the $i$-th word embedding in $\mathbf{W}_{\phi(e)}$ and the word embedding matrix of $\psi(r)$, $\mathbf{W}_{\psi(r)}$. This function assigns a lower weight to words that are not relevant to the given relationship and assigns higher scores to words that appear in the relationship or are semantically similar to it. For example, when inferring the target of the partial triple $\langle$Michelle Obama, AlmaMater, ?$\rangle$, MWRW will assign high weights to words like "Princeton", "Harvard", and "University", which include the words that describe the target of the relationship. However, the words that have the highest scores do not always represent the actual target, but instead often represent words that are similar to the relationship name itself. A counterexample is shown in Fig. 2, where, given the relationship "spouse", the word with the highest MWRW score is "married". Although "spouse" is semantically similar to "married", it does not answer the question posed by the partial triple. Instead, we call words with high MWRW weights indicator words, because the correct target words are usually located nearby. In the example case, we can see that the correct target "Barack Obama" appears after the indicator word "married". In order to assign the correct weights to the target words, we improve the content masking by using Maximal Context-Relationship Weights (MCRW) to adjust the weights of each word based on its context:
$$f_{w}\left(\mathbf{W}_{\phi(e)}, \mathbf{W}_{\psi(r)}\right)_{[i]} = \max\left(f_{w}^{\textrm{MWRW}}\left(\mathbf{W}_{\phi(e)}, \mathbf{W}_{\psi(r)}\right)_{[i-k_m:i]}\right) \qquad \text{(Eq. 15)}$$
in which the weight of the $i$-th word in $\phi(e)$ equals the maximum MWRW score of the $i$-th word itself and the previous $k_m$ words. From a neural network perspective, the reweighting function $f_w$ can also be viewed as applying a row-wise max reduction followed by a 1-D max-pooling with a window size of $k_m$ on the matrix product of $\mathbf{W}_{\phi(e)}$ and $\mathbf{W}_{\psi(r)}^{\top}$.

To recap, the relationship-dependent content masking process described here assigns importance weights to words in an entity's description based on the similarity between each word's context and the given relationship.

After non-relevant content is masked, the model needs to learn a single embedding vector from the masked content matrix to compare with the embeddings of candidate target entities. Here we describe how ConMask extracts word-based entity embeddings. We call this process the target fusion function $\xi$, which distills an embedding using the output of Eq. 13.

Initially, we looked for solutions to this problem in recurrent neural networks (RNNs) of various forms. Despite their popularity in NLP-related tasks, recent research has found that RNNs are not good at performing extractive tasks BIBREF25. RNNs also do not work well in our specific setting, because the input of the target fusion is a masked content matrix, which means most of the stage inputs would be zero and hence hard to train on.

In this work we decided to use a fully convolutional network (FCN) as the target fusion structure. A CNN-based structure is well known for its ability to capture peak values using convolution and pooling; therefore an FCN is well suited to extract useful information from the weighted content matrix. Our adaptation of FCNs yields the target fusion function $\xi$, which generates a $k$-dimensional embedding using the output of content masking $\tau(\phi(e), \psi(r))$, where $e$ is either a head or a tail entity from a partial triple.

Figure 3 shows the overall architecture of the target fusion process and its dependent content masking process. The target fusion process has three FCN layers. In each layer, we first use two 1-D convolution operators to perform an affine transformation, then we apply sigmoid as the activation function to the convolved output, followed by batch normalization BIBREF26 and max-pooling. The last FCN layer uses mean-pooling instead of max-pooling to ensure that the output of the target fusion layer always returns a single $k$-dimensional embedding.

Note that the FCN used here is different from the one typically used in computer vision tasks BIBREF27. Rather than reconstructing the input, as is typical in CV, the goal of target fusion is to extract the embedding w.r.t. the given relationship; therefore we do not have the deconvolution operations. Another difference is that we reduce the number of embeddings by half after each FCN layer but do not increase the number of channels, i.e., the embedding size. This is because the input weighted matrix is a sparse matrix with a large portion of zero values, so we are essentially fusing peak values from the input matrix into a single embedding representing the target entity.

Although it is possible to use target fusion to generate all entity embeddings used in ConMask, such a process would result in a large number of parameters. Furthermore, because the target fusion function is an extraction function, it would be odd to apply it to entity names where no extraction is necessary. So we also employ a simple semantic averaging function $\eta(\mathbf{W}) = \frac{1}{k_l}\sum_{i=1}^{k_l}\mathbf{W}_{[i,:]}$ that combines word embeddings to represent entity names and to generate background representations of other textual features, where $\mathbf{W} \in \mathbb{R}^{k_l \times k}$ is the input embedding matrix from the entity description $\phi(\cdot)$ or the entity or relationship name $\psi(\cdot)$.
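To make these operations concrete, the following is a minimal NumPy sketch of the masking pipeline (Eqs. 14-15) and the semantic averaging function $\eta$; the function names and the numerical-stability epsilon are our own additions, not from the released implementation:

```python
import numpy as np

def content_mask(W_desc, W_rel, k_m=6):
    """Relationship-dependent content masking: MWRW (Eq. 14) followed by
    MCRW (Eq. 15), returning the re-weighted description matrix."""
    # Row-normalize so the matrix product yields cosine similarities.
    desc = W_desc / (np.linalg.norm(W_desc, axis=1, keepdims=True) + 1e-8)
    rel = W_rel / (np.linalg.norm(W_rel, axis=1, keepdims=True) + 1e-8)

    # MWRW: row-wise max reduction over the relationship words (Eq. 14).
    mwrw = (desc @ rel.T).max(axis=1)                    # shape: (n_desc,)

    # MCRW: 1-D max-pooling over each word and its k_m predecessors (Eq. 15),
    # so target words inherit the weight of nearby indicator words.
    mcrw = np.array([mwrw[max(0, i - k_m):i + 1].max()
                     for i in range(len(mwrw))])

    return mcrw[:, None] * W_desc                        # weighted content matrix

def semantic_averaging(W):
    """eta(W): mean of the word-embedding rows, used for entity names."""
    return W.mean(axis=0)
```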
To recap: at this point in the model we have generated entity embeddings through the content masking and target fusion operations. The next step is to define a loss function that finds one or more entities in the KG that most closely match the generated embedding. To speed up training and take advantage of the performance boost associated with a listwise ranking loss function BIBREF16, we designed a partial listwise ranking loss function that has both positive and negative target sampling:
$$\mathcal{L}(h,r,t) = \begin{cases} \sum\limits_{h^{+} \in E^{+}} \dfrac{-\log S(h^{+}, r, t \mid E^{+} \cup E^{-})}{|E^{+}|}, & p_c > 0.5 \\[2ex] \sum\limits_{t^{+} \in E^{+}} \dfrac{-\log S(h, r, t^{+} \mid E^{+} \cup E^{-})}{|E^{+}|}, & p_c \le 0.5 \end{cases} \qquad \text{(Eq. 21)}$$
where $p_c$ is the corruption probability drawn from a uniform distribution $U(0,1)$, such that when $p_c > 0.5$ we keep the input tail entity $t$ but do positive and negative sampling on the head entity, and when $p_c \le 0.5$ we keep the input head entity $h$ intact and do sampling on the tail entity. $E^{+}$ and $E^{-}$ are the sampled positive and negative entity sets drawn from the positive and negative target distributions $P^{+}$ and $P^{-}$, respectively. Although a type-constraint or frequency-based distribution may yield better results, here we follow convention and simply apply a uniform distribution for both $P^{+}$ and $P^{-}$. When $p_c > 0.5$, $P^{+}$ is a uniform distribution over entities in $\lbrace h^{+} \mid \langle h^{+}, r, t\rangle \in \mathbf{T}\rbrace$ and $P^{-}$ is a uniform distribution over entities in $\lbrace h^{-} \mid \langle h^{-}, r, t\rangle \notin \mathbf{T}\rbrace$. On the other hand, when $p_c \le 0.5$, $P^{+}$ is a uniform distribution over entities in $\lbrace t^{+} \mid \langle h, r, t^{+}\rangle \in \mathbf{T}\rbrace$ and $P^{-}$ is a uniform distribution over entities in $\lbrace t^{-} \mid \langle h, r, t^{-}\rangle \notin \mathbf{T}\rbrace$. The function $S$ in Eq. 21 is the softmax-normalized output of ConMask:
$$S(h, r, t \mid E^{\pm}) = \begin{cases} \dfrac{\exp(\textrm{ConMask}(h, r, t))}{\sum\limits_{e \in E^{\pm}} \exp(\textrm{ConMask}(e, r, t))}, & p_c > 0.5 \\[2ex] \dfrac{\exp(\textrm{ConMask}(h, r, t))}{\sum\limits_{e \in E^{\pm}} \exp(\textrm{ConMask}(h, r, e))}, & p_c \le 0.5 \end{cases} \qquad \text{(Eq. 22)}$$

Note that Eq. 21 is actually a generalized form of the sampling process used by most existing KGC models. When $|E^{+}| = 1$ and $|E^{-}| = 1$, the sampling method described in Eq. 21 is the same as the triple corruption used by TransE BIBREF6, TransR BIBREF15, TransH BIBREF28, and many other closed-world KGC models. When $E^{+} = \lbrace t \mid \langle h, r, t\rangle \in \mathbf{T}\rbrace$, i.e., the set of all true targets given a partial triple $\langle h, r, ?\rangle$, Eq. 21 is the same as ProjE listwise BIBREF16.
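To illustrate Eqs. 21 and 22, here is a minimal sketch of one training step of the partial listwise loss with uniform target sampling. All names are ours, and the authors' TensorFlow implementation may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def partial_listwise_loss(score_fn, h, r, t, entities, true_triples,
                          n_pos=1, n_neg=4):
    """One sampled training example of the partial listwise loss (Eq. 21),
    with the softmax of Eq. 22 taken over the sampled candidates E+ and E-.
    Assumes at least one positive and one negative target exist."""
    p_c = rng.uniform()
    if p_c > 0.5:
        # Keep (r, t) fixed and sample positive/negative head entities.
        pos = [e for e in entities if (e, r, t) in true_triples]
        neg = [e for e in entities if (e, r, t) not in true_triples]
        score = lambda e: score_fn(e, r, t)
    else:
        # Keep (h, r) fixed and sample positive/negative tail entities.
        pos = [e for e in entities if (h, r, e) in true_triples]
        neg = [e for e in entities if (h, r, e) not in true_triples]
        score = lambda e: score_fn(h, r, e)

    e_pos = list(rng.choice(pos, size=min(n_pos, len(pos)), replace=False))
    e_neg = list(rng.choice(neg, size=min(n_neg, len(neg)), replace=False))

    # Softmax over the sampled candidate set E+ u E- (Eq. 22).
    logits = np.array([score(e) for e in e_pos + e_neg])
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Mean negative log-likelihood of the positive targets (Eq. 21).
    return -np.log(probs[:len(e_pos)]).mean()
```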
The previous section described the design decisions and modelling assumptions of ConMask. In this section we present the results of experiments performed on old and new data sets in both open-world and closed-world KGC tasks.

Training parameters were set empirically, but without fine-tuning. We set the word embedding size $k = 200$ and the maximum entity content and name length $k_c = k_n = 512$. The word embeddings are the publicly available pre-trained 200-dimensional GloVe embeddings BIBREF29. The content masking window size is $k_m = 6$; the number of FCN layers is $k_{fcn} = 3$, where each layer has 2 convolutional layers and a batch normalization layer with a moving average decay of 0.9, followed by dropout with a keep probability $p = 0.5$. Max-pooling in each FCN layer has a pool size and stride size of 2. The mini-batch size used by ConMask is $k_b = 200$. We use Adam as the optimizer, with a learning rate of $10^{-2}$. The target sampling set sizes $|E^{+}|$ and $|E^{-}|$ are 1 and 4, respectively. All open-world KGC models were run for at most 200 epochs, and all compared models used their default parameters. ConMask is implemented in TensorFlow; the source code is available at https://github.com/bxshi/ConMask.

The Freebase 15K (FB15k) data set is widely used in KGC, but FB15k is fraught with reversed or synonym triples BIBREF30 and does not provide sufficient textual information for content-based KGC methods to use. Due to the limited text content and the redundancy found in the FB15k data set, we introduce two new data sets, DBPedia50k and DBPedia500k, for both open-world and closed-world KGC tasks. Statistics of all data sets are shown in Tab. 2.

The methodology used to evaluate the open-world and closed-world KGC tasks is similar to related work. Specifically, we randomly selected 90% of the entities in the KG and induced a KG subgraph using the selected entities, and from this reduced KG we further removed 10% of the relationships, i.e., graph edges, to create $\textrm{KG}_{\textrm{train}}$. All other triples not included in $\textrm{KG}_{\textrm{train}}$ are held out for the test set.

For the open-world KGC task, we generated a test set from the 10% of entities that were held out of $\textrm{KG}_{\textrm{train}}$. This held-out set has relationships that connect the test entities to the entities in $\textrm{KG}_{\textrm{train}}$. So, given a held-out entity-relationship partial triple that was not seen during training, our goal is to predict the correct target entity within $\textrm{KG}_{\textrm{train}}$.

To mitigate the excessive cost involved in computing scores for all entities in the KG, we applied a target filtering method to all KGC models. Namely, for a given partial triple $\langle h, r, ?\rangle$ or $\langle ?, r, t\rangle$, if a target entity candidate has not been connected via relationship $r$ before in the training set, then it is skipped; otherwise we use the KGC model to calculate the actual ranking score. Simply put, this removes relationship-entity combinations that have never before been seen and are likely to represent nonsensical statements (see the sketch below).

The experiment results are shown in Tab. 1. As a naive baseline, we include the target filtering baseline method in Tab. 1, which assigns random scores to all entities that pass the target filtering.
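A minimal sketch of this target filtering step, under the assumption that training triples are available as (head, relationship, tail) tuples (the function names are ours):

```python
from collections import defaultdict

def build_target_filter(train_triples):
    """For each relationship, remember which entities it has connected to
    as heads and as tails in the training set."""
    heads, tails = defaultdict(set), defaultdict(set)
    for h, r, t in train_triples:
        heads[r].add(h)
        tails[r].add(t)
    return heads, tails

def candidate_tails(tails, r):
    # Candidates for a partial triple <h, r, ?>: only entities previously
    # seen as a tail of relationship r; everything else is skipped.
    return tails[r]
```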
Semantic Averaging is a simplified model that uses contextual features only. DKRL is a two-layer CNN model that generates entity embeddings from the entity description BIBREF20. We implemented DKRL ourselves and removed the structure-related features so that it can work under open-world KGC settings.

We find that the extraction features in ConMask do boost mean rank performance by at least 60% on both data sets compared to the extraction-free Semantic Averaging. Interestingly, the performance boost on the larger DBPedia500k data set is more significant than on the smaller DBPedia50k, which indicates that the extraction features are able to find useful textual information from the entity descriptions.

Because the open-world assumption is less restrictive than the closed-world assumption, it is possible for ConMask to perform closed-world tasks, even though it was not designed to do so. So, in Tab. 3 we also compare the ConMask model with other closed-world methods on the standard FB15k data set as well as on the two new data sets. Results from TransR are missing for the DBPedia500k data set because the model did not complete training after 5 days. We find that ConMask sometimes outperforms closed-world methods on the closed-world task. ConMask especially shows improvement on the DBPedia50k data set; this is probably because the random sampling procedure used to create DBPedia50k generates a sparse graph, and closed-world KGC models, which rely exclusively on structural features, have a more difficult time with subsampled KGs.

In this section we elaborate on some actual prediction results and show examples that highlight the strengths and limitations of the ConMask model. Table 4 shows 4 KGC examples. In each case, ConMask was provided the head and the relationship and asked to predict the tail entity. In most cases, ConMask successfully ranks the correct entities within the top-3 results. Gabrielle Stanton's notableWork is an exception. Although Stanton did work on Star Trek, DBPedia indicates that her most notable work is actually The Vampire Diaries, which ranked 4th. The reason for this error is that the indicator word for The Vampire Diaries was "consulting producer", which was not highly correlated with the relationship name "notable work" from the model's perspective.

Another interesting result was the prediction given for the partial triple $\langle$The Time Machine, writer, ?$\rangle$. The ConMask model ranked the correct screenwriter, David Duncan, as the 2nd candidate, but the name "David Duncan" does not actually appear in the film's description. Nevertheless, the ConMask model was able to capture the correct relationship because the words "The Time Machine" appear in the description of David Duncan as one of his major works.

Although ConMask outperforms other KGC models on metrics such as Mean Rank and MRR, it still has some limitations and room for improvement. First, due to the nature of the relationship-dependent content masking, some entities with names that are similar to the given relationship, such as the Language entity in the results of the languageFamily relationship and the Writer entity in the results of the writer relationship, are ranked with a very high score. In most cases the correct target entity will be ranked above relationship-related entities, yet these entities still hurt the overall performance. It may be easy to apply a filter to modify the list of predicted target entities so that entities that are the same as the relationship are rearranged; we leave this task as a matter for future work.

In the present work we introduced a new
open-world Knowledge Graph Completion model, ConMask, that uses relationship-dependent content masking, fully convolutional neural networks, and semantic averaging to extract relationship-dependent embeddings from the textual features of entities and relationships in KGs. Experiments on both open-world and closed-world KGC tasks show that the ConMask model performs well in both tasks. Because of problems found in the standard KGC data sets, we also released two new DBPedia data sets for KGC research and development. The ConMask model is an extraction model, which currently can only predict relationships if the requisite information is expressed in the entity's description. The goal for future work is to extend ConMask with the ability to find new or implicit relationships.
The STEM-ECR Dataset: Grounding Scientific Entity References in STEM Scholarly Content to Authoritative Encyclopedic and Lexicographic Sources

Jennifer D'Souza, Anett Hoppe, Arthur Brack, Mohamad Yaser Jaradeh, Sören Auer, Ralph Ewerth
TIB Leibniz Information Centre for Science and Technology, Hannover, Germany
{jennifer.dsouza, anett.hoppe, arthur.brack, yaser.jaradeh, auer, ralph.ewerth}@tib.eu

We introduce the STEM (Science, Technology, Engineering, and Medicine) Dataset for Scientific Entity Extraction, Classification, and Resolution, version 1.0 (STEM-ECR v1.0). The STEM-ECR v1.0 dataset has been developed to provide a benchmark for the evaluation of scientific entity extraction, classification, and resolution tasks in a domain-independent fashion. It comprises abstracts in 10 STEM disciplines that were found to be the most prolific ones on a major publishing platform. We describe the creation of such a multidisciplinary corpus and highlight the obtained findings in terms of the following features: (1) a generic conceptual formalism for scientific entities in a multidisciplinary scientific context; (2) the feasibility of the domain-independent human annotation of scientific entities under such a generic formalism; (3) a performance benchmark obtainable for automatic extraction of multidisciplinary scientific entities using BERT-based neural models; (4) a delineated 3-step entity resolution procedure for human annotation of the scientific entities via encyclopedic entity linking and lexicographic word sense disambiguation; and (5) human evaluations of Babelfy-returned encyclopedic links and lexicographic senses for our entities. Our findings cumulatively indicate that human annotation and automatic learning of multidisciplinary scientific concepts, as well as their semantic disambiguation in a wide-ranging setting as STEM, is reasonable.

Entity Recognition, Entity Classification, Entity Resolution, Entity Linking, Word Sense Disambiguation, Evaluation Corpus, Language Resource

By starting with a STEM corpus of scholarly abstracts for annotating with scientific entities, we differ from existing work addressing this task, since we go beyond the domain restriction that so far seems to encompass scientific IE. For entity annotations, we rely on existing scientific concept formalisms BIBREF0 BIBREF1 BIBREF2 that appear to propose generic scientific concept types that can bridge the domains we consider, thereby offering a uniform entity selection framework. In the following subsections, we describe our annotation task in detail, after which we conclude with benchmark results.

The corpus for computing inter-annotator agreement was annotated by two postdoctoral researchers in Computer Science. To develop annotation guidelines, a small pilot annotation exercise was performed on 10 abstracts (one per domain) with a set of surmised, generically applicable scientific concepts, such as Task, Process, Material, Object, Method, Data, Model, Results, etc., taken from existing work. Over the course of three annotation trials, these concepts were iteratively pruned, where concepts that did not cover all domains were dropped, resulting in four finalized concepts, viz. Process, Method, Material, and Data, as our resultant set of generic scientific concepts (see Table TABREF3 for their definitions). The subsequent annotation task entailed linguistic considerations for the precise selection of entities as one of the four scientific concepts, based on their part-of-speech tag or phrase type: Process entities were verbs (e.g., "prune" in Agr), verb phrases (e.g., "integrating results" in Mat), or noun
phrases (e.g., "this transport process" in Bio); Method entities comprised noun phrases containing phrase endings such as "simulation", "method", "algorithm", "scheme", "technique", "system", etc.; Material entities were nouns or noun phrases (e.g., "forest trees" in Agr, "electrons" in Ast or Che, "tephra" in ES); and the majority of the Data entities were numbers, otherwise noun phrases (e.g., "25$\pm$1.5 km s$^{-1}$" representing a velocity value in Ast, "plant available P status" in Agr).

Summarily, the resulting annotation guidelines hinged upon the following five considerations: (1) To ensure consistent scientific entity spans, entities were annotated as definite noun phrases where possible; in later stages, the extraneous determiners and articles could be dropped as deemed appropriate. (2) Coreferring lexical units for scientific entities in the context of a single abstract were annotated with the same concept type. (3) Quantifiable lexical units, such as numbers (e.g., years "1999", measurements "4 km") or even phrases (e.g., "vascular risk"), were annotated as Data. (4) Where possible, the most precise text reference, i.e., phrases with qualifiers, regarding materials used in the experiment was annotated; for instance, "carbon atoms in graphene" was annotated as a single Material entity and not separately as "carbon atoms", "graphene". (5) Any confusion in classifying scientific entities as one of the four types was resolved using the following concept precedence: Method > Process > Data > Material, where the concept appearing earlier in the list was preferred.

After finalizing the concepts and updating the guidelines, the final annotation task proceeded in two phases. In phase I, five abstracts per domain (i.e., 50 abstracts) were annotated by both annotators, and the inter-annotator agreement was computed using Cohen's kappa BIBREF4. Results showed a moderate inter-annotator agreement at 0.52 kappa. Next, in phase II, one of the annotators interviewed subject specialists in each of the ten domains about the choice of concepts and her annotation decisions on their respective domain corpus. The feedback from the interviews was systematically categorized into error types, and these errors were discussed by both annotators. Following these discussions, the 50 abstracts from phase I were independently re-annotated. The annotators could obtain a substantial overall agreement of 0.76 kappa after phase II.

In Table TABREF16 we report the IAA scores obtained per domain and overall. The scores show that the annotators had a substantial agreement in seven domains, while only a moderate agreement was reached in three domains, viz. Agr, Mat, and Ast.

We discuss some of the changes the interviewer annotator made in phase II after consultation with the subject experts. In total, 21% of the phase I annotations were changed. Process accounted for a major proportion (nearly 54%) of the changes. Considerable inconsistency was found in annotating verbs like "increasing", "decreasing", "enhancing", etc. as Process or not. Interviews with subject experts confirmed that they were a relevant detail to the research investigation and hence should be annotated, so 61% of the Process changes came from additionally annotating these verbs. Material was the second predominantly changed concept in phase II, accounting for 23% of the overall changes. Nearly 32% of the changes under Material came from consistently re-annotating phrases about models, tools, and systems. Spatial locations, where they were an essential part of the investigation, such as in the Ast and ES domains, were decided to be included in the phase II set as Material, accounting for another 22% of its changes. Finally, there were some changes that emerged from
lack of domain expertise. This was mainly in the medical domain, with 43% of the overall changes resolving confusion in annotating the Process and Method concept types. Most of the remaining changes were based on the treatment of conjunctive spans or lists.

Subsequently, the remaining 60 abstracts (six per domain) were annotated by one annotator. This last phase also involved reconciliation of the earlier annotated 50 abstracts to obtain a gold standard corpus.

Table TABREF17 shows our annotated corpus characteristics. Our corpus comprises a total of 6,127 scientific entities, including 2,112 Process, 258 Method, 2,099 Material, and 1,658 Data entities. The number of entities per abstract directly correlates with the length of the abstracts (Pearson's R = 0.97). Among the concepts, Process and Material directly correlate with abstract length (R = 0.8 and 0.83, respectively), while Data has only a slight correlation (R = 0.35) and Method has no correlation (R = 0.02). In Figure FIGREF18, we show an example instance of a manually created text graph from the scientific entities in one abstract. The graph highlights that linguistic relations such as synonymy, hypernymy, and meronymy, as well as OpenIE relations, are poignant even between scientific entities.

In the second stage of the study, we perform word sense disambiguation and link our entities to authoritative sources.

Aside from the four scientific concepts facilitating a common understanding of scientific entities in a multidisciplinary setting, the fact that they are just four made the human annotation task feasible. Utilizing additional concepts would have resulted in a prohibitively expensive human annotation task. Nevertheless, there are existing datasets, particularly in the biomedical domain (e.g., GENIA BIBREF6), that have adopted the conceptual framework in rich, domain-specific semantic ontologies. Our work, while related, is different, since we target the annotation of multidisciplinary scientific entities, which facilitates a low annotation entrance barrier to producing such data. This is beneficial since it enables the task to be performed in a domain-independent manner by researchers, but perhaps not crowdworkers, unless screening tests for a certain level of scientific expertise are created. Nonetheless, we recognize that the four categories might be too limiting for real-world usage. Further, the scientific entities from stage 1 remain susceptible to subjective interpretation without additional information. Therefore, in a similar vein to adopting domain-specific ontologies, we now perform entity linking (EL) to Wikipedia and word sense disambiguation (WSD) to Wiktionary.

The same pair of annotators as before were involved in this stage of the study to determine the annotation agreement. During the annotation procedure, each annotator was shown the entities grouped by domain and file name in Google Excel Sheet columns, alongside a view of the current abstract of entities being annotated in the BRAT interface (stenetorp2012brat) for context information about the entities. For entity resolution, i.e., linking and disambiguation, the annotators had local installations of specific timestamped Wikipedia and Wiktionary dumps, to enable future persistent references to the links, since the Wiki sources are actively revised. They queried the local dumps using the DKPro JWPL tool BIBREF8 for Wikipedia and the DKPro JWKTL tool BIBREF9 for Wiktionary, where both tools enable optimized search through the large Wiki data volume. Through iterative pilot annotation trials on the same pilot dataset as before, the annotators delineated an ordered
annotation procedure, depicted in the flowchart in Figure FIGREF28. There are two main annotation phases, viz. a preprocessing phase (determining linkability; determining whether an entity is decomposable into shorter collocations) and the entity resolution phase. The actual annotation task then proceeded, in which, to compute agreement scores, the annotators worked on the same set of 50 scholarly abstracts that they had used earlier to compute the scores for the scientific entity annotations.

In this first step, entities that conveyed a sense of scientific jargon were deemed linkable. A natural question that arises in the context of the Linkability criterion is: which stage 1 annotated scientific entities were now deemed unlinkable? They were: (1) Data entities that are numbers; (2) entities that are coreference mentions, which as isolated units lost their precise sense (e.g., "development"); and (3) Process verbs (e.g., "decreasing", "reconstruct", etc.). Still, having identified these cases, a caveat remained: except for entities of type Data, the remaining decisions made in this step involved a certain degree of subjectivity because, for instance, not all Process verbs were unlinkable (e.g., "flooding"). Nonetheless, at the end of this step, the annotators obtained a high IAA score at 0.89 kappa. From the agreement scores, we found that the Linkability decisions could be made reliably and consistently on the data.

While preference was given to annotating non-compositional noun phrases as scientific entities in stage 1, consecutive occurrences of entities of the same concept type separated only by prepositions or conjunctions were merged into longer spans. As examples, consider the phrases "geysers on south polar region" and "plume of water ice molecules and dust" in Figure FIGREF18. These phrases can be meaningfully split as "geysers" and "south polar region" for the first example, and "plume", "water ice molecules", and "dust" for the second. As demonstrated in these examples, the stage 1 entities we split in this step are syntactically flexible multi-word expressions, which did not have a strict constraint on composition BIBREF10. For such expressions, we query Wikipedia or Google to identify their splits, judging from the number of results returned and whether, in the results, the phrases appear in authoritative sources (e.g., as overview topics in publishing platforms such as ScienceDirect). Since search engines operate on a vast amount of data, they are a reliable source for determining phrases with a strong statistical regularity, i.e., for determining collocations. With a focus on obtaining agreement scores for entity resolution, the annotators bypassed this stage for computing independent agreement and attempted it mutually, as follows: one annotator determined all splits wherever required first; the second annotator then acted as judge by going through all the splits and proposing new splits in case of disagreement. The disagreements were discussed by both annotators, and the previous steps were repeated iteratively until the dataset was uniformly split. After this stage, both annotators had the same set of entities for resolution.

In this stage, the annotators resolved each entity from the previous step to encyclopedic and lexicographic knowledge bases. While in principle multiple knowledge sources can be leveraged, this study only examines scientific entities in the context of their Wiki-linkability. Wikipedia, as the largest online encyclopedia with nearly 5.9 million English articles, offers a wide coverage of real-world entities and, based on its vast community of editors with editing patterns
at the rate of 1.8 edits per second, is considered a reliable source of information. It is pervasively adopted in automatic EL tasks BIBREF11 BIBREF12 BIBREF13 to disambiguate the names of people, places, organizations, etc. to their real-world identities. We shift from this focus on proper names, which has been the traditional Wikification EL purpose, to Wikipedia's thus far seemingly less tapped-in conceptual encyclopedic knowledge of nominal scientific entities. Wiktionary is the largest freely available dictionary resource. Owing to its vast community of curators, it rivals the traditional expert-curated lexicographic resource WordNet BIBREF14 in terms of coverage and updates, where the latter evolves more slowly. For English, Wiktionary has nine times as many entries and at least five times as many senses compared to WordNet. As a more pertinent neologism in the context of our STEM data, consider the sense of the term "dropout" as a method for regularizing neural network algorithms, which is already present in Wiktionary. While WSD has traditionally used WordNet, for its high-quality semantic network and longer prevalence in the linguistics community (cf. Navigli (navigli2009word) for a comprehensive survey), we adopt Wiktionary, thus maintaining our focus on collaboratively curated resources. In WSD, entities from all parts of speech are enriched w.r.t. language and wordsmithing, but it excludes in-depth factual and encyclopedic information, which otherwise is contained in Wikipedia. Thus, Wikipedia and Wiktionary are viewed as largely complementary.

Given a scholarly abstract $A$ comprising a set of entities $E = \lbrace e_1, \ldots, e_N\rbrace$, the annotation goal is to produce a mapping from $E$ to a set of Wikipedia pages $(p_1, \ldots, p_N)$ and Wiktionary senses $(s_1, \ldots, s_N)$ as $R = \lbrace (p_1, s_1), \ldots, (p_N, s_N)\rbrace$. For entities without a mapping, the corresponding $p$ or $s$ refers to Nil.
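As an illustration only (the record and field names below are ours, not the dataset's released schema), the target of the annotation can be thought of as a per-entity record:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EntityResolution:
    """One element (p_i, s_i) of the mapping R for entity e_i."""
    mention: str                      # surface form of the entity e_i
    wikipedia_page: Optional[str]     # p_i: page title, or None for Nil
    wiktionary_sense: Optional[str]   # s_i: sense gloss, or None for Nil

# E.g., the neologism "dropout" discussed above might resolve to a
# hypothetical Wikipedia page and Wiktionary sense along these lines:
example = EntityResolution(
    mention="dropout",
    wikipedia_page="Dropout_(neural_networks)",
    wiktionary_sense="a method for regularizing neural network algorithms",
)
```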
The annotators followed comprehensive guidelines for ER, including exceptions. E.g., the conjunctive phrase "acid/alkaline phosphatase activity" was semantically treated as the following two phrases, "acid phosphatase activity" or "alkaline phosphatase activity", for EL; however, in the text it was retained as "acid and alkaline phosphatase activity". Since WSD is performed over exact wordforms, without assuming any semantic extension, it was not performed for "acid". Annotations were also made for complex forms of reference, such as meronymy (e.g., the space instrument CAPS to the spacecraft wiki:Cassini-Huygens, of which it is a part) or hypernymy (e.g., "parents" in "genepool parents" to wiki:Ancestor).

As a result of the annotation task, the annotators obtained an 82.87% rate of agreement in the EL task and a kappa score of 0.86 in the WSD task. Contrary to WSD expectations as a challenging linguistics task BIBREF15, we show high agreement; this we attribute to the entities' direct scientific sense and availability in Wiktionary (e.g., "dropout"). Subsequently, the ER annotation for the remaining 60 abstracts (six per domain) was performed by one annotator. This last phase also involved reconciliation of the earlier annotated 50 abstracts to obtain a gold standard corpus.

In this stage 2 corpus, linkability of the scientific entities was determined at 74.6%. Of these, 61.7% were split into shorter collocations, at 1.74 splits per split entity. Detailed statistics are presented in Table TABREF36. In the table, the domains are ranked by the total number of their linkable entities (fourth column). Ast has the highest proportion of linked entities at 87.3% (comprising 10.4% of all the linked entities) and of disambiguated entities at 71.4% (forming 8.5% of the overall disambiguated entities). From an EL perspective, we surmise that articles on space topics are well represented in Wikipedia. For WSD, Bio, ES, and Med predictably have the least proportion of disambiguated entities, at 52.3%, 54.6%, and 55.5%, respectively, since of all our domains these especially rely on a high degree of scientific jargon, while WSD generally tends to be linguistically oriented in a generic sense. As a summary, linked and disambiguated entities had a high correlation with the total linkable entities (R = 0.98 and 0.89, respectively).

In Table TABREF37, the ER annotation results are shown as POS tag distributions. The POS tags were obtained from Wiktionary, where entities that couldn't be disambiguated are tagged as SW (Single Word) or MWE (Multi-Word Expression). These tags have a coarser granularity compared to the traditionally followed Penn Treebank tags, with some unconventional tagging patterns (e.g., "North Sea" as NNP, "in vivo" as ADJ). From the distributions, except for nouns being the most frequent EL and WSD instances, the rest of the table differs significantly between the two tasks, in a sense reflecting the nature of the tasks. While MWEs are the second-highest EL instances, the corresponding PHRASE type is least represented in WSD. In contrast, while adverbs are the second highest in WSD, they are least in EL. We do not observe a significant impact of the long-tailed list phenomenon of unresolved entities in our data (cf. Table TABREF36): only 17% did not have EL annotations. Results on more recent publications should perhaps serve more conclusively in this respect for newly introduced concepts; the abstracts in our dataset were published between 2012 and 2014.

The STEM-ECR v1.0 corpus of scientific abstracts offers multidisciplinary Process, Method, Material, and Data entities that are disambiguated using Wiki-based encyclopedic and lexicographic sources, thus facilitating links between scientific publications and real-world knowledge (see the concept enrichment we obtain from Wikipedia for our entities in the Figure). We have found that these Wikipedia categories do enable a semantic enrichment of our entities over our generic four-concept formalism of Process, Material, Method, and Data; as an illustration, the top 30 Wiki categories for each of our four generic concept types are shown in the Appendix. Further, considering the various domains in our multidisciplinary STEM corpus, notably the inclusion of understudied domains like Mathematics, Astronomy, Earth Science, and Material Science makes our corpus particularly unique w.r.t. the investigation of their scientific entities. This is a step toward exploring domain independence in scientific IE. Our corpus can be leveraged for machine learning experiments in several settings: as a vital active-learning testbed for curating more varied entity representations BIBREF16; to explore domain-independence versus domain-dependence aspects in scientific IE; for EL and WSD extensions to other ontologies or lexicographic sources; and as a knowledge resource to train a reading machine such as PIKES BIBREF17 or FRED BIBREF18 that generates more knowledge from massive streams of interdisciplinary scientific articles. We plan to extend this corpus with relations to enable building knowledge representation models, such as knowledge graphs, in a domain-independent manner.

We thank the anonymous reviewers for their comments and suggestions. We also thank the subject specialists at TIB for their helpful feedback in the first part of this study. This work was co-funded by the European Research Council for the project ScienceGRAPH (Grant agreement ID: 819536) and by the TIB Leibniz
Information Centre for Science and Technology.

A1. Proportion of the Generic Scientific Entities

To offer better insights into our STEM corpus for its scientific entity annotations made in part 1, in Figure FIGREF40 below we visually depict the proportion of Process, Method, Material, and Data entities per domain. The figure serves as a complementary view of our corpus compared with the dataset statistics shown in Table TABREF17. It shows that the Ast domain has the highest proportion of scientific entities overall. On the other hand, per generic type, Bio has the most Process entities, CS has the most Method entities, Ast has the most Material (closely followed by Agr), and Eng has the most Data.

A2. Cohen's kappa Computation Setup in Section 4.1.2

Linkability: Given the stage 1 scientific entities, the annotators could make one of two decisions: (a) an entity is linkable, or (b) an entity is unlinkable. These decisions were assigned numeric indexes, i.e., 1 for decision (a) and -1 for decision (b), and can take on one of four possible combinations based on the two annotators' decisions: (1,1), (1,-1), (-1,1), and (-1,-1). The kappa scores were then computed on this data representation.

WSD Agreement: In order to compute the WSD agreement, the Wiktionary structure for organizing words needed to be taken into account. Its structure is as follows: each word in the Wiktionary lexicographic resource is categorized based on etymology, and within each etymological category by the various part-of-speech tags the word can take. Finally, within each POS type is a gloss list, where each gloss corresponds to a unique word sense. Given the above-mentioned Wiktionary structure, the initial setup for the blind WSD annotation task entailed that the annotators were given the same reference POS tags within an etymology for the split single-word entities in the corpus. Next, as data to compute kappa scores, each annotator-assigned gloss sense was given a numeric index, and agreement was computed based on matches or non-matches between indexes.
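A minimal sketch of how Cohen's kappa can be computed from such paired decision indexes; using scikit-learn here is our own choice, as the paper does not state which implementation was used:

```python
from sklearn.metrics import cohen_kappa_score

# Linkability decisions of the two annotators, encoded as described above:
# 1 = linkable, -1 = unlinkable, one entry per scientific entity.
annotator_a = [1, 1, -1, 1, -1, 1, 1, -1]
annotator_b = [1, 1, -1, -1, -1, 1, 1, 1]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")
```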
A3. Per-domain Inter-annotator Agreement for Entity Resolution

To supplement the overall Inter-Annotator Agreement (IAA) scores reported in Section 4.1.2 (Entity Resolution (ER) Annotation) for the EL and WSD tasks, in Table TABREF43 below we additionally report the IAA scores for our ER tasks, i.e., EL and WSD, per domain in the STEM-ECR corpus. First, considering the domains where the highest ER agreement scores were obtained: for EL, the IAA score was highest in the MS domain, while for WSD, the IAA score was highest in the Bio domain. Next, considering the domains where the agreement was least for the two tasks: we found that the EL agreement was least for CS and the WSD agreement was least for Mat. The low EL agreement can be attributed to two main cases: only one of the annotators found a link, or the annotators linked to related pages on the same theme as the entity (e.g., wiki:Rule-based modeling versus wiki:Rule-based machine learning for "rule-based system"). In the case of the low WSD agreement obtained on Mat, we see that, owing to broad terms like "set", "matrix", "groups", etc. in the domain, which could be disambiguated to more than one Wiktionary sense correctly, the IAA agreement was low.

A4. Babelfy's Precision (P) and Recall (R) Computation for Entity Resolution

For the P and R scores reported in the Figure, the true positives (TP), false negatives (FN), true negatives (TN), and false positives (FP) were computed as follows: TP: human-annotated entities that have an EL/WSD match with the Babelfy results (for Nil, a match is considered as no result from the automatic system); FN: human-annotated entities that have no EL/WSD match with the Babelfy results; TN: spurious Babelfy-created strings as entities that do not have an EL/WSD result; and FP: spurious Babelfy-created entities that have an EL/WSD result.

A5. Top 30 Wikipedia Categories for Process, Method, Material, and Data

In part 1 of the study, we categorized the scientific entities by our four-generic-concept formalism comprising Process, Method, Material, and Data. Linking the entities to Wikipedia further enables their broadened categorization. While the Figure depicts the rich set of Wikipedia categories obtained overall, here, in Tables TABREF44 and TABREF45, we show the top 30 Wikipedia categories for the scientific entities by their four concept types. We observe that most of the Wikipedia categories pertinently broaden the semantic expressivity of each of our four concepts. Further, in each type they are diverse, reflecting the underlying data domains in our corpus. As examples, consider the Wikipedia categories for the Data scientific entities: the SI base quantities category over the entity "Kelvin" in Che, Fluid dynamics in the Eng and MS domains, and Solar calendars in the Ast domain.
Using Gaussian Processes for Rumour Stance Classification in Social Media

There is an increasing need to interpret and act upon rumours spreading quickly through social media during breaking news, where new reports are released piecemeal and often have an unverified status at the time of posting. Previous research has posited the damage that the diffusion of false rumours can cause in society, and that corrections issued by news organisations or state agencies, such as the police, may not necessarily achieve the desired effect sufficiently quickly BIBREF0 BIBREF1. Being able to determine the accuracy of reports is therefore crucial in these scenarios. However, the veracity of rumours in circulation is usually hard to establish BIBREF2, since as many views and testimonies as possible need to be assembled and examined in order to reach a final judgement. Examples of rumours that were later disproven after being widely circulated include a 2010 earthquake in Chile, where rumours of a volcano eruption and a tsunami warning in Valparaíso spawned on Twitter BIBREF3. Another example is the England riots in 2011, where false rumours claimed that rioters were going to attack Birmingham's Children's Hospital and that animals had escaped from London Zoo BIBREF4.

Previous work by ourselves and others has argued that looking at how users in social media orient to rumours is a crucial first step towards making an informed judgement on the veracity of a rumourous report BIBREF5 BIBREF6 BIBREF3. For example, in the case of the riots in England in August 2011, Procter et al. manually analysed the stance expressed by users in social media towards rumours BIBREF4. Each tweet discussing a rumour was manually categorised as supporting, denying, or questioning it. It is obvious that manual methods have their disadvantages in that they do not scale well; the ability to perform stance categorisation of tweets in an automated way would be of great use in tracking rumours, flagging those that are largely denied or questioned as being more likely to be false.

Determining the stance of social media posts automatically has been attracting increasing interest in the scientific community in recent years, as this is a useful first step towards more in-depth rumour analysis. Work on automatic rumour stance classification, however, is still in its infancy, with some methods ignoring temporal ordering and rumour identities (e.g., BIBREF10), while others are rule-based and thus of unclear generalisability to new rumours BIBREF7. Our work advances the state-of-the-art in tweet-level stance classification through multi-task learning and Gaussian Processes.

This article substantially extends our earlier short paper BIBREF11, firstly by using a second dataset, which enables us to test the generalisability of our results. Secondly, a comparison against additional baseline classifiers and recent state-of-the-art approaches has been added to the experimental section. Lastly, we carried out a more thorough analysis of the results, now including per-class performance scores, which furthers our understanding of rumour stance classification.

In comparison to the state-of-the-art, our approach is novel in several crucial aspects. Based on the assumption of a common underlying linguistic signal in rumours on different topics, we build a transfer learning system based on Gaussian Processes that can classify stance in newly emerging rumours. The paper reports results on two different rumour datasets and explores two different experimental settings: without any training data, and with
very limited training data. We refer to these as the Leave One Out and Leave Part Out settings, respectively. Our results demonstrate that Gaussian Process-based multi-task learning leads to significantly improved performance over state-of-the-art methods and competitive baselines, as demonstrated on two very different datasets. The classifier relying on Gaussian Processes performs particularly well over the rest of the baseline classifiers in the Leave Part Out setting, proving that it does particularly well in determining the distribution of supporting, denying, and questioning tweets associated with a rumour. Estimating the distribution of stances is the key aspect for which our classifier performs especially well compared to the baseline classifiers.

This section provides a more in-depth motivation of the rumour stance detection task and an overview of the state-of-the-art methods and their limitations. First, however, let us start by introducing the formal definition of a rumour. There have been multiple attempts at defining rumours in the literature. Most of them are complementary to one another, with slight variations depending on the context of their analyses. The core concept that most researchers agree on matches the definition that major dictionaries provide, such as the Oxford English Dictionary defining a rumour as "a currently circulating story or report of uncertain or doubtful truth". For instance, DiFonzo and Bordia BIBREF12 defined rumours as "unverified and instrumentally relevant information statements in circulation". Researchers have long looked at the properties of rumours to understand their diffusion patterns and to distinguish them from other kinds of information that people habitually share BIBREF13. Allport and Postman BIBREF2 claimed that rumours spread due to two factors: people want to find meaning in things, and, when faced with ambiguity, people try to find meaning by telling stories. The latter factor also explains why rumours tend to change over time, becoming shorter, sharper, and more coherent. This is the case, it is argued, because in this way rumours explain things more clearly. On the other hand, Rosnow BIBREF14 claimed that there are four important factors for rumour transmission: rumours must be outcome-relevant to the listener, must increase personal anxiety, be somewhat credible, and be uncertain. Furthermore, Shibutani BIBREF15 defined rumours to be "a recurrent form of communication through which men [sic] caught together in an ambiguous situation attempt to construct a meaningful interpretation of it by pooling their intellectual resources; it might be regarded as a form of collective problem-solving". In contrast with these three theories, Guerin and Miyazaki BIBREF16 state that a rumour is a form of relationship-enhancing talk. Building on their previous work, they recall that many ways of talking serve the purpose of forming and maintaining social relationships; rumours, they say, can be explained by such means.

In our work, we adhere to the widely accepted fact that rumours are unverified pieces of information. More specifically, following BIBREF5, we regard a rumour in the context of breaking news as "a circulating story of questionable veracity, which is apparently credible but hard to verify, and produces sufficient skepticism and/or anxiety so as to motivate finding out the actual truth".

One particularly influential piece of work in the field of rumour analysis in social media is that by Mendoza et al. BIBREF3. By manually analysing the data from the earthquake in Chile in 2010, the authors selected 7 confirmed truths and 7 false rumours, each consisting of close to 1000
tweets or more. The veracity value of the selected stories was corroborated by using reliable sources. Each tweet from each of the news items was manually classified into one of the following classes: affirmation, denial, questioning, unknown, or unrelated. In this way, each tweet was classified according to the position it showed towards the topic it was about. The study showed that a much higher percentage of tweets about false rumours deny the respective rumours (approximately 50%). This is in contrast to rumours later proven to be true, where only 0.3% of tweets were denials. Based on this, the authors claimed that rumours can be detected using aggregate analysis of the stance expressed in tweets.

Recent research put together in a special issue on rumours and social media BIBREF17 also shows the increasing interest of the scientific community in the topic. BIBREF18 proposed an agenda for research that establishes an interdisciplinary methodology to explore in full the propagation and regulation of unverified content on social media. BIBREF19 described an approach for geoparsing social media posts in real time, which can help determine the veracity of rumours by tracking down the poster's location. The contribution of BIBREF20 to rumour resolution is an automated system that rates the level of trust of users in social media, hence enabling users with low reputation to be discounted. Complementary to these approaches, our objective is to determine the stance of tweets towards a rumour, which can then be aggregated to establish an overall veracity score for the rumour.

Another study that shows insightful conclusions with respect to stance towards rumours is that by Procter et al. BIBREF4. The authors conducted an analysis of a large dataset of tweets related to the riots in the UK, which took place in August 2011. The dataset collected in the riots study is one of the two used in our experiments, and we describe it in more detail in the Datasets section. After grouping the tweets into topics, where each represents a rumour, they were manually categorised into different classes, namely: media reports, which are tweets sent by mainstream media accounts or journalists connected to media; pictures, being tweets uploading a link to images; rumours, being tweets claiming or counter-claiming something without giving any source; and reactions, consisting of tweets that are responses of users to the riots phenomenon or a specific event related to the riots. Besides the categorisation of tweets by type, Procter et al. also manually categorised the accounts posting tweets into different types, such as mainstream media, online-only media, activists, celebrities, and bots, among others. What is interesting for the purposes of our work is that the authors observed the following four-step pattern recurrently occurring across the collected rumours: (1) a rumour is initiated by someone claiming it may be true; (2) the rumour spreads together with its reformulations; (3) counter claims appear; and (4) a consensus emerges about the credibility of the rumour. This leads the authors to the conclusion that the process of inter-subjective sense making by Twitter users plays a key role in exposing false rumours. This finding, together with subsequent work by Tolmie et al. into the conversational characteristics of microblogging BIBREF6, has motivated our research into automating stance classification as a methodology for accelerating this process.

Qazvinian et al. BIBREF10 conducted early work on rumour stance classification. They introduced a system that analyzes a set of tweets associated with a
given topic, predefined by the user. Their system would then classify each of the tweets as supporting, denying, or questioning a tweet. We have adopted this scheme in terms of the different types of stance in the work we report here. However, their work ended up merging denying and questioning tweets for each rumour into a single class, converting it into a 2-way classification problem of supporting vs. denying-or-questioning. Instead, we keep those classes separate and, following Procter et al., we conduct a 3-way classification BIBREF21. Another important characteristic that differentiates Qazvinian et al.'s work from ours is that they looked at support and denial on long-standing rumours, such as the fact that many people conjecture whether Barack Obama is a Muslim or not. By contrast, we look at rumours that emerge in the context of fast-paced breaking news situations, where new information is released piecemeal, often with statements that employ hedging words such as "reportedly" or "according to sources" to make it clear that the information is not fully verified at the time of posting. This is a very different scenario from that in Qazvinian et al.'s work, as the emergence of rumourous reports can lead to sudden changes in vocabulary, leading to situations that might not have been observed in the training data.

Another aspect that we deal with differently in our work, aiming to make it more realistically applicable to a real-world scenario, is that we apply the method to each rumour separately. Ultimately, our goal is to classify new, emerging rumours, which can differ from what the classifier has observed in the training set. Previous work ignored this separation of rumours by pooling together tweets from all the rumours in their collections, both in training and test data. By contrast, we consider the rumour stance classification problem as a form of transfer learning and seek to classify unseen rumours by training the classifier on previously labelled rumours. We argue that this makes for a more realistic classification scenario towards implementing a real-world rumour-tracking system.

Following a short gap, there has been a burst of renewed interest in this task since 2015. For example, Liu et al. BIBREF9 introduced rule-based methods for stance classification, which were shown to outperform the approach by BIBREF10. Similarly, BIBREF7 use regular expressions instead of an automated method for rumour stance classification. Hamidian and Diab BIBREF22 use Tweet Latent Vectors to assess the ability of performing 2-way classification of the stance of tweets as either supporting or denying a rumour. They study the extent to which a model trained on historical tweets can be used for classifying new tweets on the same rumour; this, however, limits the method's applicability to long-running rumours only. The work closest to ours in terms of aims is Zeng et al. BIBREF23, who explored the use of three different classifiers for automated rumour stance classification on unseen rumours. In their case, classifiers were set up on a 2-way classification problem, dealing with tweets that support or deny rumours. In the present work, we extend this research by performing 3-way classification that also deals with tweets that question the rumours. Moreover, we adopt the three classifiers used in their work, namely Random Forest, Naive Bayes, and Logistic Regression, as baselines in our work. Lastly, researchers BIBREF7 BIBREF24 have focused on the related task of detecting rumours in social media. While a rumour detection system could well be the step that is applied prior to our
stance classification system, here we assume that rumours have already been identified, in order to focus on the subsequent step of determining stances.

Individual tweets may discuss the same rumour in different ways, where each user expresses their own stance towards the rumour. Within this scenario, we define the tweet-level rumour stance classification task as that in which a classifier has to determine the stance of each tweet towards the rumour. More specifically, given the tweet $t_i$ as input, the classifier has to determine which element of the set $Y = \lbrace \textrm{supporting}, \textrm{denying}, \textrm{questioning}\rbrace$ applies to the tweet: $y(t_i) \in Y$. Here we define the task as a supervised classification problem, where the classifier is trained on a labelled set of tweets and is applied to tweets on a new, unseen set of rumours.

Let $R$ be a set of rumours, each of which consists of tweets discussing it: $\forall r \in R$, $T_r = \lbrace t_{r,1}, \cdots, t_{r,n_r}\rbrace$. $T = \cup_{r \in R} T_r$ is the complete set of tweets from all rumours. Each tweet is classified as supporting, denying, or questioning with respect to its rumour: $y(t_i) \in \lbrace s, d, q\rbrace$.

We formulate the problem in two different settings. First, we consider the Leave One Out (LOO) setting, which means that for each rumour $r \in R$ we construct the test set equal to $T_r$ and the training set equal to $T \setminus T_r$. This is the most challenging scenario, where the test set contains an entirely unseen rumour. The second setting is Leave Part Out (LPO). In this formulation, a very small number of initial tweets from the target rumour is added to the training set: $\lbrace t_{r,1}, \cdots, t_{r,k}\rbrace$. This scenario becomes applicable typically soon after a rumour breaks out and journalists have started monitoring and analysing the related tweet stream. The experimental section investigates how the number of initial training tweets influences classification performance on a fixed test set, namely $\lbrace t_{r,l}, \cdots, t_{r,n_r}\rbrace$, $l > k$.

The tweet-level stance classification problem here assumes that tweets from the training set are already labelled with the rumour discussed and the attitude expressed towards it. This information can be acquired either via manual annotation as part of expert analysis, as is the case with our dataset, or automatically, e.g., using pattern-based rumour detection BIBREF7. Our method is then used to classify the stance expressed in each new tweet from the test set. The two evaluation settings are illustrated in the sketch below.
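The following is a minimal sketch of how the LOO and LPO splits can be generated; the function names and the `k`/`l` defaults are ours, for illustration:

```python
def loo_split(tweets_by_rumour, target):
    """Leave One Out: train on all rumours except the target, test on the target."""
    train = [t for r, ts in tweets_by_rumour.items() if r != target for t in ts]
    test = list(tweets_by_rumour[target])
    return train, test

def lpo_split(tweets_by_rumour, target, k=10, l=20):
    """Leave Part Out: additionally reveal the first k tweets of the target
    rumour to the classifier; evaluate on tweets from position l onwards (l > k),
    so the test set stays fixed while k varies."""
    train, _ = loo_split(tweets_by_rumour, target)
    target_tweets = list(tweets_by_rumour[target])
    train += target_tweets[:k]
    test = target_tweets[l:]
    return train, test

# Usage: tweets_by_rumour maps a rumour id to its chronologically ordered tweets.
# for rumour in tweets_by_rumour:
#     train, test = lpo_split(tweets_by_rumour, rumour, k=10)
```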
The classifier needs to be able to deal with a test set where the distribution of classes can be very different from that observed in the training set. Thus, we perform 7-fold cross-validation in the experiments, each fold having six rumours in the training set and the remaining rumour in the test set. The seven rumours were as follows BIBREF4:

- Rioters had attacked London Zoo and released the animals.
- Rioters were gathering to attack Birmingham's Children's Hospital.
- Rioters had set the London Eye on fire.
- Police had beaten a sixteen-year-old girl.
- The Army was being mobilised in London to deal with the rioters.
- Rioters had broken into a McDonald's and set about cooking their own food.
- A store belonging to the Miss Selfridge retail group had been set on fire in Manchester.

Additionally, we use another rumour dataset associated with five different events, collected as part of the PHEME FP7 research project and described in detail in BIBREF5, BIBREF25. Note that the authors released datasets for nine events, but here we remove the non-English datasets as well as the small English datasets, each of which includes only 1 rumour, as opposed to the 40 rumours in each of the datasets that we use. We summarise the details of the five events in Table 3. In contrast to the England riots dataset, the PHEME datasets were collected by tracking conversations initiated by rumourous tweets. This was done in two steps. First, we collected tweets that contained a set of keywords associated with a story unfolding in the news; we refer to the latter as an event. Next, we sampled the most retweeted tweets, on the basis that a rumour, by definition, should be a circulating story which produces sufficient skepticism or anxiety. This allows us to filter potentially rumourous tweets and collect the conversations initiated by them. Conversations were tracked by collecting replies to tweets; therefore, unlike the England riots dataset, this dataset also comprises replying tweets by definition. This is an important characteristic of the dataset, as one would expect replies to be generally shorter and potentially less descriptive than the source tweets that initiated the conversation; we take this difference into consideration when analysing our results. This dataset includes tweets associated with the following five events:

- Ferguson unrest: citizens of Ferguson, Missouri, USA, protested after the fatal shooting of an 18-year-old African American, Michael Brown, by a white police officer on August 9, 2014.
- Ottawa shooting: shootings occurred on Ottawa's Parliament Hill in Canada, resulting in the death of a Canadian soldier, on October 22, 2014.
- Sydney siege: a gunman held ten customers and eight employees of a Lindt chocolate café at Martin Place in Sydney, Australia, hostage on December 15, 2014.
- Charlie Hebdo shooting: two brothers forced their way into the offices of the French satirical weekly newspaper Charlie Hebdo in Paris, killing 11 people and wounding 11 more, on January 7, 2015.
- Germanwings plane crash: a passenger plane from Barcelona to Düsseldorf crashed in the French Alps on March 24, 2015, killing all passengers and crew on board; the plane was ultimately found to have been deliberately crashed by its co-pilot.

In this case, we perform 5-fold cross-validation, with four events in the training set and the remaining event in the test set for each fold.
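A minimal sketch of how the LOO and LPO splits and the cross-validation folds defined above can be constructed. The `rumours` mapping and its layout are hypothetical stand-ins for the actual corpora:

```python
from itertools import chain

def loo_lpo_split(rumours, target, k=0, l=None):
    """Train/test split for the LOO (k=0) and LPO (k>0) settings.

    rumours: hypothetical dict mapping rumour id -> ordered list of
             labelled tweets (tweet, stance) for that rumour.
    target:  id of the target rumour r.
    k:       number of initial target-rumour tweets added to training.
    l:       index where the fixed test set starts (l > k), so the test
             set stays constant while k varies.
    """
    if l is None:
        l = k
    # Reference rumours: all labelled tweets from rumours other than r.
    train = list(chain.from_iterable(
        tweets for rid, tweets in rumours.items() if rid != target))
    train += rumours[target][:k]        # LPO: first k target-rumour tweets
    test = rumours[target][l:]          # fixed, unseen part of the target
    return train, test

def folds(rumours, k=0):
    """Cross-validation over rumours: each fold leaves one rumour out."""
    for target in rumours:
        yield target, loo_lpo_split(rumours, target, k)
```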
This section details the features and evaluation measures used in our experiments on tweet-level stance classification. We begin by describing the classifiers used in our experimentation, including Gaussian Processes as well as a set of competitive baseline classifiers used for comparison.

Gaussian Processes are a Bayesian non-parametric machine learning framework that has been shown to work well for a range of NLP problems, often beating other state-of-the-art methods BIBREF26, BIBREF27, BIBREF28, BIBREF29. A Gaussian Process defines a prior over functions which, combined with the likelihood of the data points, gives rise to a posterior over functions explaining the data. The key concept is the kernel function, which specifies how outputs correlate as a function of the inputs; thus, from a practitioner's point of view, a key step is to choose an appropriate kernel function capturing the similarities between inputs. We use Gaussian Processes because this probabilistic kernelised framework avoids the need for expensive cross-validation for hyperparameter selection; instead, the marginal likelihood of the data can be used. The central concept of Gaussian Process Classification (GPC) BIBREF30 is a latent function $f$ over inputs $\mathbf{x}$: $f(\mathbf{x}) \sim \mathcal{GP}(m(\mathbf{x}), k(\mathbf{x}, \mathbf{x}'))$, where $m$ is the mean function, assumed to be zero, and $k$ is the kernel function, specifying the degree to which the outputs covary as a function of the inputs. We use a linear kernel, $k(\mathbf{x}, \mathbf{x}') = \sigma^2 \mathbf{x}^\top \mathbf{x}'$. The latent function is then mapped by the probit function $\Phi(f)$ into the range $[0, 1]$, such that the resulting value can be interpreted as $p(y = 1 \mid \mathbf{x})$. The GPC posterior is calculated as
$$p(f_* \mid X, \mathbf{y}, \mathbf{x}_*) = \int p(f_* \mid X, \mathbf{x}_*, \mathbf{f}) \, \frac{p(\mathbf{y} \mid \mathbf{f})\, p(\mathbf{f})}{p(\mathbf{y} \mid X)} \, d\mathbf{f},$$
where $p(\mathbf{y} \mid \mathbf{f}) = \prod_{j=1}^{n} \Phi(f_j)^{y_j} \left(1 - \Phi(f_j)\right)^{1 - y_j}$ is the Bernoulli likelihood of class $y$. After calculating the above posterior from the training data, it is used in prediction, i.e.,
$$p(y_* = 1 \mid X, \mathbf{y}, \mathbf{x}_*) = \int \Phi(f_*) \, p(f_* \mid X, \mathbf{y}, \mathbf{x}_*) \, df_*.$$

The above integrals are intractable, and approximation techniques are required to solve them. Various methods exist for calculating the posterior; here, we use Expectation Propagation (EP) BIBREF31. In EP, the posterior is approximated by a fully factorised distribution, where each component is assumed to be an unnormalised Gaussian. In order to conduct multi-class classification, we perform one-vs-all classification for each label and then assign the label with the highest likelihood amongst the three (supporting, denying, questioning). We choose this method due to the interpretability of its results, similar to recent work on occupational class classification BIBREF29.
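The paper's own implementation relies on the GPy toolkit with EP; purely as an illustration, the same one-vs-all GP classification scheme with a linear kernel can be sketched with scikit-learn, whose GaussianProcessClassifier uses a Laplace approximation rather than EP, so this approximates the setup rather than reproducing the authors' exact method. The toy data here stands in for the real feature extraction:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import ConstantKernel, DotProduct

# X_train: tweet feature vectors (e.g. Brown-cluster counts); y_train: stances.
rng = np.random.default_rng(0)
X_train = rng.random((20, 50))
y_train = rng.choice(["s", "d", "q"], size=20)

# sigma^2 * x^T x' -- the linear kernel used in the paper; hyperparameters
# are set by maximising the marginal likelihood, as in the GP framework.
kernel = ConstantKernel(1.0) * DotProduct(sigma_0=0.0)
gpc = GaussianProcessClassifier(kernel=kernel, multi_class="one_vs_rest")
gpc.fit(X_train, y_train)
probs = gpc.predict_proba(X_train)  # per-class likelihoods; argmax = label
```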
In the Leave-Part-Out (LPO) setting, a few initial labelled tweets from the target rumour are observed as well, as opposed to the Leave-One-Out (LOO) setting. In the case of LPO, we propose to weigh the importance of tweets from the reference rumours depending on how similar their characteristics are to the tweets from the target rumour available for training. To handle this with GPC, we use a multiple-output model based on the Intrinsic Coregionalisation Model (ICM) BIBREF32. This model has already been applied successfully to NLP regression problems BIBREF28, and it can also be applied to classification. ICM parametrizes the kernel by a matrix which represents the extent of covariance between pairs of tasks. The complete kernel takes the form
$$k\big((\mathbf{x}, d), (\mathbf{x}', d')\big) = k_{\text{data}}(\mathbf{x}, \mathbf{x}') \, B_{d, d'},$$

where $B$ is a square coregionalisation matrix, $d$ and $d'$ denote the tasks of the two inputs, and $k_{\text{data}}$ is a kernel for comparing the inputs $\mathbf{x}$ and $\mathbf{x}'$ (here, linear). We parametrize the coregionalisation matrix as $B = \boldsymbol{\kappa} I + \mathbf{v}\mathbf{v}^\top$, where $\mathbf{v}$ specifies the correlation between tasks and the vector $\boldsymbol{\kappa}$ controls the extent of task independence.
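A minimal NumPy sketch of this ICM kernel, assuming a linear data kernel and one integer task id per rumour; the names are illustrative and not taken from the authors' code (which uses the GPy toolkit):

```python
import numpy as np

def icm_kernel(X, Xp, d, dp, v, kappa, sigma2=1.0):
    """k((x, d), (x', d')) = k_data(x, x') * B[d, d'], B = diag(kappa) + v v^T.

    X, Xp : (n, D), (m, D) input feature matrices
    d, dp : (n,), (m,) integer task ids (which rumour each tweet belongs to)
    v     : (T,) task-correlation vector; kappa : (T,) task-independence terms
    """
    B = np.diag(kappa) + np.outer(v, v)   # coregionalisation matrix
    k_data = sigma2 * (X @ Xp.T)          # linear kernel sigma^2 x^T x'
    return k_data * B[np.ix_(d, dp)]      # element-wise task weighting
```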
Note that in the case of the LOO setting this model does not provide useful information, since no target-rumour data is available to estimate similarity to other rumours. We tune the hyperparameters $\mathbf{v}$, $\boldsymbol{\kappa}$ and $\sigma^2$ by maximizing the evidence of the model, $p(\mathbf{y} \mid X)$, thus having no need for a validation set. We consider GPs in three different settings, varying in what data the model is trained on and what kernel it uses. The first setting, denoted GP, considers only target-rumour data for training. The second, GP-Pooled, additionally considers tweets from reference rumours (i.e., rumours other than the target). The third setting is GP-ICM, where an ICM kernel is used to weight the influence of tweets from the reference rumours.

To assess and compare the performance of Gaussian Processes for rumour stance classification, we also experimented with five baseline classifiers, all implemented using the scikit-learn Python package BIBREF33: (1) a majority classifier, a naive classifier that labels all instances in the test set with the most common class in the training set; (2) logistic regression (MaxEnt); (3) support vector machines (SVM); (4) Naive Bayes (NB); and (5) random forests (RF). The selection of these baselines is in line with the classifiers used in recent research on stance classification BIBREF23, which found that random forests, followed by logistic regression, performed best.

We conducted a series of preprocessing steps in order to address data sparsity: all words were converted to lowercase, stopwords were removed, all emoticons were replaced by words, and stemming was performed. In addition, multiple occurrences of a character were replaced with a double occurrence BIBREF34, to correct for misspellings and lengthenings, e.g., "looool". All punctuation was also removed, except for "!" and "?", which we hypothesize to be important for expressing emotion. Lastly, usernames were removed, as they tend to be rumour-specific, i.e., very few users comment on more than one rumour. After preprocessing the text data, we use either the resulting bag-of-words (BOW) feature representation or replace all words with their Brown cluster ids (Brown). Brown clustering is a hard hierarchical clustering method BIBREF35; it clusters words by maximizing the probability of the words under a bigram language model, where words are generated based on their clusters. Previous work has shown that Brown clusters yield better performance than directly using BOW features BIBREF11. In our experiments, the clusters were obtained using 1000 clusters acquired from a large-scale Twitter corpus BIBREF36, from which we can learn Brown clusters representing a generalisable Twitter vocabulary. Retweets are removed from the training set to prevent bias BIBREF37. More details on the Brown clusters we used, as well as the words belonging to each cluster, are available online. During experimentation we also tested additional features, including bag-of-words instead of Brown clusters, as well as word embeddings trained on the training sets BIBREF38; however, the results turned out to be substantially poorer than those obtained with Brown clusters. We conjecture that this was due to the limited data available for training the word embeddings; further exploration of word embeddings trained on larger datasets is left for future work.
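A sketch of the preprocessing pipeline described above; the stopword list, emoticon mapping and Brown-cluster table are stand-ins for the actual resources used, and stemming is omitted for brevity:

```python
import re

STOPWORDS = {"the", "a", "of"}            # stand-in for the real list
EMOTICONS = {":)": "happy", ":(": "sad"}  # stand-in emoticon-to-word map

def preprocess(tweet, brown_clusters=None):
    text = tweet.lower()
    for emo, word in EMOTICONS.items():
        text = text.replace(emo, " %s " % word)
    text = re.sub(r"@\w+", " ", text)            # drop rumour-specific usernames
    # Squeeze character runs to length two, e.g. "looool" -> "lool".
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)
    # Drop punctuation except the emotion-bearing "!" and "?".
    text = re.sub(r"[^\w\s!?]", " ", text)
    tokens = [t for t in text.split() if t not in STOPWORDS]
    if brown_clusters:                           # replace words by cluster ids
        tokens = [brown_clusters.get(t, t) for t in tokens]
    return tokens
```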
In order to focus on our main objective of proving the effectiveness of a multi-task learning approach, as well as for clarity (the number of approaches shown in the figures doubles if we also consider the BOW features), we only show results for the classifiers relying on Brown clusters as features.

Accuracy is often deemed a suitable evaluation measure for assessing the performance of a classifier on a multi-class classification task. However, the classes are clearly imbalanced in our case, with varying tendencies towards one of the classes in each of the rumours. We argue that in these scenarios evaluation based solely on accuracy is insufficient, and a further measure is needed to account for category imbalance. This is especially necessary here, as a classifier that always predicts the majority class of an imbalanced dataset will achieve high accuracy even if it is useless in practice. To tackle this, we use both micro-averaged and macro-averaged F1 scores. Note that the micro-averaged F1 score is equivalent to the well-known accuracy measure, while the macro-averaged F1 score complements it by assigning the same weight to each category. Both measures rely on precision (Eq. 50) and recall (Eq. 51) to compute the final F1 score:

$$\text{Precision}_k = \frac{tp_k}{tp_k + fp_k} \quad \text{(Eq. 50)}$$

$$\text{Recall}_k = \frac{tp_k}{tp_k + fn_k} \quad \text{(Eq. 51)}$$

where $tp_k$ (true positives) is the number of instances correctly classified into class $k$, $fp_k$ is the number of instances incorrectly classified into class $k$, and $fn_k$ is the number of instances that actually belong to class $k$ but were not classified as such. The above equations compute precision and recall for a specific class. Precision and recall over all $c$ classes are computed differently depending on whether they are micro-averaged (Eqs. 52 and 53) or macro-averaged (Eqs. 54 and 55):

$$\text{Precision}_{\text{micro}} = \frac{\sum_{k=1}^{c} tp_k}{\sum_{k=1}^{c} tp_k + \sum_{k=1}^{c} fp_k} \quad \text{(Eq. 52)}$$

$$\text{Recall}_{\text{micro}} = \frac{\sum_{k=1}^{c} tp_k}{\sum_{k=1}^{c} tp_k + \sum_{k=1}^{c} fn_k} \quad \text{(Eq. 53)}$$

$$\text{Precision}_{\text{macro}} = \frac{1}{c}\sum_{k=1}^{c} \text{Precision}_k \quad \text{(Eq. 54)}$$

$$\text{Recall}_{\text{macro}} = \frac{1}{c}\sum_{k=1}^{c} \text{Recall}_k \quad \text{(Eq. 55)}$$

After computing micro-averaged and macro-averaged precision and recall, the final F1 score is computed in the same way in both cases, i.e., as the harmonic mean of the precision and recall in question (Eq. 56):

$$\text{F1} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \quad \text{(Eq. 56)}$$

After computing the F1 score for each fold, we compute the micro-averaged score across folds.
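These measures are available off the shelf; a minimal check with scikit-learn (already used for the baselines) on hypothetical predictions:

```python
from sklearn.metrics import f1_score

y_true = ["s", "s", "d", "q", "s", "d"]   # gold stances (toy example)
y_pred = ["s", "s", "s", "q", "d", "d"]   # classifier output

micro = f1_score(y_true, y_pred, average="micro")  # equals accuracy
macro = f1_score(y_true, y_pred, average="macro")  # equal weight per class
print("micro-F1 = %.3f, macro-F1 = %.3f" % (micro, macro))
```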
First, we look at the results on each dataset separately; then we complement the analysis by aggregating the results from both datasets, which leads to a further understanding of the performance of our classifiers on rumour stance classification. We show the results for the LOO and LPO settings in the same figures, distinguished by the training size displayed on the X axis. In all cases, labelled tweets from the remainder of the rumours (rumours other than the test/target rumour) are used for training, and hence the training size shown on the X axis is in addition to those. Note that the training size refers to the number of labelled instances the classifier makes use of from the target rumour; thus, a training size of 0 indicates the LOO setting, while training sizes from 10 to 50 pertain to the LPO setting.

Figure 1 and Table 4 show how micro-averaged and macro-averaged F1 scores for the England riots dataset change as the number of tweets from the target rumour used for training increases. We observe that, as initially expected, the performance of most of the methods improves as the number of labelled training instances from the target rumour increases. This increase is especially remarkable for the GP-ICM method, which improves gradually after having as few as 10 training instances, and keeps improving as the number of training instances approaches 50. Two aspects stand out from analysing GP-ICM's performance. First, it performs poorly in terms of micro-averaged F1 when no labelled instances from the target rumour are used; however, it makes very effective use of the labelled training instances, overtaking the rest of the approaches and achieving the best results. This proves the ability of GP-ICM to make the most of the labelled instances from the target rumour, which the rest of the approaches struggle with. Second, irrespective of the number of labelled instances, GP-ICM is robust when evaluated in terms of macro-averaged F1. This means that GP-ICM manages to determine the distribution of classes effectively, assigning labels to instances in the test set in a way that is better distributed than the rest of the classifiers. Despite the saliency of GP-ICM, we notice that two other baseline approaches, namely MaxEnt and RF, achieve competitive results above the rest of the baselines, but still perform worse than GP-ICM.

The results for the PHEME dataset are shown in Figure 2 and Table 5. Overall, results are lower here than for the riots dataset. The reason can be attributed to two observations: on the one hand, each fold pertaining to a different event in the PHEME dataset means that the classifier encounters a new event at classification time, where it will likely find new vocabulary that is more difficult to classify; on the other hand, the PHEME dataset is more prominently composed of tweets replying to others, which are likely shorter and less descriptive on their own, and hence more difficult to extract meaningful features from. Despite the additional difficulty of this dataset, we are interested in exploring whether the same trend holds across classifiers, from which we can generalise the analysis to different types of classifiers. One striking difference with respect to the riots results is that here the classifiers, including GP-ICM, do not gain as much from the inclusion of labelled instances from the target rumour. This is likely due to the heterogeneity of each of the events in the PHEME dataset: a diverse set of rumourous, newsworthy pieces of information are discussed, pertaining to the selected events as they unfold. By contrast, each rumour in the riots dataset is more homogeneous, as it focuses on a specific story. Interestingly, when we compare the performance of the different classifiers, we observe that GP-ICM again outperforms the rest of the approaches, both in terms of micro-averaged and macro-averaged F1 scores. While the micro-averaged F1 score does not increase with the number of training instances, we can see a slight improvement in terms of macro-averaged F1. This improvement suggests that GP-ICM does still take advantage of the labelled training instances to boost performance, in this case by better distributing the predicted labels. Again, as for the riots dataset, two baselines stand out: MaxEnt and RF. They are very close to the performance of GP-ICM on the PHEME dataset, even outperforming it on a few occasions. In the following subsection we take a closer look at the differences among the three classifiers.
We delve into the results of the best-performing classifiers, namely GP-ICM, MaxEnt and RF, looking at their per-class performance. This will help us understand where they perform well and where it is that GP-ICM stands out, achieving the best results. Tables 6 and 7 show per-class F1 measures for these three classifiers on the England riots dataset and the PHEME dataset, respectively. They also show statistics of the misclassifications, in the form of the percentage of deviations towards the other classes. Looking at the per-class analysis, we observe that the performance of GP-ICM varies between Precision and Recall. Still, in all dataset-class pairs, GP-ICM performs best in terms of either Precision or Recall, though never in both. Moreover, it is generally the best in terms of F1, achieving the best balance of Precision and Recall; the only exception is MaxEnt classifying questioning tweets more accurately in terms of F1 on the England riots dataset. When we look at the deviations, we see that all the classifiers suffer from the datasets being imbalanced towards supporting tweets. This results in all classifiers labelling numerous instances as supporting when they are actually denying or questioning. This is a known problem in rumour diffusion, as previous studies have found that people barely deny or question rumours and generally tend to support them, irrespective of their actual veracity BIBREF5. While we have found that GP-ICM can tackle the imbalance issue quite effectively, and better than the other classifiers, this caveat highlights the need for further research on dealing with the striking majority of supporting tweets in the context of rumours in social media.
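Per-class F1 and the deviation statistics reported in Tables 6 and 7 can be reproduced from predictions with standard tooling; a sketch, again with scikit-learn and toy labels:

```python
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

labels = ["s", "d", "q"]
y_true = ["s", "s", "d", "q", "s", "d"]
y_pred = ["s", "s", "s", "q", "d", "d"]

print(classification_report(y_true, y_pred, labels=labels))  # per-class P/R/F1

# Row-normalised confusion matrix: entry (i, j) is the fraction of class-i
# instances predicted as class j, i.e. the deviation towards other classes.
cm = confusion_matrix(y_true, y_pred, labels=labels).astype(float)
deviations = cm / cm.sum(axis=1, keepdims=True)
print(np.round(deviations, 2))
```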
Experimentation with two different approaches based on Gaussian Processes (GP and GP-ICM), compared against a set of competitive baselines over two rumour datasets, enables us to gain generalisable insight into rumour stance classification on Twitter. This is reinforced by the fact that the two datasets are very different from each other. The first dataset, collected during the England riots in 2011, covers a single event split into folds, with each fold belonging to a separate rumour within the event; hence all the rumours are part of the same event. The second dataset, collected within the PHEME project, includes tweets for a set of five newsworthy events, where each event has been assigned a separate fold; the classifier therefore needs to learn from four events and test on a new, unknown event, which has proven more challenging. Results are generally consistent across datasets, which allows us to generalise our conclusions well. We observe that while GP by itself does not suffice to achieve competitive results, GP-ICM does boost the performance of the classifier substantially, to the point of outperforming the rest of the baselines in the majority of cases. GP-ICM has proven to perform consistently well on both datasets despite their very different characteristics, being competitive not only in terms of micro-averaged F1 but also in terms of macro-averaged F1. GP-ICM manages to balance the varying class distributions effectively, determining the distribution of classes more accurately than the rest of the baselines. This is very important in rumour stance classification: even if a 100% accurate classifier is unlikely, a classifier that accurately estimates the overall distribution of classes can be of great help. If a classifier makes a good estimation of the number of denials in an aggregated set of tweets, it can be used to flag potentially false rumours with a high level of confidence.

Another factor that stands out for GP-ICM is its capacity to perform well when a few labelled instances of the target rumour are leveraged in the training phase. GP-ICM effectively exploits the knowledge garnered from the few instances from the target rumour, outperforming the rest of the baselines even though its performance was modest when no labelled instances from the target rumour were used. In light of these results, we deem GP-ICM the most competitive approach when one can afford to have a few instances labelled from the target rumour. The labels from the target rumour can be obtained in practice in different ways: (1) having someone in-house (e.g., journalists monitoring breaking news stories) label a few instances prior to running the classifier; (2) making use of human computation resources, such as crowdsourcing platforms, to outsource the labelling work; or (3) developing techniques that attempt to classify the first few instances, incorporating into the training set those for which a high-confidence classification has been produced. The latter presents an ambitious avenue for future work that could help alleviate the labelling task. On the other hand, in the absence of labelled data from the target rumour, which is the case in the LOO setting, the effectiveness of the GP-ICM classifier is not as prominent. For this scenario, other classifiers such as MaxEnt and Random Forests have proven more competitive, and one could see them as better options. However, we believe that the remarkable difference produced by relying on the LPO setting is worth exploiting where possible.

Social media is becoming an increasingly important tool for maintaining social resilience: individuals use it to express opinions and follow events as they unfold; news media organisations use it as a source to inform their coverage of these events; and government agencies, such as the emergency services, use it to gather intelligence to help in decision-making and in advising the public about how they should respond BIBREF1. While previous research has suggested that mechanisms for exposing false rumours are implicit in the ways in which people use social media BIBREF4, it is nevertheless critically important to explore whether computational tools can help to accelerate these mechanisms, so that misinformation and disinformation can be targeted more rapidly and the benefits of social media to society maintained BIBREF8. As a first step towards this aim, this paper has investigated the problem of classifying the different types of stance expressed by individuals in tweets about rumours. First, we considered a setting where no training data from the target rumours is available (LOO); without access to annotated examples of the target rumour, the learning problem becomes very difficult. We showed that, in the supervised domain-adaptation setting (LPO), annotating even a small number of tweets helps to achieve better results. Moreover, we demonstrated the benefits of a multi-task learning approach, as well as the fact that Brown cluster features are more useful for the task than a simple bag of words. Findings from previous work, such as BIBREF39 and BIBREF4, have suggested that the aggregate stance of individual users is correlated with actual rumour veracity. Hence, the next step in our own work will be to make use of the classifier for the stance expressed in the reactions of individual Twitter users, in order to predict the actual veracity of the rumour in question.
Another interesting direction for future work is the addition of non-textual features to the classifier; for example, rumour diffusion patterns BIBREF40 may be a useful cue for stance classification.

This work is partially supported by the European Union under grant agreement No. 611233 (Pheme). The work was implemented using the GPy toolkit BIBREF41. This research utilised Queen Mary's MidPlus computational facilities, supported by QMUL Research-IT and funded by EPSRC grant EP/K000128/1.
Topic Spotting using Hierarchical Networks with Self Attention

Success of deep learning techniques has renewed interest in the development of dialogue systems. However, current systems struggle to have consistent, long-term conversations with users and fail to build rapport. Topic spotting, the task of automatically inferring the topic of a conversation, has been shown to be helpful in making a dialog system more engaging and efficient. We propose a hierarchical model with self attention for topic spotting. Experiments on the Switchboard corpus show the superior performance of our model over previously proposed techniques for topic spotting and deep models for text classification. Additionally, in contrast to offline processing of dialog, we also analyze the performance of our model in a more realistic setting, i.e., an online setting where the topic is identified in real time as the dialog progresses. Results show that our model is able to generalize even with limited information in the online setting.

Recently, a number of commercial conversation systems have been introduced, e.g., Alexa, Google Assistant, Siri and Cortana. Most of the available systems perform well on goal-oriented conversations which span a few utterances in a dialogue. However, with longer conversations in open domains, existing systems struggle to remain consistent and tend to deviate from the current topic during the conversation. This hinders the establishment of long-term social relationships with users BIBREF0. In order to have coherent and engaging conversations with humans, a system, while responding, should take into account the topic of the current conversation, i.e., perform topic spotting, alongside other relevant natural language understanding (NLU) techniques BIBREF1. Topic spotting has been shown to be important in commercial dialog systems BIBREF2, BIBREF3 that deal directly with customers. Topical information is useful for speech recognition systems BIBREF4, as well as in audio document retrieval systems BIBREF5, BIBREF6. The importance of topic spotting can be gauged from the work of the Alexa team BIBREF7, who have proposed topic-based metrics for evaluating the quality of conversational bots; the authors empirically show that topic-based metrics correlate with human judgments. Given the importance of topical information in a dialog system, this paper proposes a self-attention-based hierarchical model for predicting topics in a dialog. We evaluate our model on the Switchboard (SWBD) corpus BIBREF8 and show that it supersedes previously applied techniques for topic spotting. We address the evaluative limitations of the current SWBD corpus by creating a new version of the corpus, referred to as SWBD2, which we hope will provide a new standard for evaluating topic spotting models. We also experiment with an online setting, where we examine the performance of our topic classifier as the length of the dialog is varied, and show that our model can be used in a real-time dialog system as well.

Topic spotting is the task of detecting the topic of a dialog BIBREF5. It has been an active area of research over the past few decades, both in the NLP community and in the speech community. In this section we briefly outline some of the main works in this area; for a detailed survey of prior research, the reader is referred to BIBREF6. Most of the methods proposed for topic spotting use features extracted from transcribed text as input to a classifier, typically Naive Bayes or SVM.
Extracted features include bag of words (BoW), TF-IDF BIBREF9, BIBREF10, n-grams and word co-occurrences BIBREF6, BIBREF11. Some approaches, in addition to word co-occurrence features, incorporate background world knowledge using Wikipedia BIBREF12. In our work, we do not explicitly extract the features but learn them during training. Moreover, unlike previous approaches, we explicitly model the dependencies between utterances via a self-attention mechanism and a hierarchical structure. Topic spotting has been explored in depth in the speech processing community (see, for example, BIBREF13, BIBREF14, BIBREF15, BIBREF16). Researchers in this community have attempted to predict the topic directly from the audio signal using phoneme-based features; however, the performance of word-based models supersedes that of audio models BIBREF5. Recently, there has been a lot of work in the deep learning community on text classification BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21. These deep learning models use either RNN/LSTM-based neural networks BIBREF22 or CNN-based neural networks BIBREF23 for learning representations of words and sentences; we follow a similar approach for topic spotting. Our model is related to the Hierarchical Attention Network (HN-ATT) model proposed by BIBREF24 for document classification. HN-ATT models the document hierarchically: it composes words, with weights determined by a first level of attention, to get sentence representations, and then combines the sentence representations via a second level of attention to get a document representation, which is then used for classification. The aim of this paper is not to improve text classification but to improve topic spotting; the two tasks differ in various aspects. We are among the first to show the use of a hierarchical self-attention (HN-SA) model for topic spotting. It is natural to consider applying text classification techniques to topic spotting; however, as we empirically show in this paper, text classification techniques do not perform well in this setting. Moreover, on a dialog corpus, simple BoW approaches perform better than the more recently proposed HN-ATT model BIBREF24.

We propose a hierarchical model with self attention (HN-SA) for topic spotting. We are given a topic label for each dialog, and we want to learn a model mapping from the space of dialogues to the space of topic labels. We learn a prediction model by minimizing the Negative Log-Likelihood ($\mathcal{NLL}$) of the data. We propose a hierarchical architecture, as shown in Figure 1: an utterance encoder takes each utterance in the dialog and outputs the corresponding utterance representation, and a dialog encoder processes the utterance representations to give a compact vector representation of the dialog, which is used to predict its topic.

Utterance Encoder: Each utterance in the dialog is processed sequentially using a single-layer Bidirectional Long Short-Term Memory (Bi-LSTM) network BIBREF25 and a self-attention mechanism BIBREF26 to get the utterance representation. In particular, given an utterance $u_k = \{\mathbf{w}_{k1}, \mathbf{w}_{k2}, \ldots, \mathbf{w}_{kL}\}$ with one-hot encodings for the tokens, each token is mapped to a vector $\mathbf{v}_{ki} = \mathbf{E}\,\mathbf{w}_{ki}$, $i = 1, 2, \ldots, L$, using the pre-trained embedding matrix $\mathbf{E}$. The utterance representation $\mathbf{s}_k = \mathbf{a}^\top \mathbf{H}^{(1)}$ is the weighted sum of the concatenated forward- and backward-direction hidden states at each step of the Bi-LSTM, $\mathbf{H}^{(1)} = [\mathbf{h}^{(1)}_1, \ldots, \mathbf{h}^{(1)}_L]^\top$, where $\mathbf{h}^{(1)}_i = [\overrightarrow{\mathbf{h}}^{(1)}_i; \overleftarrow{\mathbf{h}}^{(1)}_i] = \mathrm{BiLSTM}(\mathbf{v}_{ki})$. The weights of the combination, $\mathbf{a} = \mathrm{softmax}(\mathbf{h}^{(2)}_a)$, are determined with the self-attention mechanism proposed by BIBREF26, by measuring the similarity between the concatenated hidden states, with $\mathbf{h}^{(2)}_a = \mathbf{W}^{(2)}_a \mathbf{h}^{(1)}_a + \mathbf{b}^{(2)}_a$ and $\mathbf{h}^{(1)}_a = \tanh(\mathbf{W}^{(1)}_a \mathbf{H}^{(1)} + \mathbf{b}^{(1)}_a)$, at each step in the utterance sequence.
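A minimal PyTorch sketch of this utterance encoder; the dimensions follow the hyperparameters reported later in this section, but the class and its names are illustrative, not the authors' released code:

```python
import torch
import torch.nn as nn

class UtteranceEncoder(nn.Module):
    """Bi-LSTM + self-attention utterance encoder, per the equations above."""
    def __init__(self, vocab_size, emb_dim=300, hidden=256, attn_dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.W1 = nn.Linear(2 * hidden, attn_dim)  # h_a^(1) = tanh(W1 H + b1)
        self.W2 = nn.Linear(attn_dim, 1)           # h_a^(2) = W2 h_a^(1) + b2

    def forward(self, tokens):                     # tokens: (batch, L)
        H, _ = self.bilstm(self.emb(tokens))       # (batch, L, 2*hidden)
        scores = self.W2(torch.tanh(self.W1(H)))   # (batch, L, 1)
        a = torch.softmax(scores, dim=1)           # attention over tokens
        return (a * H).sum(dim=1)                  # s_k: (batch, 2*hidden)
```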
Self-attention computes the similarity of a token in the context of an utterance and thus boosts the contribution of keywords to the classifier. It also mitigates the need for a second layer of attention at the dialog level, reducing the number of parameters, reducing the confusion of the classifier (by not trying to re-weigh individual utterances), and reducing the dependence on having all utterances (the full future context) available for an accurate prediction. A simple LSTM-based model (HN) and HN-ATT both perform worse than the model using self attention (see Experiments and Results), indicating the crucial role played by the self-attention mechanism.

Dialog Encoder: The utterance embeddings (representations) are sequentially encoded by a second single-layer Bi-LSTM to get the dialog representation: $\mathbf{h}^{(2)}_k = [\overrightarrow{\mathbf{h}}^{(2)}_k; \overleftarrow{\mathbf{h}}^{(2)}_k] = \mathrm{BiLSTM}(\mathbf{s}_k)$, $k = 1, 2, \ldots, N$. The concatenated bidirectional hidden state corresponding to the last utterance (i.e., the last step of the Bi-LSTM) is used for making a prediction via a linear layer followed by a softmax activation: $p(\mathsf{T} \mid \mathsf{D}) = \mathrm{softmax}(\mathbf{h}_D)$, where $\mathbf{h}_D = \mathbf{W}_f \mathbf{h}^{(2)}_N$.

As in previous work (see Related Work), we use the Switchboard (SWBD) corpus BIBREF8 for training our model. SWBD is a corpus of human-human conversations, created by recording (and later transcribing) telephonic conversations between two participants who were primed with a topic. Table 1 gives the corpus statistics. Topics in SWBD range over a variety of domains, for example politics, health, sports, entertainment and hobbies, making the task of topic spotting challenging. Dialogues in the test set of the original SWBD cover a limited number of topics (12 vs. 66), so the test set is not ideal for evaluating a topic spotting system. We address this shortcoming by creating a new split, which we refer to as SWBD2; the new split provides the opportunity for a more rigorous evaluation of a topic spotting system. SWBD2 was created by removing infrequent topics (those with fewer than 10 dialogues) from the corpus and then randomly moving dialogues between the train/development set and the test set, in order to have instances of each topic in the test set. The majority-class baseline in SWBD2 is around 5%. In the transcribed SWBD corpus, some punctuation symbols have special meanings, and non-verbal sounds have been mapped to special symbols, e.g., Laughter; to preserve the meanings of these special symbols, we performed minimal preprocessing. Dialog corpora differ from text classification corpora (e.g., product reviews): if we roughly equate a dialog to a document and an utterance to a sentence, dialogs are very long documents with short sentences. Moreover, the vocabulary distribution in a dialog corpus is fundamentally different, e.g., in the presence of backchannel words like "uhm" and "ah".

Model Hyperparameters: We use GloVe embeddings BIBREF27 with a dimensionality of 300; the embeddings are updated during training. Each LSTM cell in the utterance and dialog encoders uses a hidden state of dimension 256, and the weight matrices in the attention network have a dimension of 128. The hyperparameters were found by experimenting with the development set. We trained the model by minimizing the cross-entropy loss using the Adam optimizer BIBREF28, with an initial learning rate of 0.001. The learning rate was halved when development-set accuracy did not improve over successive epochs; the model took around 30 epochs to train.
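The halve-on-stagnation rule just described maps directly onto PyTorch's ReduceLROnPlateau scheduler; a sketch (illustrative, not the authors' training script, and reusing the UtteranceEncoder class from the earlier sketch as a stand-in for the full HN-SA model):

```python
import torch

model = UtteranceEncoder(vocab_size=25000)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.5, patience=1)  # halve LR on flat dev acc

for epoch in range(30):
    ...                      # minimize cross-entropy over training dialogs
    dev_acc = 0.0            # placeholder: evaluate on the development set
    scheduler.step(dev_acc)  # scheduler watches development-set accuracy
```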
We compare the performance of our model (Table 2) with classifiers based on traditional bag-of-words (BoW), TF-IDF and n-gram features. We also compare against averaged Skip-Gram BIBREF29, Doc2Vec BIBREF30, CNN BIBREF23, Hierarchical Attention (HN-ATT) BIBREF24 and hierarchical network (HN) models; HN is similar to our model (HN-SA) but without any self attention.

Analysis: As is evident from the experiments on both versions of SWBD, our model (HN-SA) outperforms traditional feature-based topic spotting models and deep-learning-based document classification models. It is interesting to see that the simple BoW and n-gram baselines are quite competitive and outperform some of the deep-learning-based document classification models; a similar observation has been reported by BIBREF31 for the task of sentiment analysis. The task of topic spotting is arguably more challenging than document classification: the number of output classes (66/42 classes in SWBD/SWBD2) is much larger than in document classification (5-6 classes), which is performed mainly on texts from customer reviews; dialogues in SWBD have on average 200 utterances and are much longer texts than customer reviews; and the number of dialogues available for training the model is significantly smaller than the number of customer reviews. We further investigated the performance on SWBD2 by examining the confusion matrix of the model. Figure 2 shows the heatmap of the model's normalized confusion matrix on SWBD2. For most of the classes, the classifier is able to predict accurately; however, the model gets confused between classes that are semantically close (with respect to the terms used), for example the pragmatically similar topic pairs HOBBIES vs. GARDENING, MOVIES vs. TV PROGRAMS, and RIGHT TO PRIVACY vs. DRUG TESTING.

Online Setting: In an online conversational system, a topic spotting model is required to predict the topic accurately and as early as possible during the dialog. We therefore investigated the relationship between dialog length (in number of utterances) and accuracy, which gives an idea of how many utterances are required to reach a desirable level of accuracy. For this experiment, we varied the length of the dialogues from the test set that was available to the model for making a prediction: we created sub-dialogues of length starting at 1/32 of the dialog length and increasing in multiples of 2, up to the full dialog. Figure 2 shows both the absolute accuracy and the accuracy relative to that on the full dialog. With just the first 3.125% of the utterances available, the model already reaches 72% of its full-dialog accuracy. This may be partly because the first few utterances of a discussion explicitly talk about the topic. However, as noted above, SWBD covers many topics that are semantically close to each other yet assigned distinct classes, so it remains challenging to predict the topic with the same model. By the time the system has processed half the dialog in SWBD2, it is already within 99% of the accuracy of the full system. The experiment shows the feasibility of using the model in an online setting, where the model predicts the topic with increasing confidence as the conversation progresses.
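A sketch of the online evaluation protocol described above, assuming a hypothetical `predict_topic` function wrapping the trained model:

```python
def online_accuracy(dialogs, predict_topic, fraction):
    """Accuracy when only the first `fraction` of each dialog is visible.

    dialogs: list of (list_of_utterances, gold_topic) pairs.
    predict_topic: callable mapping a list of utterances to a topic label
                   (hypothetical wrapper around the trained HN-SA model).
    """
    correct = 0
    for utterances, gold in dialogs:
        prefix = utterances[:max(1, int(len(utterances) * fraction))]
        correct += (predict_topic(prefix) == gold)
    return correct / len(dialogs)

# Sub-dialogues from 1/32 of the length up to the full dialog, doubling each time.
fractions = [1 / 32 * 2 ** i for i in range(6)]   # 1/32, 1/16, ..., 1
```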
In this paper, we presented a hierarchical model with self attention for topic spotting. The model outperforms conventional topic spotting techniques as well as deep learning techniques for text classification. We empirically showed that the proposed model can also be used in an online setting. We also introduced a new version of the SWBD corpus, SWBD2, which we hope will serve as a new standard for evaluating topic spotting models. Moving forward, we would like to explore a more realistic multi-modal topic spotting system; such a system should fuse two modalities, audio and transcribed text, to make topic predictions.

We would like to thank the anonymous reviewers for their insightful comments. Mubbasir Kapadia has been funded in part by NSF IIS-1703883, NSF S&AS-1723869 and DARPA SocialSim W911NF-17-C-0098.
Learning Personalized End-to-End Goal-Oriented Dialog

There has been growing research interest in training dialog systems with end-to-end models BIBREF0, BIBREF1, BIBREF2 in recent years. These models are trained directly on past dialogs, without assumptions about the domain or the dialog state structure BIBREF3. One of their limitations is that they select responses based only on the content of the conversation and are thus incapable of adapting to users with different personalities. Specifically, common issues with such content-based models include (i) the inability to adjust language style flexibly BIBREF4; (ii) the lack of a dynamic conversation policy based on the interlocutor's profile BIBREF5; and (iii) the incapability of handling ambiguities in user requests. Figure FIGREF1 illustrates these problems with an example conversation in a restaurant reservation scenario. First, the responses from the content-based model are plain and boring, and it cannot adjust appellations and language styles the way the personalized model does. Second, in the recommendation phase, the content-based model can only provide candidates in a random order, while a personalized model can change its recommendation policy dynamically, in this case matching the user's dietary requirements. Third, the word "contact" can be interpreted as either phone or social media contact information in the knowledge base; instead of choosing one randomly, the personalized model handles this ambiguity based on the learned fact that young people prefer a social media account while the elderly prefer a phone number. Psychologists have shown that during a dialog humans tend to adapt to their interlocutor to facilitate understanding, which enhances conversational efficiency BIBREF6, BIBREF7, BIBREF8. To improve agent intelligence, we may polish our models to learn such human behaviors in conversations. A big challenge in building personalized dialog systems is how to utilize the user profile and generate personalized responses correspondingly. To overcome it, existing works BIBREF9, BIBREF4 often conduct extra procedures to incorporate personalization in training, such as intermediate supervision and pre-training of user profiles, which are complex and time-consuming. In contrast, our work is totally end-to-end. In this paper, we propose a Profile Model and a Preference Model to leverage user profiles and preferences. The Profile Model learns user personalities with distributed profile representations, and uses a global memory to store conversation context from other users with similar profiles; in this way, it can choose a proper language style and change the recommendation policy based on the user profile. To address the problem of ambiguity, the Preference Model learns user preferences among ambiguous candidates by building a connection between the user profile and the knowledge base. Since the two models are both built under the MemN2N framework and contribute to personalization in different aspects, we combine them into the Personalized MemN2N. Our experiments on a goal-oriented dialog corpus, the personalized bAbI dialog dataset, show that leveraging personal information can significantly improve the performance of dialog systems: the Personalized MemN2N outperforms current state-of-the-art methods with over 7% improvement in terms of per-response accuracy. A test with real human users also illustrates that the proposed model leads to better outcomes, including a higher task completion rate and user satisfaction.
End-to-end neural approaches to building dialog systems have attracted increasing research interest. It is well accepted that conversation agents include goal-oriented dialog systems and non-goal-oriented chit-chat bots. Generative recurrent models like Seq2Seq have shown promising performance in non-goal-oriented chit-chat BIBREF10, BIBREF11, BIBREF12. More recently, retrieval-based models using a memory network framework have shown their potential in goal-oriented systems BIBREF2, BIBREF3. Although steady progress has been made, there are still issues to be addressed: most existing models are content-based, unaware of the interlocutor's profile, and thus incapable of adapting to different kinds of users. Considerable research effort has been devoted to making conversational agents smarter by incorporating user profiles.

Personalized Chit-Chat: The first attempt to model persona is BIBREF13, which proposes an approach to assign a specific personality and conversation style to agents based on learned persona embeddings. BIBREF14 describe an interesting approach that uses multi-task learning with personalized text data. Some researchers attempt to introduce personalized information into dialogs via transfer learning BIBREF15, BIBREF16. Since there is usually no explicit personalized information in the conversation context, existing models BIBREF9, BIBREF4 often require extra procedures to incorporate personalization in training: BIBREF9 add intermediate supervision to learn when to employ the user profile, and BIBREF4 pre-train the user profile with an external service. This work, in contrast, is totally end-to-end. A common approach to leveraging personality in these works is using a conditional language model as the response decoder BIBREF17, BIBREF13. This can help assign a personality or language style to chit-chat bots, but it is of no use in goal-oriented dialog systems. Instead of assigning personality to agents BIBREF13, BIBREF14, BIBREF9, our model pays more attention to the user persona and aims to make agents more adaptive to different kinds of interlocutors.

Personalized Goal-Oriented Dialog: As most previous works BIBREF13, BIBREF18, BIBREF9 focus on chit-chat, the combination of personalization and goal-oriented dialog remains unexplored. Recently, a new dataset has been released that enriches research resources for personalization in chit-chat BIBREF19; however, no open dataset allowed researchers to train goal-oriented dialogs with personalized information until the personalized bAbI dialog corpus released by BIBREF5. Our work is in the vein of the memory network models for goal-oriented dialog from BIBREF2 and BIBREF3; we enrich these models by incorporating a profile vector and by using conversation context from users with similar attributes as a global memory.

Since we construct our model based on the MemN2N by BIBREF3, we first briefly recall its structure to facilitate the description of our models. The MemN2N consists of two components: a context memory and next-response prediction. As the model conducts a conversation with the user, utterances from the user and responses from the model are appended to the memory in turn. At any given time step $t$, there are $t$ user utterances and $t - 1$ model responses in the history, and the aim at time $t$ is to retrieve the next response.

Memory Representation: Following BIBREF20, we represent each utterance as a bag of words using the embedding matrix $\mathbf{A}$, and the context memory $\mathbf{m}$ is represented as a vector of utterances:

$$\mathbf{m} = \big(\mathbf{A}\Phi(c^u_1), \mathbf{A}\Phi(c^r_1), \ldots, \mathbf{A}\Phi(c^u_{t-1}), \mathbf{A}\Phi(c^r_{t-1})\big),$$

where $\Phi(\cdot)$ maps the utterance to a bag of dimension $V$ (the vocabulary size) and $\mathbf{A}$ is a $d \times V$ matrix, in which $d$ is the embedding dimension.
So far, information about which speaker spoke an utterance, and when during the conversation it was spoken, is not included in the contents of the memory. We therefore encode this information in the mapping $\Phi$ by extending the vocabulary to contain $T$ extra time features, which encode the index of an utterance into the bag of words, and two more features encoding whether the speaker is the user or the bot. The last user utterance $c^u_t$ is encoded into $\mathbf{q} = \mathbf{A}\Phi(c^u_t)$, which also serves as the initial query at time $t$, using the same matrix $\mathbf{A}$.

Memory Operation: The model first reads the memory to find parts of the previous conversation relevant to response selection. The match between $\mathbf{q}$ and the memory slots is computed by taking the inner product followed by a softmax, $p_i = \mathrm{softmax}(\mathbf{q}^\top \mathbf{m}_i)$, which yields a vector of attention weights. Subsequently, the output vector is constructed as $\mathbf{o} = \mathbf{R}\sum_i p_i \mathbf{m}_i$, where $\mathbf{R}$ is a $d \times d$ square matrix. In a multi-layer MemN2N framework, the query is then updated with $\mathbf{q}^{(2)} = \mathbf{q} + \mathbf{o}$. Therefore, the memory can be iteratively re-read to look for additional pertinent information, using the updated query $\mathbf{q}^{(2)}$ instead of $\mathbf{q}$, and in general using $\mathbf{q}^{(k)}$ on iteration $k$, with a fixed number of iterations $N$ (termed $N$ hops). Let $\hat{\mathbf{y}}_i = \mathbf{W}\Phi(y_i)$, where $\mathbf{W}$ is another word embedding matrix and $Y$ is a large set of candidate responses, which includes all possible bot utterances and API calls. The final predicted response distribution is then defined as

$$\hat{p} = \mathrm{softmax}\big((\mathbf{q}^{(N+1)})^\top \hat{\mathbf{y}}_1, \ldots, (\mathbf{q}^{(N+1)})^\top \hat{\mathbf{y}}_{|Y|}\big),$$

where there are $|Y|$ candidate responses in $Y$.
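A compact NumPy sketch of the read-and-hop operation just described; matrix names follow the equations above, while the bag-of-words encoding $\Phi$ and the candidate set are assumed to be given:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def memn2n_respond(q, memory, candidates, R, n_hops=3):
    """q: (d,) query embedding; memory: (n, d) utterance embeddings;
    candidates: (|Y|, d) candidate-response embeddings; R: (d, d)."""
    for _ in range(n_hops):
        p = softmax(memory @ q)   # attention over memory slots
        o = R @ (p @ memory)      # weighted sum, mapped by R
        q = q + o                 # query update between hops
    scores = candidates @ q       # inner product with each candidate
    return softmax(scores)        # distribution over responses
```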
We first propose two personalized models. The Profile Model introduces the personality of the interlocutor explicitly, using a profile embedding, and implicitly, using a global memory. The Preference Model models user preferences over knowledge base entities. The two models are independent of each other, and we also explore their combination as the Personalized MemN2N. Figure FIGREF8 shows the structure of the combined model, with the different components labeled by dashed boxes.

The user profile representation is defined as follows. Each interlocutor has a user profile represented by $n$ attributes $\{(k_i, v_i)\}_{i=1}^{n}$, where $k_i$ and $v_i$ denote the key and value of the $i$-th attribute. Taking the user in the first dialog of Figure FIGREF1 as an example, the representation would be {(Gender, Male), (Age, Young)}. The $i$-th profile attribute is represented as a one-hot vector $\mathbf{a}_i$, where there are $|V_{k_i}|$ possible values for key $k_i$. We define the user profile $\mathbf{a}$ as the concatenation of the one-hot representations of the attributes: $\mathbf{a} = [\mathbf{a}_1; \ldots; \mathbf{a}_n]$. The notation for the memory network is the same as introduced in Section SECREF3.

Our first model is the Profile Model, which aims to integrate personalized information into the query and ranking parts of the MemN2N. The model consists of two components: profile embedding and global memory.

Profile Embedding: In the MemN2N, the query $\mathbf{q}$ plays a key role both in reading the memory and in choosing the response, yet it contains no information about the user. We expect to add a personalized information term to $\mathbf{q}$ at each iteration, so that the model is aware of the user profile both when searching for relevant utterances in the memory and when selecting the final response from the candidates. We thus obtain a distributed profile representation $\mathbf{p}$ by applying a linear transformation to the one-hot user profile, $\mathbf{p} = \mathbf{P}\mathbf{a}$; note that this distributed profile representation shares the same embedding dimension $d$ as the bag of words. The query update equation becomes

$$\mathbf{q}^{(k+1)} = \mathbf{q}^{(k)} + \mathbf{o}^{(k)} + \mathbf{p},$$

where $\mathbf{q}^{(k)}$ and $\mathbf{o}^{(k)}$ are the query and output at the $k$-th hop, respectively. Also, the likelihood of a candidate being selected should be directly affected by the user profile, no matter what the query is. We therefore obtain tendency weights by computing the inner product between $\mathbf{p}$ and the candidates, followed by a sigmoid, and revise the candidates accordingly:

$$\beta_i = \sigma(\mathbf{p}^\top \hat{\mathbf{y}}_i), \qquad \hat{\mathbf{y}}^*_i = \beta_i \, \hat{\mathbf{y}}_i,$$

where $\sigma$ is the sigmoid function. The prediction $\hat{p}$ is then computed as before, using $\hat{\mathbf{y}}^*$ instead of $\hat{\mathbf{y}}$.

Global Memory: Users with similar profiles may expect the same or similar responses to a certain request. Therefore, instead of using the profile directly, we also implicitly integrate personalized information about an interlocutor by utilizing the conversation history from similar users as a global memory. The definition of similarity varies with task domains; in this paper, we regard those with the same profile as similar users. As shown in Figure FIGREF8, the global memory component has a structure identical to the original MemN2N; the difference is that the contents of the memory are history utterances from other similar users, instead of the current conversation. Analogously, we construct the attention weights, output vector and iteration equation as

$$p^g_i = \mathrm{softmax}\big((\mathbf{q}^{(k)})^\top \mathbf{g}_i\big), \qquad \mathbf{o}^{g(k)} = \mathbf{R}^g \sum_i p^g_i \mathbf{g}_i, \qquad \tilde{\mathbf{q}}^{(k+1)} = \mathbf{q}^{(k)} + \mathbf{o}^{g(k)},$$

where $\mathbf{g}$ denotes the global memory, $p^g$ the attention weights over the global memory, $\mathbf{R}^g$ a $d \times d$ square matrix, $\mathbf{o}^g$ the intermediate output vector, and $\tilde{\mathbf{q}}^{(k+1)}$ the result at the $k$-th iteration. Lastly, we use $\tilde{\mathbf{q}}$ instead of $\mathbf{q}$ in the subsequent computation.
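Extending the earlier MemN2N sketch with the Profile Model's two personalization hooks (the profile term in the query update and the tendency-weighted candidates); again a sketch of the equations above, reusing the `softmax` helper from the previous snippet, not the released implementation:

```python
import numpy as np

def profile_model_respond(q, memory, candidates, R, P, a, n_hops=3):
    """a: one-hot profile vector; P: (d, |a|) profile embedding matrix."""
    p_emb = P @ a                          # distributed profile representation
    for _ in range(n_hops):
        attn = softmax(memory @ q)
        o = R @ (attn @ memory)
        q = q + o + p_emb                  # profile term added at every hop
    beta = 1.0 / (1.0 + np.exp(-(candidates @ p_emb)))  # tendency weights
    scores = (beta[:, None] * candidates) @ q           # revised candidates
    return softmax(scores)
```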
The Profile Model has not yet solved the challenge of handling ambiguity among KB entities, such as the choice between phone and social media in Figure FIGREF1. The ambiguity refers to the user preference when more than one valid entity is available for a specific request. We propose inferring such preferences by taking the relation between the user profile and the knowledge base into account. Assume we have a knowledge base that describes the details of several items, where each row denotes an item and each column denotes one of their properties; the entity $e_{jk}$ at row $j$ and column $k$ is the value of the $k$-th property of item $j$. The Preference Model operates as follows. Given a user profile and a knowledge base with $m$ columns, we predict the user's preference $\mathbf{v} \in \mathbb{R}^m$ over the columns as a learned function of the profile vector $\mathbf{a}$. Note that we assume the bot cannot provide more than one option in a single response, so a candidate contains at most one entity. The probability of choosing a candidate response should be affected by this preference if the response mentions one of the KB entities. We add a bias term $\mathbf{b}$ to revise the logits of the prediction. The bias for the $i$-th candidate, $b_i$, is constructed as follows: if the $i$-th candidate contains no entity, then $b_i = 0$; if the candidate contains an entity $e_{jk}$, which belongs to item $j$ and column $k$, then $b_i = v_k$ if item $j$ has been mentioned in the current conversation context, and $b_i = 0$ otherwise. For example, the candidate "Here is the information: The_Place_Phone" contains the KB entity The_Place_Phone, which belongs to the restaurant The_Place and the column Phone; if The_Place has been mentioned in the conversation, the bias term for this response is the preference score of the Phone column. We update the prediction to

$$\hat{p} = \mathrm{softmax}\big((\mathbf{q}^{(N+1)})^\top \hat{\mathbf{y}}_1 + b_1, \ldots, (\mathbf{q}^{(N+1)})^\top \hat{\mathbf{y}}_{|Y|} + b_{|Y|}\big).$$

As discussed previously, the Profile Model and the Preference Model contribute to personalization in different aspects: the Profile Model enables the MemN2N to change its response policy based on the user profile but fails to establish a clear connection between the user and the knowledge base, while the Preference Model bridges this gap by learning the user's preferences over the KB entities. To take advantage of both, we construct the general Personalized MemN2N by combining them, as shown in Algorithm SECREF16. All these models are trained to minimize a standard cross-entropy loss between $\hat{p}$ and the true label.

Algorithm (Response Prediction by Personalized MemN2N). Input: user utterance $c^u_t$; context memory $\mathbf{m}$; global memory $\mathbf{g}$; candidates $Y$; user profile $\mathbf{a}$. Output: the index of the next response.
1. Compute the profile embedding $\mathbf{p}$ and the initial query.
2. For each of the $N$ hops, attend over the context memory and the global memory, and update the query with the hop outputs and the profile term.
3. Compute the preference bias term $\mathbf{b}$ from the profile and the KB entities mentioned in the context.
4. Revise the candidates with the tendency weights and select the response with the highest score.

The personalized bAbI dialog dataset BIBREF5 is a multi-turn dialog corpus extended from the bAbI dialog dataset BIBREF3. It introduces an additional user profile associated with each dialog and updates the utterances and KB entities to integrate personalized style. Five separate tasks in a restaurant reservation scenario are introduced along with the dataset; we briefly introduce them here for a better understanding of our experiments (more details can be found in BIBREF5):

- Task 1 (Issuing API Calls): users make queries that contain several blanks to fill in; the bot must ask proper questions to fill the missing fields and make the correct API calls.
- Task 2 (Updating API Calls): users may update their requests, and the bot must change the API call accordingly.
- Task 3 (Displaying Options): given a user request, the KB is queried and the returned facts are added to the dialog history; the bot is supposed to sort the options based on how much the user would like the restaurant, and must be conscious of the user profile and change its sorting strategy accordingly to accomplish this task.
- Task 4 (Providing Information): users ask for information about a restaurant, and more than one answer may meet the requirement (i.e., "contact" with respect to social media account vs. phone number); the bot must infer which answer the user prefers based on the user profile.
- Task 5 (Full Dialog): this task conducts full dialogs combining all the aspects of Tasks 1 to 4.

The difficulties of personalization in these tasks are not incremental. In Tasks 1 and 2, the bot is only required to select responses with appropriate meaning and language style. In Tasks 3 and 4, the knowledge base must be searched, which makes personalization harder: apart from capturing shallow personalized features in the utterances, such as language style, the bot also has to learn different searching or sorting strategies for different user profiles. In Task 5 we expect an average performance, utterance-wise, since it combines the other four tasks.
The personalized bAbI dialog dataset BIBREF5 is a multi-turn dialog corpus extended from the bAbI dialog dataset BIBREF3. It introduces an additional user profile associated with each dialog and updates the utterances and KB entities to integrate personalized style. Five separate tasks in a restaurant reservation scenario are introduced along with the dataset. We briefly introduce them here for a better understanding of our experiments; more details on the dataset can be found in the work by BIBREF5.

Task 1 (Issuing API Calls): users make queries that contain several blanks to fill in. The bot must ask proper questions to fill the missing fields and make the correct API calls.

Task 2 (Updating API Calls): users may update their requests, and the bot must change the API call accordingly.

Task 3 (Displaying Options): given a user request, the KB is queried and the returned facts are added to the dialog history. The bot is supposed to sort the options based on how much the user would like each restaurant; to accomplish this, it must be conscious of the user profile and change its sorting strategy accordingly.

Task 4 (Providing Information): users ask for information about a restaurant, and more than one answer may meet the requirement (i.e., for contact information, both a social media account and a phone number may apply). The bot must infer which answer the user prefers based on the user profile.

Task 5 (Full Dialog): this task conducts full dialogs combining all the aspects of Tasks 1 to 4.

The difficulties of personalization in these tasks are not incremental. In Tasks 1 and 2, the bot is only required to select responses with appropriate meaning and language style. In Tasks 3 and 4, the knowledge base must be searched, which makes personalization harder: apart from capturing shallow personalized features in the utterances, such as language style, the bot also has to learn different searching or sorting strategies for different user profiles. In Task 5 we expect average performance utterance-wise, since it combines the other four tasks.

There are two variations of the dataset provided for each task: a full set with around 6000 dialogs and a small set with only 1000 dialogs, to create realistic learning conditions. We use the dataset as released on ParlAI.

We consider the following baselines:

Supervised Embedding Model: a strong baseline for both chit-chat and goal-oriented dialog BIBREF20, BIBREF3.

Memory Network: the MemN2N by BIBREF3, described in detail in Section SECREF3. We add the profile information as an utterance said by the user at the beginning of each dialog; in this way the standard MemN2N may capture the user persona to some extent.

Split Memory Network: the model proposed by BIBREF5, which splits the memory into two parts, profile attributes and conversation history. The various attributes are stored as separate entries in the profile memory before the dialog starts, and the conversation memory operates the same way as in the MemN2N.

The parameters are updated with the Nesterov accelerated gradient algorithm BIBREF21 and initialized with the Xavier initializer. We tried different combinations of hyperparameters and found the following settings to work best: the learning rate is INLINEFORM0, and the momentum parameter INLINEFORM1 is INLINEFORM2. Gradients are clipped with a threshold of 10 to avoid gradient explosion. We employ early stopping as a regularization strategy. Models are trained in mini-batches with a batch size of 64. The dimensionality of word and profile embeddings is 128. We set the maximum context memory and global memory sizes (i.e., number of utterances) to 250 and 1000, respectively: we pad with zeros if the number of utterances in a memory is smaller than that, and otherwise keep the last 250 utterances for the context memory or randomly choose 1000 valid utterances for the global memory (a sketch of this construction is given at the end of this section).

Following BIBREF5, we report per-response accuracy across all models and tasks on the personalized bAbI dataset in Table TABREF18. The per-response accuracy is the percentage of correctly chosen candidates.

Rows 4 to 6 of Table TABREF18 show the evaluation results of the Profile Model. As reported in BIBREF5, their personalized dialog model might be too complex for simple tasks such as Tasks 1 and 2, which do not rely on KB facts, and it tends to overfit the training data. This is reflected in the failure of the split memory model on Tasks 1 and 2: although it outperforms the standard MemN2N on some complicated tasks, the latter is good enough to capture profile information given in a simple raw-text format and beats the split memory model on the simpler tasks. To overcome this challenge, we avoid using excessively complex structures to model the personality: instead, we represent the profile only as an embedding vector, or implicitly. As expected, both the profile embedding and the global memory approach accomplish Tasks 1 and 2 with very high accuracy, and also notably outperform the baselines on Task 3, which requires utilizing KB facts along with the profile information. Moreover, the performance of combining the two components, shown in row 6, is slightly better than using either of them independently. This result suggests that we can take advantage of profile information explicitly and implicitly at the same time. Since the Profile Model does not build a clear connection between the user and the knowledge base, as discussed in Section SECREF4, it may not resolve ambiguities among the KB columns. The experimental results are consistent with this inference: the performance of the Profile Model on Task 4, which requires user request disambiguation, is particularly close to the baselines.
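The fixed-size memory construction referenced in the training details above can be sketched as follows. This is illustrative toy code rather than the authors' implementation; a real system would pad with zero embeddings instead of a literal pad token.

```python
import random

def build_memory(utterances, size, sample=False, pad="<pad>"):
    """Pad or truncate a list of utterances to a fixed memory size.

    Keeps the most recent `size` utterances for the context memory, or
    randomly samples `size` of them for the global memory (sample=True).
    """
    if len(utterances) >= size:
        return random.sample(utterances, size) if sample else utterances[-size:]
    return utterances + [pad] * (size - len(utterances))

# toy usage
dialog_history = [f"utterance {i}" for i in range(300)]
neighbor_utterances = [f"neighbor utterance {i}" for i in range(1500)]
context_memory = build_memory(dialog_history, 250)                  # last 250
global_memory = build_memory(neighbor_utterances, 1000, sample=True)
```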
Row 7 shows the evaluation results of the Preference Model which is proposed to handle the above mentioned challenge The model achieves significant improvements on Task 4 by introducing the bias term derived from the learned user preference Besides the restaurant sorting challenge in Task 3 depends on the properties of a restaurant to some extent Intuitively different properties of the restaurants are weighted differently and the user preference over the KB columns can be considered as scoring weights which is useful for tasksolving As a result the model also improves the performance in Task 3 compared to the standard MemN2N We test the performance of the combined Personalized MemN2N as well As we have analyzed in Section SECREF4 the Profile Model and the Preference Model make contributions to personalization in different aspects and their combination has the potential to take advantages of both models Experiment results confirm our hypothesis that the combined model achieves the best performance with over 7 and 9 on small sets improvement over the best baseline for the full dialog task Task 5 As the proposed Personalized MemN2N achieves better performance than previous approaches we conduct an analysis to gain further insight on how the integration of profile and preference helps the response retrieval Since we use the learned profile embeddings to obtain tendency weights for candidates selection as is illustrated in Equation EQREF10 we expect to observe larger weights on candidates that correctly match the profile For instance given a profile Gender Male Age Young we can generate a weight for each response candidate Due to the fact that candidates are collected from dialogs with different users they can be divided based on the user profile Those candidates in the group of young male should have larger weights than others We group the candidates by their corresponding user profile For each profile we generate tendency weights and collect the average value for each group Figure FIGREF27 visualizes the results by a confusion matrix The weights on the diagonal are significantly larger than others which demonstrates the contribution of profile embeddings in candidate selection To better illustrate how much the global memory impacts the performance of the proposed model we conduct a control experiment Specifically we build a model with the same global memory component as described in Section SECREF7 but the utterances in the memory are from randomly chosen users rather than similar users We report the results of the control experiment on Task 5 in Table TABREF29 The numbers indicate that the global memory does help improve the performance Remember that we use a preference vector INLINEFORM0 to represent the users preference over the columns in the knowledge base Therefore we investigate the learned arguments grouped by profile attributes As seen in Figure FIGREF31 the model successfully learns the fact that young people prefer social media as their contact information while middleaged and elderly people prefer phone number The result shows great potential and advantage of endtoend models They are capable of learning meaningful intermediate arguments while being much simpler than existing reinforcement learning methods and pipeline models for the task of personalization in dialogs To demonstrate the effectiveness of the personalization approach over standard models more convincingly we build an interactive system based on the proposed model and baselines and conduct a human evaluation Since it 
is impractical to find testers with all profiles we need we randomly build 20 profiles with different genders ages and preferences and ask three judges to act as the given roles They talk to the system and score the conversations in terms of task completion rate and satisfaction Task completion rate stands for how much the system accomplish the users goal Satisfaction refers to whether the responses are appropriate to the user profile The scores are averaged and range from 0 to 1 0 is the worst and 1 is perfect We find that Personalized MemN2N wins the MemN2N baseline with INLINEFORM0 and INLINEFORM1 higher in terms of task completion rate and satisfaction respectively with INLINEFORM2 We introduce a novel endtoend model for personalization in goaloriented dialog Experiment results on open datasets and further analysis show that the model is capable of overcoming some existing issues in dialog systems The model improves the effectiveness of the bot responses with personalized information and thus greatly outperforms stateoftheart methods In future work more representations of personalities apart from the profile attribute can be introduced into goaloriented dialogs models Besides we may explore on learning profile representations for nondomainspecific tasks and consider KB with more complex format such as ontologies We thank all reviewers for providing the constructive suggestions Also thanks to Danni Liu Haoyan Liu and Yuanhao Xiong for the helpful discussion and proofreading Xu Sun is the corresponding author of this paper </s> |
<s>[INST] Determining the Scale of Impact from Denial-of-Service Attacks in Real Time Using Twitter [/INST] Denial of Service attacks are explicit attempts to stop legitimate users from accessing specific network systems BIBREF0 Attackers try to exhaust network resources like bandwidth or server resources like CPU and memory As a result the targeted system slows down or becomes unusable BIBREF1 Online service providers like Bank Of America Facebook and Reddit are often the target of such attacks and the frequency and scale of those attacks has increased rapidly in recent years BIBREF2 To address this problem there is ample previous work on methods to detect and handle Denial of Service attacks especially Distributed Denial of Service attacks DWARD BIBREF3 is a scheme that tries to locate a DDoS attacks at the source by monitoring inbound and outbound traffic of a network and comparing it with predefined normal values Some IP Traceback mechanisms BIBREF4 were developed to trace back to the attack source from the victims end Still other methods try to deploy a defensive scheme in an entire network to detect and respond to an attack at intermediate subnetworks Watchers BIBREF5 is an example of this approach Despite all the new models and techniques to prevent or handle cyber attacks DDoS attacks keep evolving Services are still being attacked frequently and brought down from time to time After a service is disrupted it is crucial for the provider to assess the scale of the outage impact In this paper we present a novel approach to solve this problem No matter how complex the network becomes or what methods the attackers use a denial of service attack always results in legitimate users being unable to access the network system or slowing down their access and they are usually willing to reveal this information on social media plaforms Thus legitimate user feedback can be a reliable indicator about the severity level of the service outage Thus we split this problem into two parts namely by first isolating the tweet stream that is likely related to a DoS attack and then measuring the impact of attack by analyzing the extracted tweets A central challenge to measure the impact is how to figure out the scale of the effect on users as soon as possible so that appropriate action can be taken Another difficulty is given the huge number of users of a service how to effectively get and process the user feedback With the development of Social Networks especially micro blogs like Twitter users post many life events in real time which can help with generating a fast response Another advantage of social networks is that they are widely used Twitter claims that they had 313 million monthly active users in the second quarter of 2016 BIBREF6 This characteristic will enlarge the scope of detection and is extremely helpful when dealing with cross domain attacks because tweets from multiple places can be leveraged The large number of users of social networks will also guarantee the sensitivity of the model However because of the large number of users a huge quantity of tweets will be generated in a short time making it difficult to manually annotate the tweets which makes unsupervised or weaklysupervised models much more desirable In the Twitter data that we collected there are three kinds of tweets Firstly are tweets that are actually about a cyberattack For example someone tweeted Cant sign into my account for bank of America after hackers infiltrated some accounts on September 19 2012 when a attack on the 
website happened Secondly are tweets about some random complaints about an entity like Death to Bank of America RIP my Hello Kitty card which also appeared on that day Lastly are tweets about other things related to the bank For example another tweet on the same day is Should iget an account with bank of america or welsfargo To find out the scale of impact from an attack we must first pick out the tweets that are about the attack Then using the ratio and number of attack tweets an estimation of severity can be generated To solve the problem of detecting Denial of Service attacks from tweets we constructed a weaklysupervised Natural Language Processing NLP based model to process the feeds More generally this is a new event detection model We hypothesize that new topics are attack topics The hypothesis would not always hold and this issue will be handled by a later module The first step of the model is to detect topics in one time window of the tweets using Latent Dirichlet Allocation BIBREF7 Then in order to get a score for each of the topics the topics in the current time window are compared with the topics in the previous time window using Symmetric KullbackLeibler Divergence KL Divergence BIBREF8 After that a score for each tweet in the time window is computed using the distribution of topics for the tweet and the score of the topics Were looking for tweets on new topics through time While the experiments show promising results precision can be further increased by adding a layer of a supervised classifier trained with attack data at the expense of recall Following are the contributions in this paper A dataset of annotated tweets extracted from Twitter during DoS attacks on a variety organizations from differing domains such as banking like Bank Of America and technology A weaklysupervised approach to identifying detect likely DoS service related events on twitter in realtime A score to measure impact of the DoS attack based on the frequency of user complaints about the event The rest of this paper is organized as follows In section 2 previous work regarding DDoS attack detection and new event detection will be discussed In section 3 we describe the how the data was collected We also present the model we created to estimate the impact of DDoS attacks from Twitter feeds In section 4 the experiments are described and the results are provided In section 5 we discuss some additional questions Finally section 6 concludes our paper and describes future work Denial of Service DoS attacks are a major threat to Internet security and detecting them has been a core task of the security community for more than a decade There exists significant amount of prior work in this domain BIBREF9 BIBREF10 BIBREF11 all introduced different methods to tackle this problem The major difference between this work and previous ones are that instead of working on the data of the network itself we use the reactions of users on social networks to identify an intrusion Due to the widespread use of social networks they have become an important platform for realworld event detection in recent years BIBREF12 BIBREF13 defined the task of new event detection as identifying the first story on topics of interest through constantly monitoring news streams Atefeh et al BIBREF14 provided a comprehensive overview of event detection methods that have been applied to twitter data We will discuss some of the approaches that are closely related to our work Weng et al BIBREF15 used a waveletsignal clustering method to build a signal 
for individual words in the tweets that was dependent high frequency words that repeated themselves The signals were clustered to detect events Sankaranarayanan et al BIBREF16 presented an unsupervised news detection method based on naive Bayes classifiers and online clustering BIBREF17 described an unsupervised method to detect general new event detection using Hierarchical divisive clustering Phuvipadawat et al BIBREF18 discussed a pipeline to collect cluster rank tweets and ultimately track events They computed the similarity between tweets using TFIDF The Stanford Named Entity Recognizer was used to identify nouns in the tweets providing additional features while computing the TFIDF score Petrovi et al BIBREF19 tried to detect events on a large web corpus by applying a modified locality sensitive hashing technique and clustering documents tweets together Benson et al BIBREF20 created a graphical model that learned a latent representation for twitter messages ultimately generating a canonical value for each event Tweetscan BIBREF21 was a method to detect events in a specific geolocation After extracting features such as name time and location from the tweet the method used DBSCAN to cluster the tweets and Hierarchical Dirichlet Process to model the topics in the tweets Badjatiya et al BIBREF22 applied deep neural networks to detect events They showed different architectures such as Convolutional Neural Networks CNN Recurrent Neural Networks LSTM based and FastText outperform standard ngram and TFIDF models Burel et al BIBREF23 created a DualCNN that had an additional channel to model the named entities in tweets apart from the pretrained word vectors from GloVe BIBREF24 or Word2Vec BIBREF25 Thus most event detection models can be grouped into three main categories of methods ie TFIDF based methods approaches that model topics in tweets and deep neural network based algorithms One of the main challenges against applying a neural network model is the the requirement of a large annotated corpus of tweets Our corpus of tweets is comparatively small Hence we build our pipeline by modeling the topics learned from tweets The previous work that is most similar to ours was BIBREF26 We both used Latent Dirichlet Allocation LDA to get the topics of the document the difference was they only run LDA on the hashtag of the tweets while we try to get the topics in the tweets by running it on the whole document Latent Dirichlet Allocation BIBREF7 was a method to get topics from a corpus In our work we used the technique to acquire the values of some of the variables in our equation A variation of it Hierarchically Supervised Latent Dirichlet Allocation BIBREF27 was used in the evaluation Figure FIGREF4 outlines the entire pipeline of the model from preprocessing tweets to modeling them and finally detecting ranking future tweets that are related to a DoS issue and measuring its severity To collect the tweets we first gathered a list of big DDoS attacks happened from 2012 to 2014 Then for each attack on the list we collected all the tweets from one week before the attack to the attack day that contains the name of the entity attacked The following preprocessing procedure were applied to the corpus of tweets Remove all the metadata like time stamp author and so on These metadata could provide useful information but only the content of the tweet was used for now Lowercase all the text Use an English stop word list to filter out stop words The last two steps are commonly used technique when preprocessing 
text. Now we derive a quantitative representation of the corpus. To do that, the preprocessed tweets about one attack are divided into two groups: one contains the tweets posted on the attack day, and the other the tweets from the week before it. The first set will be called Da and the other Db. This step creates two separate LDA models for Da and Db using the Gensim library BIBREF28; the first model will be called Ma and the other Mb.

Latent Dirichlet Allocation (LDA) is a generative probabilistic topic model. Figure FIGREF11 shows its plate notation; the meaning of the parameters M, N, alpha, beta, theta, z and w is described there. We used the LDA algorithm implemented by the Gensim library. One of the most important parameters of the LDA algorithm is the number of topics Nt in the corpus. To determine it, we introduced a formula in which Nt is a function of Nd, the number of tweets in the corpus, and a constant alpha; we used alpha = 10 in our experiments. The logic behind the equation is discussed in Section 5.

Then we want to find out how the new topics differ from the history topics, or in other words how the topics in Ma differ from the topics in Mb. We define the Symmetric Kullback-Leibler divergence for topic Tj in model Ma as

SKL(Tj) = min over m = 1..n of [ DKL(Tj || T'm) + DKL(T'm || Tj) ],

where n is the number of topics in model Mb, T'm is the m-th topic in model Mb, and DKL(X || Y) is the original Kullback-Leibler divergence for discrete probability distributions, defined as

DKL(X || Y) = sum over i of Xi * log(Xi / Yi),

where Xi and Yi are the probabilities of token i in topics X and Y, respectively. This is similar to the Jensen-Shannon divergence. So, for each topic Tj in model Ma, its difference from the topics in Mb is determined by its most similar topic in Mb. The topics from the attack-day model Ma are ranked by their Symmetric Kullback-Leibler divergence to the topics of the non-attack-day model Mb. An example of selected attack topics is provided in Section 4.3.

This subsection is about how to find the specific tweets that are about a network attack. The tweets are selected based on the relative score S. The score for tweet ti is defined as

S(ti) = sum over j = 1..n of Pij * SKLj,

where n is the number of topics on the attack day, Pij is the probability that topic j appears in tweet ti under the attack-day LDA model, and SKLj is the Symmetric Kullback-Leibler divergence of topic j (a toy implementation of this scoring pipeline is sketched at the end of this subsection). The higher the score, the more likely the tweet is related to an attack event. Because annotated data is not needed, the model described so far can be regarded as a weakly-supervised model for detecting new events on Twitter in a given time period. To label these tweets as attack tweets, one assumption must hold: that the new event in that time period is a cyber attack. Unfortunately, that is usually not true, so an optional classifier layer can be used to prevent false positives.

By using a decision tree model, we want to find out whether the weakly-supervised part of the model simplifies the problem enough that a simple classification algorithm like a decision tree can obtain good results. Additionally, it is easy to inspect the reasoning underlying a decision tree model, so we can identify the most important features. The decision tree classifier is trained on the bag of words of the collected tweets, with manually annotated labels. We limit the minimum number of samples in each leaf to no less than 4 so that the tree will not overfit; other than that, a standard Classification and Regression Tree (CART) BIBREF29, as implemented in scikit-learn BIBREF30, was used. The classifier was trained only on the training-set tweets (about Bank of America on 09/19/2012), so that the test results do not overestimate accuracy.
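The scoring pipeline of this section can be summarized in a toy snippet. The topic-word matrices below are made-up values, and the min over history topics follows the reading of the SKL definition above (the distance of an attack-day topic is taken to its most similar history topic); this is a sketch, not the authors' code.

```python
import numpy as np

def kl(x, y, eps=1e-12):
    """Discrete KL divergence D_KL(X || Y) = sum_i X_i * log(X_i / Y_i)."""
    x = np.asarray(x, dtype=float) + eps
    y = np.asarray(y, dtype=float) + eps
    return float(np.sum(x * np.log(x / y)))

def symmetric_kl(topic, history_topics):
    """SKL of an attack-day topic: symmetric KL to its closest history topic."""
    return min(kl(topic, t) + kl(t, topic) for t in history_topics)

def tweet_scores(doc_topic_probs, skl_per_topic):
    """Relative score S_i = sum_j P_ij * SKL_j for each tweet."""
    return np.asarray(doc_topic_probs) @ np.asarray(skl_per_topic)

# toy example: 3 attack-day topics (Ma) vs. 2 history topics (Mb), 4-word vocab
Ma = np.array([[0.70, 0.10, 0.10, 0.10],
               [0.10, 0.70, 0.10, 0.10],
               [0.25, 0.25, 0.25, 0.25]])
Mb = np.array([[0.68, 0.12, 0.10, 0.10],
               [0.10, 0.10, 0.40, 0.40]])
skl = [symmetric_kl(t, Mb) for t in Ma]
S = tweet_scores([[0.9, 0.05, 0.05], [0.1, 0.1, 0.8]], skl)  # 2 toy tweets
```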
The definition of severity varies across network services and should be studied case by case. For the sake of completeness, we propose the general formula

SeverityLevel = beta * (Nattack / Nall) + (1 - beta) * (Nattack / Nuser),

where beta is a parameter from 0 to 1 that determines the weight of the two parts, Nattack is the number of attack tweets found, Nall is the number of all tweets collected in the time period, and Nuser is the number of Twitter followers of the network service. An interesting direction for future work is to find the quantitative relation between the SeverityLevel score and the size of the actual DDoS attack.
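Written as code, the severity measure is a one-line function. The usage values in the comments are the ones reported in Section 4.5 (40 attack tweets out of 590 collected, and the combined follower count discussed there); this is a sketch of the formula above, not an official implementation.

```python
def severity_level(n_attack, n_all, n_user, beta=0.5):
    """SeverityLevel = beta * (n_attack / n_all) + (1 - beta) * (n_attack / n_user)."""
    if not 0.0 <= beta <= 1.0:
        raise ValueError("beta must be in [0, 1]")
    return beta * n_attack / n_all + (1 - beta) * n_attack / n_user

# endpoints of the range reported in Section 4.5
print(severity_level(40, 590, 30_830, beta=1.0))  # ~6.78e-2, tweet-ratio part
print(severity_level(40, 590, 30_830, beta=0.0))  # ~1.30e-3, follower part
```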
In this section we experimentally study the proposed attack-tweet detection models and report the evaluation results. We use precision and recall for evaluation. Precision: out of all tweets marked as attack tweets, the percentage that are actually attack tweets (true positives over true positives plus false positives). Recall: out of all actual attack tweets, the percentage that are labeled as attack tweets (true positives over true positives plus false negatives).

We collected tweets related to five different DDoS attacks on three different American banks. For each attack, all tweets containing the bank's name posted from one week before the attack until the attack day were collected, for a total of 35214 tweets in the dataset. The collected tweets were then preprocessed as described in the preprocessing section. The dataset covers the following attacks: Bank of America on 09/19/2012, Wells Fargo Bank on 09/19/2012, Wells Fargo Bank on 09/25/2012, PNC Bank on 09/19/2012, and PNC Bank on 09/26/2012.

Only the tweets from the Bank of America attack on 09/19/2012 were used in this experiment. The tweets before the attack day and on the attack day were used to train the two LDA models described in the approach section. The top and bottom 4 attack topics and their top 10 words are shown in Tables 1 and 2. As shown in Table 1, there are roughly four kinds of words in the attack topics. The first is the name of the entity we are watching, in this case Bank of America; these words occur in every tweet, so they get very high weight in the topics while providing no useful information, and they can be safely discarded or added to the stop-word list. The second type are general cybersecurity words like website, outage, hackers, slowdown and so on; these have the potential to act as indicators, since when topics with those words appear it is likely that an attack exists. The third kind are words related to the specific attack but not to attacks in general; they can provide details about the attack, but it is hard to identify them without reading the full tweets. In our example, the words movie and sacrilegious belong to this group, because the DDoS attack on Bank of America was in response to the release of a controversial, allegedly sacrilegious film. The remaining words are non-related words: the higher their weights in a topic, the less likely the topic is actually about a DDoS attack. The results show that, except for the 3rd topic, the top 4 topics have high weights on related words, and the number of words of the fourth type is smaller than of the first three types. There are no high-weight security-related words in the bottom 4 topics. We can therefore say that the high-SKL topics are about cyber attacks.

In this subsection we discuss the experiment on the attack tweets found in the whole dataset. As stated in Section 3.3, the whole dataset was divided into two parts: Da contained all of the tweets collected on the attack days of the five attacks mentioned in Section 4.2, and Db contained all of the tweets collected before the five attacks. There are 1180 tweets in Da and 7979 tweets in Db. The tweets on the attack days (Da) were manually annotated, and only 50 percent of those tweets are actually about a DDoS attack. The 5 tweets with the highest relative score in the dataset are:

- jiwa mines and miner us bancorp pnc latest bank websites to face access issues reuters some us bancorp httpbitlyp5xpmz
- us bancorp pnc latest bank websites to face access issues reuters some us bancorp and pnc financial
- pncvwallet nothing pnc sucks fat d ur lucky theres 3 pncs around me or your bitchassness wouldnt have my money
- business us bancorp pnc latest bank websites to face access issues reuters news forex
- business us bancorp pnc latest bank websites to face access issues httpdlvrit2d9ths

The precision obtained when labeling the first x ranked tweets as attack tweets is shown in Figure FIGREF39: the x-axis is the number of ranked tweets treated as attack tweets, and the y-axis is the corresponding precision. The straight line in Figures FIGREF39, FIGREF43 and FIGREF51 is the result of a supervised LDA algorithm, used as a baseline; supervised LDA achieved 96.44 percent precision with 10-fold cross-validation. The results show that if the model is set to be more cautious about labeling a tweet as an attack tweet (a small x value), higher precision, even comparable to the supervised model, can be achieved. However, as the x value increases, the precision eventually drops. Figure FIGREF40 shows the recall in the same setting: the recall increases as the model becomes bolder, at the expense of precision. Figure FIGREF41 is a detection error tradeoff graph that shows the relation between precision and recall more clearly (here the missed detection rate is 1 - precision).

In this subsection we evaluate how well the model generalizes. To achieve that, the dataset is divided into two groups: one about the attacks on Bank of America, and the other about PNC and Wells Fargo. The only difference between this experiment and the one in Section 4.4 is the dataset: in this setting, Da contains only the tweets collected on the days of the attacks on PNC and Wells Fargo, and Db contains only the tweets collected before the Bank of America attack. There are 590 tweets in Da and 5229 tweets in Db. In this experiment we want to find out whether a model trained on Bank of America data can classify PNC and Wells Fargo data well. Figures FIGREF43 and FIGREF44 show the precision and recall of the model in this setting; a detection error tradeoff graph (Figure FIGREF45) is also provided. The result is similar to the whole-dataset setting from the previous section: the smaller the x value, the higher the precision and the lower the recall, and vice versa. The precision is again comparable to the supervised model when a small x is chosen, which shows that the model generalizes well.

Using the result from the last section, we choose to label the first 40 tweets as attack tweets. The number 40 can be decided either by the number of tweets labeled as attack tweets by the decision tree classifier, or by the number of tweets whose relative score S exceeds a threshold. The PNC and Wells Fargo banks had 30.83k followers combined as of July 2018. According to Equation 5 from Section 3.6, the severity level can then be computed: the score ranges from 6.78 x 10^-2 to 1.30 x 10^-3, depending on the value of beta. This means that it could be a
fairly important event because more than six percent of tweets mentioning the banks are talking about the DDoS attack However it could also be a minor attack because only a tiny portion of the people following those banks are complaining about the outage The value of beta should depend on the providers own definition of severity This model has two parameters that need to be provided One is alpha which is needed to determine the number of topics parameter Nt and the other is whether to use the optional decision tree filter Figures FIGREF49 and FIGREF50 provide experimental results on the model with different combinations of parameters We selected four combinations that have the best and worst performance All of the results can be found in appendix The model was trained on Bank of America tweets and tested on PNC and Wells Fargo tweets like in section 45 In the figure different lines have different values of alpha which ranges from 5 to 14 and the x axis is the number of ranked tweets labeled as attack tweets which have a range of 1 to 100 and the yaxis is the precision or recall of the algorithm and should be a number from 0 to 1 The results shows the decision tree layer increases precision at the cost of recall The models performance differs greatly with different alpha values while there lacks a good way to find the optimal one In this section we will discuss two questions Firstly we want to briefly discuss how good humans do on this task What we find out is though humans perform well on most of the tweets some tweets have proven to be challenging without additional information In this experiment we asked 18 members of our lab to classify 34 tweets picked from human annotated ones There are only two tweets which all the 18 answers agree with each other And there are two tweets that got exactly the same number of votes on both sides The two tweets are if these shoes get sold out before i can purchase them ima be so mad that i might just switch banks bankofamerica fix yourself and nothings for sure but if i were a pnc accountholder id get my online banking business done today httplatmsuv3qlo The second question we want to talk about is how to find out the optimal number of topics in each of the two LDA models As shown in the parameter tuning section the number of topics parameter greatly affects the performance of the model Weve tried several ways to figure out the number of topics First a set number of topics for different corpora We tried 30 different topic numbers on the Bank of America dataset and chose the best one and then tested it on the PNC data The result shows that this method does not perform well on different datasets We think it is because the number of topics should be a function of the number of documents or number of words in the corpus Then we tried to let the model itself determines the parameter There are some LDA variations that can do automatic number of topic inference The one we chose is the Hierarchical Dirichlet Process HDP mixture model which is a nonparametric Bayesian approach to clustering grouped data and a natural nonparametric generalization of Latent Dirichlet Allocation BIBREF31 However it does not perform very well Its precision is shown in figure FIGREF51 and recall is shown in figure FIGREF52 We think the reason for this kind of performance might be that tweets with the restriction of 140 characters have very different properties than usual documents like news or articles The last method is what was proposed in this paper An alpha equals 10 is what we 
chose and did a good job on the experiments But it is only an empirical result In this paper we proposed a novel weaklysupervised model with optional supervised classifier layer to determine the impact of a DenialofService attack in real time using twitter The approach computes an anomaly score based on the distribution of new topics and their KL divergence to the historical topics Then we tested the model on same and different entities to check the models performance and how well it generalize Our experiment result showed that the model achieved decent result on finding out tweets related to a DDoS attack even comparable to a supervised model baseline And it could generalize to different entities within the same domain Using the attack tweets we could get an estimation of the impact of the attack with a proposed formula There remain some interesting open questions for future research For example it is important to figure out a way to find out the optimal number of topics in the dataset We would also be interested to see how well this model will perform on other kind of event detection task if the optional classifier layer changes accordingly Figures FIGREF53 and FIGREF54 provide all of the experimental results on the model with different combinations of parameters </s> |
<s>[INST] #MeTooMA: Multi-Aspect Annotations of Tweets Related to the MeToo Movement [/INST] Over the last couple of years the MeToo movement has facilitated several discussions about sexual abuse Social media especially Twitter was one of the leading platforms where people shared their experiences of sexual harassment expressed their opinions and also offered support to victims A large portion of these tweets was tagged with a dedicated hashtag MeToo and it was one of the main trending topics in many countries The movement was viral on social media and the hashtag used over 19 million times in a year The MeToo movement has been described as an essential development against the culture of sexual misconduct by many feminists activists and politicians It is one of the primary examples of successful digital activism facilitated by social media platforms The movement generated many conversations on stigmatized issues like sexual abuse and violence which were not often discussed before because of the associated fear of shame or retaliation This creates an opportunity for researchers to study how people express their opinion on a sensitive topic in an informal setting like social media However this is only possible if there are annotated datasets that explore different linguistic facets of such social media narratives Twitter served as a platform for many different types of narratives during the MeToo movement BIBREF0 It was used for sharing personal stories of abuse offering support and resources to victims and expressing support or opposition towards the movement BIBREF1 It was also used to allege individuals of sexual misconduct refute such claims and sometimes voice hateful or sarcastic comments about the campaign or individuals In some cases people also misused hashtag to share irrelevant or uninformative content To capture all these complex narratives we decided to curate a dataset of tweets related to the MeToo movement that is annotated for various linguistic aspects In this paper we present a new dataset MeTooMA that contains 9973 tweets associated with the MeToo movement annotated for relevance stance hate speech sarcasm and dialogue acts We introduce and annotate three new dialogue acts that are specific to the movement Allegation Refutation and Justification The dataset also contains geographical information about the tweets from which country it was posted We expect this dataset would be of great interest and use to both computational and sociolinguists For computational linguists it provides an opportunity to model three new complex dialogue acts allegation refutation and justification and also to study how these acts interact with some of the other linguistic components like stance hate and sarcasm For sociolinguists it provides an opportunity to explore how a movement manifests in social media across multiple countries Table TABREF3 presents a summary of datasets that contain social media posts about sexual abuse and annotated for various labels BIBREF2 created a dataset of 2500 tweets for identification of malicious intent surrounding the cases of sexual assault The tweets were annotated for labels like accusational validation sensational Khatua et al BIBREF3 collected 07 million tweets containing hashtags such as MeToo AlyssaMilano harassed The annotated a subset of 1024 tweets for the following assaultrelated labels assault at the workplace by colleagues assault at the educational institute by teachers or classmates assault at public places by strangers assault at home by a 
family member multiple instances of assaults or a generic tweet about sexual violence BIBREF4 created the Reddit Domestic Abuse Dataset which contained 18336 posts annotated for 2 classes abuse and nonabuse BIBREF5 presented a dataset consisting of 5119 tweets distributed into recollection and nonrecollection classes The tweet was annotated as recollection if it explicitly mentioned a personal instance of sexual harassment Sharifirad et al BIBREF6 created a dataset with 3240 tweets labeled into three categories of sexism Indirect sexism casual sexism physical sexism SVAC Sexual Violence in Armed Conflict is another related dataset which contains reports annotated for six different aspects of sexual violence prevalence perpetrators victims forms location and timing Unlike all the datasets described above which are annotated for a single group of labels our dataset is annotated for five different linguistic aspects It also has more annotated samples than most of its contemporaries We focused our data collection over the period of October to December 2018 because October marked the one year anniversary of the MeToo movement Our first step was to identify a list of countries where the movement was trending during the data collection period To this end we used Googles interactive tool named MeTooRisingWithGoogle which visualizes search trends of the term MeToo across the globe This helped us narrow down our query space to 16 countries We then scraped 500 random posts from online sexual harassment support forums to help identify keywords or phrases related to the movement The posts were first manually inspected by the annotators to determine if they were related to the MeToo movement Namely if they contained selfdisclosures of sexual violence relevant information about the events associated with the movement references to news articles or advertisements calling for support for the movement We then processed the relevant posts to extract a set of unigrams and bigrams with high tfidf scores The annotators further pruned this set by removing irrelevant terms resulting in a lexicon of 75 keywords Some examples include Sexual Harassment TimesUp EveryDaySexism assaulted WhenIwas inappropriate workplace harassment groped NotOkay believe survivors WhyIDidntReport We then used Twitters public streaming API to query for tweets from the selected countries over the chosen threemonth time frame containing any of the keywords This resulted in a preliminary corpus of 39406 tweets We further filtered this data down to include only English tweets based on tweets language metadata field and also excluded short tweets less than two tokens Lastly we deduplicated the dataset based on the textual content Namely we removed all tweets that had more than 08 cosine similarity score on the unaltered text in tfidf space with any another tweet We employed this deduplication to promote more lexical diversity in the dataset After this filtering we ended up with a corpus of 9973 tweets Table TABREF14 presents the distribution of the tweets by country before and after the filtering process A large portion of the samples is from India because the MeToo movement has peaked towards the end of 2018 in India There are very few samples from Russia likely because of content moderation and regulations on social media usage in the country Figure FIGREF15 gives a geographical distribution of the curated dataset Due to the sensitive nature of this data we have decided to remove any personal identifiers such as names locations and 
hyperlinks from the examples presented in this paper We also want to caution the readers that some of the examples in the rest of the paper though censored for profanity contain offensive language and express a harsh sentiment We chose against crowdsourcing the annotation process because of the sensitive nature of the data and also to ensure a high quality of annotations We employed three domain experts who had advanced degrees in clinical psychology and gender studies The annotators were first provided with the guidelines document which included instructions about each task definitions of class labels and examples They studied this document and worked on a few examples to familiarize themselves with the annotation task They also provided feedback on the document which helped us refine the instructions and class definitions The annotation process was broken down into five subtasks for a given tweet the annotators were instructed to identify relevance stance hate speech sarcasm and dialogue act An important consideration was that the subtasks were not mutually exclusive implying that the presence of one label did not consequently mean an absence of any Here the annotators had to determine if the given tweet was relevant to the MeToo movement Relevant tweets typically include personal opinions either positive or negative experiences of abuse support for victims or links to MeToo related news articles Following are examples of a relevant tweet Officer name could be kicked out of the force after admitting he groped a woman at place festival last year His lawyer argued saying the constable shouldnt be punished because of the MeToo movement notokay sexualabuse and an irrelevant tweet Had a bit of break Went to the beautiful Port place and nearby areas Absolutely stunning as usual beautiful MeToo Australia auspol URL We expect this relevance annotation could serve as a useful filter for downstream modeling Stance detection is the task of determining if the author of a text is in favour or opposition of a particular target of interest BIBREF7 BIBREF8 Stance helps understand public opinion about a topic and also has downstream applications in information extraction text summarization and textual entailment BIBREF9 We categorized stance into three classes Support Opposition Neither Support typically included tweets that expressed appreciation of the MeToo movement shared resources for victims of sexual abuse or offered empathy towards victims Following is an example of a tweet with a Support stance Opinion MeToo gives a voice to victims while bringing attention to a nationwide stigma surrounding sexual misconduct at a local levelURL This should go on On the other hand Opposition included tweets expressing dissent over the movement or demonstrating indifference towards the victims of sexual abuse or sexual violence An example of an Opposition tweet is shown below The double standards and selective outrage make it clear that feminist concerns about power imbalances in the workplace arent principles but are tools to use against powerful men they hate and wish to destroy fakefeminism men Detection of hate speech in social media has been gaining interest from NLP researchers lately BIBREF10 BIBREF11 Our annotation scheme for hate speech is based on the work of BIBREF12 For a given tweet the annotators first had to determine if it contained any hate speech If the tweet was hateful they had to identify if the hate was Directed or Generalized Directed hate is targeted at a particular individual or entity 
whereas Generalized hate is targeted at larger groups that belonged to a particular ethnicity gender or sexual orientation Following are examples of tweets with Directed hate username were lit minus getting fcig mouthraped by some drunk chick MeToo no body cares because Im a male URL and Generalized hate For the men who r asking y not then y now u guys will still doubt her harrass her even more for y she shared her story immediately no matter what When your sister will tell her childhood story to u one day i challenge u guys to ask y not then y now Metoo username URL aholes Sarcasm detection has also become a topic of interest for computational linguistics over the last few years BIBREF13 BIBREF14 with applications in areas like sentiment analysis and affective computing Sarcasm was an integral part of the MeToo movement For example many women used the hashtag NoWomanEver to sarcastically describe some of their experiences with harassment We instructed the annotators to identify the presence of any sarcasm in a tweet either about the movement or about an individual or entity Following is an example of a sarcastic tweet was pound before it was a hashtag If you replace hashtag with the pound in the metoo you get pound me too Does that apply to name A dialogue act is defined as the function of a speakers utterance during a conversation BIBREF15 for example question answer request suggestion etc Dialogue Acts have been extensive studied in spoken BIBREF16 and written BIBREF17 conversations and have lately been gaining interest in social media BIBREF18 In this task we introduced three new dialogue acts that are specific to the MeToo movement Allegation Refutation and Justification Allegation This category includes tweets that allege an individual or a group of sexual misconduct The tweet could either be personal opinion or text summarizing allegations made against someone BIBREF19 The annotators were instructed to identify if the tweet includes the hypothesis of allegation based on firsthand account or a verifiable source confirming the allegation Following is an example of a tweet that qualifies as an Allegation More women accuse name of grave sexual misconducttwitter seethes with anger MeToo pervert Refutation This category contains tweets where an individual or an organization is denying allegations with or without evidence Following is an example of a Refutation tweet She is trying to use the MeToo movement to settle old scores says name1 after name2 levels sexual assault allegations against him Justification The class includes tweets where the author is justifying their actions These could be alleged actions in the real world eg allegation of sexual misconduct or some action performed on twitter eg supporting someone who was alleged of misconduct Following is an example of a tweet that would be tagged as Justification I actually did try to report it but he and of his friends got together and lied to the police about it WhyIDidNotReport This section includes descriptive and quantitative analysis performed on the dataset We evaluated interannotator agreements using Krippendorffs alpha Kalpha BIBREF20 Kalpha unlike simple agreement measures accounts for chance correction and class distributions and can be generalized to multiple annotators Table TABREF27 summarizes the Kalpha measures for all the annotation tasks We observe very strong agreements for most of the tasks with a maximum of 092 for the relevance task The least agreement observed was for the hate speech task at 078 Per 
recommendations in BIBREF21 we conclude that these annotations are of good quality We chose a straightforward approach of majority decision for label adjudication if two or more annotators agreed on assigning a particular class label In cases of discrepancy the labels were adjudicated manually by the authors Table TABREF28 shows a distribution of class labels after adjudication Figure FIGREF24 presents a distribution of all the tweets by their country of origin As expected a large portion of the tweets across all classes are from India which is consistent with Table TABREF14 Interestingly the US contributes comparatively a smaller proportion of tweets to Justification category and likewise UK contributes a lower portion of tweets to the Generalized Hate category Further analysis is necessary to establish if these observations are statistically significant We conducted a simple experiment to understand the linguistic similarities or lack thereof for different pairs of class labels both within and across tasks To this end for each pair of labels we converted the data into its tfidf representation and then estimated Pearson Spearman and Kendall Tau correlation coefficients and also the corresponding p values The results are summarized in Table TABREF32 Overall the correlation values seem to be on a lower end with maximum Pearsons correlation value obtained for the label pair Justification Support maximum Kendall Taus correlation for Allegation Support and maximum Spearmans correlation for Directed Hate Generalized Hate The correlations are statistically significant p 005 for three pairs of class labels Directed Hate Generalized Hate Directed Hate Opposition Sarcasm Opposition Sarcasm and Allegation also have statistically significant p values for Pearson and Spearman correlations We used SAGE BIBREF22 a topic modelling method to identify keywords associated with the various class labels in our dataset SAGE is an unsupervised generative model that can identify words that distinguish one part of the corpus from rest For our keyword analysis we removed all the hashtags and only considered tokens that appeared at least five times in the corpus thus ensuring they were representative of the topic Table TABREF25 presents the top five keywords associated with each class and also their salience scores Though Directed and Generalized hate are closely related topics there is not much overlap between the top 5 salient keywords suggesting that there are linguistic cues to distinguish between them The word predators is strongly indicative of Generalized Hate which is intuitive because it is a term often used to describe people who were accused of sexual misconduct The word lol being associated with Sarcasm is also reasonably intuitive because of sarcasms close relation with humour Figure FIGREF29 presents a word cloud representation of the data where the colours are assigned based on NRC emotion lexicon BIBREF23 green for positive and red for negative We also analyzed all the classes in terms of Valence Arousal and Dominance using the NRC VAD lexicon BIBREF24 The results are summarized in Figure FIGREF33 Of all the classes DirectedHate has the largest valence spread which is likely because of the extreme nature of the opinions expressed in such tweets The spread for the dominance is fairly narrow for all class labels with the median score slightly above 05 suggesting a slightly dominant nature exhibited by the authors of the tweets This paper introduces a new dataset containing tweets related to the MeToo 
movement It may involve opinions over socially stigmatized issues or selfreports of distressing incidents Therefore it is necessary to examine the social impact of this exercise the ethics of the individuals concerned with the dataset and its limitations Mental health implications This dataset open sources posts curated by individuals who may have undergone instances of sexual exploitation in the past While we respect and applaud their decision to raise their voices against their exploitation we also understand that their revelations may have been met with public backlash and apathy in both the virtual as well as the real world In such situations where the social reputation of both accuser and accused may be under threat mental health concerns become very important As survivors recount their horrific episodes of sexual harassment it becomes imperative to provide them with therapeutic care BIBREF25 as a safeguard against mental health hazards Such measures if combined with the integration of mental health assessment tools in social media platforms can make victims of sexual abuse feel more empowered and selfcontemplative towards their revelations Use of MeTooMA dataset for population studies We would like to mention that there have been no attempts to conduct populationcentric analysis on the proposed dataset The analysis presented in this dataset should be seen as a proof of concept to examine the instances of MeToo movement on Twitter The authors acknowledge that learning from this dataset cannot be used asis for any direct social interventions Network sampling of realworld users for any experimental work beyond this dataset would require careful evaluation beyond the observational analysis presented herein Moreover the findings could be used to assist already existing human knowledge Experiences of the affected communities should be recorded and analyzed carefully which could otherwise lead to social stigmatization discrimination and societal bias Enough care has been ensured so that this work does not come across as trying to target any specific individual for their personal stance on the issues pertaining to the social theme at hand The authors do not aim to vilify individuals accused in the MeToo cases in any manner Our work tries to bring out general trends that may help researchers develop better techniques to understand mass unorganized virtual movements Effect on marginalized communities The authors recognize the impact of the MeToo movement on socially stigmatized populations like LGBTQIA The MeToo movement provided such individuals with the liberty to express their notions about instances of sexual violence and harassment The movement acted as a catalyst towards implementing social policy changes to benefit the members of these communities Hence it is essential to keep in mind that any experimental work undertaken on this dataset should try to minimize the biases against the minority groups which might get amplified in cases of sudden outburst of public reactions over sensitive media discussions Limitations of individual consent Considering the mental health aspects of the individuals concerned social media practitioners should vary of making automated interventions to aid the victims of sexual abuse as some individuals might not prefer to disclose their sexual identities or notions Concerned social media users might also repeal their social media information if found out that their personal information may be potentially utilised for computational analysis Hence it is 
imperative to seek subtle individual consent before trying to profile authors involved in online discussions to uphold personal privacy The authors would like to formally propose some ideas on possible extensions of the proposed dataset The rise of online hate speech and its related behaviours like cyberbullying has been a hot topic of research in gender studies BIBREF26 Our dataset could be utilized for extracting actionable insights and virtual dynamics to identify gender roles for analyzing sexual abuse revelations similar to BIBREF27 The dataset could be utilized by psycholinguistics for extracting contextualized lexicons to examine how influential people are portrayed on public platforms in events of mass social media movements BIBREF28 Interestingly such analysis may help linguists determine the power dynamics of authoritative people in terms of perspective and sentiment through campaign modelling Marginalized voices affected by mass social movements can be studied through polarization analysis on graphbased simulations of the social media networks Based on the data gathered from these nodes community interactions could be leveraged to identify indigenous issues pertaining to societal unrest across various sections of the societyBIBREF29 Challenge Proposal The authors of the paper would like to extend the present work as a challenge proposal for building computational semantic analysis systems aimed at online social movements In contrast to already available datasets and existing challenges we propose tasks on detecting hate speech sarcasm stance and relevancy that will be more focused on social media activities surrounding revelations of sexual abuse and harassment The tasks may utilize the messagelevel text linked images tweetlevel metadata and userlevel interactions to model systems that are Fair Accountable Interpretable and Responsible FAIR Research ideas emerging from this work should not be limited to the above discussion If needed supplementary data required to enrich this dataset can be collected utilizing Twitter API and JSON records for exploratory tasks beyond the scope of the paper In this paper we presented a new dataset annotated for five different linguistic aspects relevance stance hate speech sarcasm and dialogue acts To our knowledge there are no datasets out there that provide annotations across so many different dimensions This allows researchers to perform various multilabel and multiaspect classification experiments Additionally researchers could also address some interesting questions on how different linguistic components influence each other eg does understanding ones stance help in better prediction of hate speech In addition to these exciting computational challenges we expect this data could be useful for socio and psycholinguists in understanding the language used by victims when disclosing their experiences of abuse Likewise they could analyze the language used by alleged individuals in justifying their actions It also provides a chance to examine the language used to express hate in the context of sexual abuse In the future we would like to propose challenge tasks around this data where the participants will have to build computational models to capture all the different linguistic aspects that were annotated We expect such a task would drive researchers to ask more interesting questions find limitations of the dataset propose improvements and provide interesting insights </s> |
<s>[INST] Introducing RONEC -- the Romanian Named Entity Corpus [/INST] Language resources are an essential component in entire R&D domains. From the humble but vast repositories of monolingual texts that are used by the newest language modeling approaches like BERT and GPT, to parallel corpora that allow our machine translation systems to inch closer to human performance, to the more specialized resources like WordNets that encode semantic relations between nodes, these resources are necessary for the general advancement of Natural Language Processing, which eventually evolves into real apps and services we are already taking for granted.

We introduce RONEC -- the ROmanian Named Entity Corpus, a free, open-source resource that contains annotated named entities in copyright-free text. A named entity corpus is generally used for Named Entity Recognition (NER): the identification of entities in text, such as names of persons, locations, companies, dates, quantities, monetary values, etc. This information would be very useful for any number of applications, from a general information extraction system down to task-specific apps such as identifying monetary values in invoices, or product and company references in customer reviews.

We motivate the need for this corpus primarily because, for Romanian, there is no other such corpus. This basic necessity has sharply arisen as we, while working on a different project, found out there are no usable resources to help us in an Information Extraction task: we were unable to extract people, locations or dates/values. This constituted a major roadblock, with the only solution being to create such a corpus ourselves. As the corpus was out of scope for that project, the work was done privately, outside the umbrella of any of the authors' affiliations -- this is why we are able to distribute this corpus completely free.

The current landscape in Romania regarding language resources is relatively unchanged from the outline given by the META-NET project over six years ago. The in-depth analysis performed in this European-wide, Horizon 2020-funded project revealed that the Romanian language falls in the "fragmentary support" category, just above the last, "weak/none" category (see the language-support matrix in BIBREF3). This is why, in 2019-2020, we are able to present the first NER resource for Romanian.

We note that, while fragmentary, there are a few related language resources available, but none that specifically target named entities. ROCO BIBREF4 is a Romanian journalistic corpus that contains approx. 7.1M tokens. It is rich in proper names, numerals and named entities. The corpus has been automatically annotated at word level with morphosyntactic information (MSD annotations). Released in 2016, ROMBAC BIBREF5 is a Romanian text corpus containing 41M words, divided in relatively equal domains like journalism, legalese, fiction, medicine, etc. Similarly to ROCO, it is automatically annotated at word level with MSD descriptors. The much larger and recently released CoRoLa corpus BIBREF6 contains over 1B words, similarly automatically annotated. In all these corpora the named entities are not a separate category: the texts are morphologically and syntactically annotated and all proper nouns are marked as such (NP), without any other annotation or assigned category. Thus, these corpora cannot be used in a true NER sense. Furthermore, annotations were done automatically with a tokenizer/tagger/parser, and are thus of slightly lower quality than one would expect of a gold-standard corpus.

The corpus, at its current version 1.0, is composed of 5127 sentences annotated with 16 classes, for a total of 26377 annotated entities. The 16 classes are: PERSON, NAT_REL_POL, ORG, GPE, LOC, FACILITY, PRODUCT, EVENT, LANGUAGE, WORK_OF_ART, DATETIME, PERIOD, MONEY, QUANTITY, NUMERIC_VALUE and ORDINAL.

It is based on copyright-free text extracted from Southeast European Times (SETimes). The news portal has published "news and views from Southeast Europe" in ten languages, including Romanian. SETimes has been used in the past for several annotated corpora, including parallel corpora for machine translation. For RONEC we have used a hand-picked selection of sentences belonging to several categories (see table TABREF16 for stylistic examples).

The corpus contains the standard diacritics in Romanian: letters ș and ț are written with a comma, not with a cedilla (like ş and ţ). In Romanian, many older texts are written with cedillas instead of commas, because full Unicode support in Windows came much later than the classic extended ASCII, which only contained the cedilla letters.

The 16 classes are inspired by the OntoNotes5 corpus BIBREF7 as well as the ACE (Automatic Content Extraction) English Annotation Guidelines for Entities, Version 6.6, 2008.06.13 BIBREF8. Each class will be presented in detail, with examples, in section SECREF3. A summary of the available classes, with word counts for each, is available in table TABREF18.

The corpus is available in two formats: BRAT and CoNLL-U Plus. As the corpus was developed in the BRAT environment, it was natural to keep this format as-is. BRAT is an online environment for collaborative text annotation -- a web-based tool where several people can mark words, sub-word pieces and multiple-word expressions, can link them together by relations, etc. The back-end format is very simple: given a text file that contains raw sentences, in another text file every annotated entity is specified by the start/end character offsets as well as the entity type, one per line. RONEC is exported in the BRAT format, ready to use in the BRAT annotator itself. The corpus is pre-split into sub-folders and contains all the extra files (such as the entity list, etc.) needed to directly start an eventual edit/extension of the corpus.

Example raw (untokenized) sentences:

Tot în cadrul etapei a 2-a a avut loc întâlnirea Vardar Skopje - SC Pick Szeged, care s-a încheiat la egalitate, 24 - 24. I s-a decernat Premiul Nobel pentru literatură pe anul 1959.

Example annotation format:

T1 ORDINAL 21 26 a 2-a
T2 ORGANIZATION 50 63 Vardar Skopje
T3 ORGANIZATION 66 82 SC Pick Szeged
T4 NUMERIC_VALUE 116 118 24
T5 NUMERIC_VALUE 121 123 24
T6 DATETIME 175 184 anul 1959

The CoNLL-U Plus format extends the standard CoNLL-U, which is used to annotate sentences and in which many corpora are found today. The CoNLL-U format annotates one word per line with 10 distinct "columns" (tab separated):

1. ID: word index;
2. FORM: unmodified word from the sentence;
3. LEMMA: the word's lemma;
4. UPOS: universal part-of-speech tag;
5. XPOS: language-specific part-of-speech tag;
6. FEATS: list of morphological features from the universal feature inventory or from a defined language-specific extension;
7. HEAD: head of the current word, which is either a value of ID or zero;
8. DEPREL: universal dependency relation to the HEAD, or a defined language-specific subtype of one;
9. DEPS: enhanced dependency graph in the form of a list of head-deprel pairs;
10. MISC: miscellaneous annotations, such as space after word.

The CoNLL-U Plus format extends this by allowing a variable number of columns, with the restriction that the columns are to be defined in the header. For RONEC, we define our CoNLL-U Plus format as the standard 10 columns plus an extra column named RONEC:CLASS.
This column has the following format:

- each named entity has a distinct id in the sentence, starting from 1 (as an entity can span several words, all words that belong to it have the same id -- no relation to word indexes);
- the first word belonging to an entity also contains its class (e.g. the word "John" in the entity "John Smith" will be marked as "1:PERSON");
- a non-entity word is marked with an asterisk (*).

Table TABREF37 shows the CoNLL-U Plus format, where, for example, "a 2-a" is an ORDINAL entity spanning 3 words: the first word "a" is marked in this last column as "1:ORDINAL", while the following words just with the id "1". The CoNLL-U Plus format we provide was created as follows: (1) annotate the raw sentences using the NLP-Cube tool for Romanian (it provides everything from tokenization to parsing, filling in all attributes in columns 1-10); (2) align each token with the human-made entity annotations from the BRAT environment (the alignment is done automatically and is error-free) and fill in column 11.

For the English language we found two categories of NER annotations to be more prominent: CoNLL-style and ACE-style. Because CoNLL only annotates a few classes (depending on the corpora, starting from the basic three -- PERSON, ORGANIZATION and LOCATION -- up to seven), we chose to follow the ACE-style with its 18 different classes. After analyzing the ACE guide, we settled on 16 final classes that seemed more appropriate for Romanian, seen in table TABREF18. In the following subsections we describe each class in turn, with a few examples. Some examples have been left in Romanian, while some have been translated into English for the reader's convenience; in the examples at the end of each class description, English translations are given in parentheses for easier reading.

PERSON: Persons, including fictive characters. We also mark common nouns that refer to a person (or several), including pronouns (us, them, they), but not articles (e.g. in "an individual" we don't mark "an"). Positions are not marked unless they directly refer to the person: in "The presidential counselor has advised that a new counselor position is open", we mark "presidential counselor" because it refers to a person, but not the "counselor" at the end of the sentence, as it refers only to a position.
Locul doi i-a revenit româncei Otilia Aionesei, o elevă de 17 ani. (The second place was won by Otilia Aionesei, a 17 year old student.)
Ministrul bulgar pentru afaceri europene, Meglena Kuneva ... (The Bulgarian Minister for European Affairs, Meglena Kuneva ...)

NAT_REL_POL: These are nationalities or religious or political groups. We include words that indicate the nationality of a person, group or product/object. Generally, words marked as NAT_REL_POL are adjectives.
avionul american (the American airplane)
Grupul olandez (the Dutch group)
Grecii își vor alege președintele. (The Greeks will elect their president.)

ORGANIZATION: Companies, agencies, institutions, sports teams, groups of people. These entities must have an organizational structure. We only mark full organizational entities, not fragments, divisions or sub-structures.
Universitatea Politehnica București a decis ... (The Politehnic University of Bucharest has decided ...)
Adobe Inc. a lansat un nou produs. (Adobe Inc. has launched a new product.)

GPE: Geopolitical entities: countries, counties, cities, villages. GPE entities have all of the following components: (1) a population, (2) a well-defined governing/organizing structure, and (3) a physical location. GPE entities are not sub-entities (like a neighbourhood of a city).
Armin van Buuren s-a născut în Leiden. (Armin van Buuren was born in Leiden.)
USA rămâne indiferentă amenințărilor Coreei de Nord. (USA remains indifferent to North Korea's threats.)

LOC: Non-geopolitical locations: mountains, seas, lakes, streets, neighbourhoods, addresses, continents, regions that are not GPEs. We include regions such as the Middle East, continents like Central America, or East Europe. Such regions include multiple countries, each with its own government, and thus cannot be GPEs.
Pe DN7 Petroșani - Obârșia Lotrului carosabilul era umed, acoperit (cca 1 cm) cu zăpadă, iar de la Obârșia Lotrului la stațiunea Vidra stratul de zăpadă era de 5-6 cm. (On DN7 Petroșani - Obârșia Lotrului the road was wet, covered (about 1 cm) with snow, and from Obârșia Lotrului to Vidra resort the snow depth was around 5-6 cm.)
Produsele comercializate în Europa de Est au o calitate inferioară celor din vest. (Products sold in East Europe have a lower quality than those sold in the west.)

FACILITY: Buildings, airports, highways, bridges or other functional structures built by humans: buildings or other structures which house people, such as homes, factories, stadiums, office buildings, prisons, museums, tunnels, train stations, etc., named or not. Everything that falls within the architectural and civil engineering domains should be labeled as a FACILITY. We do not mark structures composed of multiple and distinct sub-structures, like a named area that is composed of several buildings, or micro-structures such as an apartment (as it is a unit of an apartment building). However, larger named functional structures can still be marked, such as terminal X of an airport.
Autostrada A2 a intrat în reparații pe o bandă, însă pe A1 nu au fost încă începute lucrările. (Repairs on one lane have commenced on the A2 highway, while on A1 no works have started yet.)
Aeroportul Henri Coandă ar putea să fie extins cu un nou terminal. (Henri Coandă Airport could be extended with a new terminal.)

PRODUCT: Objects, cars, food items, anything that is a product, including software (such as Photoshop, Word, etc.). We don't mark services or processes. With very few exceptions (such as software products), PRODUCT entities have to have physical form and be directly man-made. We don't mark entities such as credit cards, written proofs, etc. We don't include the producer's name unless it is embedded in the name of the product.
Mașina cumpărată este o Mazda. (The bought car is a Mazda.)
S-au cumpărat 5 Ford Taurus și 2 autobuze Volvo. (5 Ford Taurus and 2 Volvo buses have been acquired.)

EVENT: Named events: storms (e.g. "Sandy"), battles, wars, sports events, etc. We don't mark sports teams (they are ORGs) or non-specific matches (e.g. "Steaua-Rapid" will be marked as two separate ORGs even if it refers to a football match between the two teams, because the match is not specific). Events have to be significant, with at least national impact, not local.
Războiul cel Mare, Războiul Națiunilor, denumit în timpul celui de-Al Doilea Război Mondial Primul Război Mondial, a fost un conflict militar de dimensiuni mondiale. (The Great War, War of the Nations, as it was called during the Second World War, the First World War was a global-scale military conflict.)

LANGUAGE: This class represents all languages.
Românii din România vorbesc română. (Romanians from Romania speak Romanian.)
În Moldova se vorbește rusa și româna. (In Moldavia they speak Russian and Romanian.)

WORK_OF_ART: Books, songs, TV shows, pictures -- everything that is a work of art/culture created by humans. We mark just their name. We don't mark laws.
Accesul la Mona Lisa a fost temporar interzis vizitatorilor. (Access to Mona Lisa was temporarily forbidden to visitors.)
În această seară la "Vrei să Fii Miliardar" vom avea un invitat special. (This evening on "Who Wants To Be A Millionaire" we will have a special guest.)

DATETIME: Date and time values. We mark full constructions, not parts, if they refer to the same moment (e.g. a comma separates two distinct DATETIME entities only if they refer to distinct moments). If we have a well-specified period (e.g. "between 20-22 hours") we mark it as PERIOD; otherwise, less well-defined periods are marked as DATETIME (e.g. "last summer", "September", "Wednesday", "three days"). Ages are marked as DATETIME as well. Prepositions are not included.
Te rog să vii aici în cel mult o oră, nu mâine sau poimâine. (Please come here in one hour at most, not tomorrow or the next day.)
Actul s-a semnat la orele 16. (The paper was signed at 16 hours.)
August este o lună secetoasă. (August is a dry month.)
Pe data de 20 martie, între orele 20-22, va fi oprită alimentarea cu curent. (On the 20th of March, between 20-22 hours, electricity will be cut off.)

PERIOD: Periods/time intervals. Periods have to be very well marked in text: if a period is not of the form "a-b", then it is a DATETIME.
Spectacolul are loc între 1 și 3 aprilie. (The show takes place between 1 and 3 April.)
În prima jumătate a lunii iunie va avea loc evenimentul de două zile. (In the first half of June the two-day event will take place.)

MONEY: Money, monetary values, including units (e.g. USD, RON, lei, francs, pounds, Euro, etc.), written with numbers or letters. Entities that contain any monetary reference, including measuring units, will be marked as MONEY (e.g. 10$/sqm, 50 lei per hour). Words that are not clear values will not be marked, such as "an amount of money", "he received a coin".
Primarul a semnat un contract în valoare de 10 milioane lei noi, echivalentul a aproape 2,6 mil. EUR. (The mayor signed a contract worth 10 million new lei, the equivalent of almost 2.6 mil. EUR.)

QUANTITY: Measurements, such as weight, distance, etc. Any type of quantity belongs in this class.
Conducătorul auto avea peste 1 g/ml alcool în sânge, fiind oprit deoarece a fost prins cu peste 120 km/h în localitate. (The car driver had over 1 g/ml blood alcohol, and was stopped because he was caught speeding with over 120 km/h in the city.)

NUMERIC_VALUE: Any numeric value (including phone numbers), written with letters or numbers or as percents, which is not MONEY, QUANTITY or ORDINAL.
Raportul XII/2 arată 4 552 de investitori, iar structura de portofoliu este: cont curent 0,05%, certificate de trezorerie 66,96%, depozite bancare 13,53%, obligațiuni municipale 19,46%. (The XII/2 report shows 4 552 investors, and the portfolio structure is: current account 0.05%, treasury bonds 66.96%, bank deposits 13.53%, municipal bonds 19.46%.)

ORDINAL: The first, the second, last, 30th, etc. An ordinal must imply an order relation between elements. For example, "second grade" does not involve a direct order relation; it indicates just a succession of grades in a school system.
Primul loc a fost ocupat de echipa Germaniei. (The first place was won by Germany's team.)

The corpus creation process involved a small number of people who voluntarily joined the initiative, with the authors of this paper directing the work. Initially, we searched for NER resources in Romanian and found none. Then we looked at English resources and read the in-depth ACE guide, out of which a 16-class draft evolved. We then identified a copyright-free text from which we hand-picked sentences, to maximize the amount of entities while maintaining style balance. The annotation process was trial-and-error, with cycles composed of annotation, discussing confusing entities, updating the annotation guide schematic and going through the corpus section again to correct entities following guide changes. The annotation process was done online, in BRAT. The actual annotation involved 4 people, took about 6 months (as the work was volunteer-based, we could not have expected 100% time commitment from the people involved) and followed these steps:

1. Each person annotated the full corpus. This included the cycles of shaping up the annotation guide and re-annotation. Inter-annotator agreement (ITA) at this point was relatively low, at 60-70%, especially for a number of classes.
2. We then automatically merged all annotations, with the following criterion: if 3 of the 4 annotators agreed on an entity (class & start/stop), it would go in unchanged; otherwise the entity (longest span) was marked as CONFLICTED.
3. Two teams were created, each with two persons. Each team annotated the full corpus again, starting from the previous step. At this point, class-average ITA had risen to over 85%.
4. Next, the same automatic merging happened; this time entities remained unchanged only if both annotations agreed.
5. Finally, one of the authors went through the full corpus one more time, correcting disagreements.

We would like to make a few notes regarding classes and inter-annotator agreement:

- Classes like ORGANIZATION, NAT_REL_POL, LANGUAGE or GPE have the highest ITA, over 98%. They are pretty clear and distinct from other classes.
- The DATETIME class also has a high ITA, with some overlap with PERIOD: annotators could fall back, if they were not sure that an expression was a PERIOD, and simply mark it as DATETIME.
- WORK_OF_ART and EVENT caused some problems because the scope could not be properly defined from just one sentence. For example, a fair in a city could be a local event, but could also be a national periodic event.
- MONEY, QUANTITY and ORDINAL are all more specific classes than NUMERIC_VALUE. So, in cases where a numeric value has a unit of measure next to it, it should become a QUANTITY, not a NUMERIC_VALUE. However, this specificity created some confusion between these classes, just like with DATETIME and PERIOD.
- The ORDINAL class is a bit ambiguous because, even though it ranks "higher" than NUMERIC_VALUE, it is the least diverse, most of the entities following the same patterns.
- The PRODUCT and FACILITY classes have the lowest ITA by far: less than 40% in the first annotation cycle, less than 70% in the second. We actually considered removing these classes from the annotation process, but, to mimic the OntoNotes classes as much as possible, we decided to keep them in. There were many cases where the annotators disagreed about the scope of words being facilities or products. Even in the ACE guidelines these two classes are not very well documented with examples of what is and what is not a PRODUCT or FACILITY. Considering that these classes are, in our opinion, of the lowest importance among all the classes, a lower ITA was accepted.

Finally, we would like to address the semantic scope of the entities: for example, for the class PERSON we do not annotate only proper nouns (NPs), but basically any reference to a person (e.g. through pronouns like "she", job position titles, common nouns such as "father", etc.). We do this because we would like a high-coverage corpus, where entities are marked more semantically than syntactically -- in the same way ACE entities are more encompassing than CoNLL entities. We note that, for example, if one would like strict proper-noun entities, it is very easy to extract from a PERSON multi-word entity only those words which are syntactically marked by any tagger as NPs.

We have presented RONEC -- the first Named Entity Corpus for the Romanian language. At its current version, in its 5127 sentences we have 26377 annotated entities in 16 different classes. The corpus is based on copyright-free text, and is released as open-source, free to use and extend. We hope that, in time, this corpus will grow in size and mature towards a strong resource for Romanian. For this to happen, we have released the corpus in two formats: CoNLL-U Plus, which is a text-based, tab-separated, pre-tokenized and annotated format that is simple to use, and BRAT, which is practically plug-and-play into the BRAT web annotation tool, where anybody can add and annotate new sentences. Also, in the GitHub repo there are automatic alignment and conversion scripts to and from the two formats, so the corpus can easily be converted between them. Finally, we have also provided an annotation guide that we will improve and, in time, evolve into a full annotation document like the ACE Annotation Guidelines for Entities V6.6 BIBREF8. </s> |
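Because the BRAT standoff format described above is line-based, loading RONEC's annotations takes only a few lines of code. The following is a minimal sketch -- not the repository's own conversion script -- assuming a .txt file with the raw text and a .ann file with contiguous-span, tab-separated lines of the form "T1<TAB>ORDINAL 21 26<TAB>a 2-a":

def load_brat(txt_path, ann_path):
    # Raw text; the entity offsets below index into this string.
    with open(txt_path, encoding="utf-8") as f:
        text = f.read()
    entities = []
    with open(ann_path, encoding="utf-8") as f:
        for line in f:
            if not line.startswith("T"):  # keep only text-bound annotations
                continue
            tid, type_and_span, surface = line.rstrip("\n").split("\t")
            etype, start, end = type_and_span.split()[:3]
            start, end = int(start), int(end)
            # Sanity check: the stored surface form must match the offsets.
            assert text[start:end] == surface
            entities.append((tid, etype, start, end, surface))
    return text, entities

For the example above, load_brat would return entries such as ("T1", "ORDINAL", 21, 26, "a 2-a"), which can then be aligned against tokenized text to fill in the RONEC:CLASS column.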
<s>[INST] A General-Purpose Tagger with Convolutional Neural Networks [/INST] Recently character composition models have shown great success in many NLP tasks mainly because of their robustness in dealing with outofvocabulary OOV words by capturing subword informations Among the character composition models bidirectional long shortterm memory LSTM models and convolutional neural networks CNN are widely applied in many tasks eg partofspeech POS tagging BIBREF0 BIBREF1 named entity recognition BIBREF2 language modeling BIBREF3 BIBREF4 machine translation BIBREF5 and dependency parsing BIBREF6 BIBREF7 In this paper we present a stateoftheart generalpurpose tagger that uses CNNs both to compose word representations from characters and to encode context information for tagging We show that the CNN model is more capable than the LSTM model for both functions and more stable for unseen or unnormalized words which is the main benefit of character composition models Yu2017 compared the performance of CNN and LSTM as character composition model for dependency parsing and concluded that CNN performs better than LSTM In this paper we show that this is also the case for POS tagging Furthermore we extend the scope to morphological tagging and supertagging in which the tag set is much larger and longdistance dependencies between words are more important In these three tagging tasks we compare our tagger with the bilstmaux tagger BIBREF1 and the CRFbased morphological tagger MarMot BIBREF8 The CNN tagger shows robust performance accross the three tasks and achieves the highest average accuracy in all tasks It significantly outperforms LSTM in morphological tagging and outperforms both baselines in supertagging by a large margin To test the robustness of the taggers against the OOV problem we also conduct experiments using artificially constructed unnormalized text by corrupting words in the normal dev set Again the CNN tagger outperforms the two baselines by a very large margin Therefore we conclude that our CNN tagger is a robust stateoftheart generalpurpose tagger that can effectively compose word representation from characters and encode context information Our proposed CNN tagger has two main components the character composition model and the context encoding model Both components are essentially CNN models capturing different levels of information the first CNN captures morphological information from character ngrams the second one captures contextual information from word ngrams Figure FIGREF2 shows a diagram of both models of the tagger The character composition model is similar to Yu2017 where several convolution filters are used to capture character ngrams of different sizes The outputs of each convolution filter are fed through a max pooling layer and the pooling outputs are concatenated to represent the word The context encoding model captures the context information of the target word by scanning through the word representations of its context window The word representation could be only word embeddings INLINEFORM0 only composed vectors INLINEFORM1 or the concatenation of both INLINEFORM2 A context window consists of N words to both sides of the target word and the target word itself To indicate the target word we concatenate a binary feature to each of the word representations with 1 indicating the target and 0 otherwise similar to Vu2016 Additional to the binary feature we also concatenate a position embedding to encode the relative position of each context word similar to Gehring2017 For 
the character composition model we take a fixed input size of 32 characters for each word with padding on both sides or cutting from the middle if needed We apply four convolution filters with sizes of 3 5 7 and 9 Each filter has an output channel of 25 dimensions thus the composed vector is 100dimensional We apply Gaussian noise with standard deviation of 01 is applied on the composed vector during training For the context encoding model we take a context window of 15 7 words to both sides of the target word as input and predict the tag of the target word We also apply four convolution filters with sizes of 2 3 4 and 5 each filter is stacked by another filter with the same size and the output has 128 dimensions thus the context representation is 512dimensional We apply one 512dimensional hidden layer with ReLU nonlinearity before the prediction layer We apply dropout with probability of 01 after the hidden layer during training The model is trained with averaged stochastic gradient descent with a learning rate of 01 momentum of 09 and minibatch size of 100 We apply L2 regularization with a rate of INLINEFORM0 on all the parameters of the network except the embeddings We use treebanks from version 12 of Universal Dependencies UD and in the case of several treebanks for one language we only use the canonical treebank There are in total 22 treebanks as in Plank2016 Each treebank splits into train dev and test sets we use the dev sets for early stop and test on the test sets We evaluate our method on three tagging tasks POS tagging Pos morphological tagging Morph and supertagging Stag For POS tagging we use Universal POS tags which is an extension of Petrov2012 The universal tag set tries to capture the universal properties of words and facilitate crosslingual learning Therefore the tag set is very coarse and leaves out most of the languagespecific properties to morphological features Morphological tags encode the languagespecific morphological features of the words eg number gender case They are represented in the UD treebanks as one string which contains several keyvalue pairs of morphological features Supertags BIBREF9 are tags that encode more syntactic information than standard POS tags eg the head direction or the subcategorization frame We use dependencybased supertags BIBREF10 which are extracted from the dependency treebanks Adding such tags into feature models of statistical dependency parsers significantly improves their performance BIBREF11 BIBREF12 Supertags can be designed with different levels of granularity We use the standard Model 1 from Ouchi2014 where each tag consists of head direction dependency label and dependent direction Even with the basic supertag model the Stag task is more difficult than Pos and Morph because it generally requires taking longdistance dependencies between words into consideration We select these tasks as examples for tagging applications because they differ strongly in tag set sizes Generally the Pos set sizes for all the languages are no more than 17 and Stag set sizes are around 200 When treating morphological features as a string ie not splitting into keyvalue pairs the sizes of the Morph tag sets range from about 100 up to 2000 As baselines to our models we take the two stateoftheart taggers MarMot denoted as CRF and bilstmaux denoted as LSTM We train the taggers with the recommended hyperparameters from the documentation To ensure a fair comparison especially between LSTM and CNN we generally treat the three tasks equally and do not apply 
taskspecific tuning on them ie using the same features and same model hyperparameters in each single task Also we do not use any pretrained word embeddings For the LSTM tagger we use the recommended hyperparameters in the documentation including 64dimensional word embeddings INLINEFORM0 and 100dimensional composed vectors INLINEFORM1 We train the INLINEFORM2 INLINEFORM3 and INLINEFORM4 models as in Plank2016 We train the CNN taggers with the same dimensionalities for word representations For the CRF tagger we predict Pos and Morph jointly as in the standard setting for MarMot which performs much better than with separate predictions as shown in Mueller2013 and in our preliminary experiments Also it splits the morphological tags into keyvalue pairs whereas the neural taggers treat the whole string as a tag We predict Stag as a separate task The test results for the three tasks are shown in Table TABREF17 in three groups The first group of seven columns are the results for Pos where both LSTM and CNN have three variations of input features word only INLINEFORM0 character only INLINEFORM1 and both INLINEFORM2 For Morph and Stag we only use the INLINEFORM3 setting for both LSTM and CNN On macroaverage three taggers perform close in the Pos task with the CNN tagger being slightly better In the Morph task CNN is again slightly ahead of CRF while LSTM is about 2 points behind In the Stag task CNN outperforms both taggers by a large margin 2 points higher than LSTM and 8 points higher than CRF While considering the input features of the LSTM and CNN taggers both taggers perform close with only INLINEFORM0 as input which suggests that the two taggers are comparable in encoding context for tagging Pos However with only INLINEFORM1 CNN performs much better than LSTM 9554 vs 9261 and close to INLINEFORM2 9618 Also INLINEFORM3 consistently outperforms INLINEFORM4 for all languages This suggests that the CNN model alone is capable of learning most of the information that the wordlevel model can learn while the LSTM model is not The more interesting cases are Morph and Stag where CNN performs much higher than LSTM We hypothesize three possible reasons to explain the considerably large difference First the LSTM tagger may be more sensitive to hyperparameters and requires task specific tuning We use the same setting which is tuned for the Pos task thus it underperforms in the other tasks Second the LSTM tagger may not deal well with large tag sets The tag set size for Morph are larger than Pos in orders of magnitudes especially for Czech Basque Finnish and Slovene all of which have more than 1000 distinct Morph tags in the training data and the LSTM performs poorly on these languages Third the LSTM has theoretically unlimited access to all the tokens in the sentence but in practice it might not learn the context as good as the CNN In the LSTM model the information of longdistance contexts will gradually fade away during the recurrence whereas in the CNN model all words are treated equally as long as they are in the context window Therefore the LSTM underperforms in the Stag task where the information from longdistance context is more important It is a common scenario to use a model trained with news data to process text from social media which could include intentional or unintentional misspellings Unfortunately we do not have social media data to test the taggers However we design an experiment to simulate unnormalized text by systematically editing the words in the dev sets with three operations 
insertion deletion and substitution For example if we modify a word abcdef at position 2 0based the modified words would be abxcdef abdef and abxdef where x is a random character from the alphabet of the language For each operation we create a group of modified dev sets where all words longer than two characters are edited by the operation with a probability of 025 05 075 or 1 For each language we use the models trained on the normal training sets and predict Pos for the three groups of modified dev set The average accuracies are shown in Figure FIGREF19 Generally all models suffer from the increasing degrees of unnormalized texts but CNN always suffers the least In the extreme case where almost all words are unnormalized CNN performs 4 to 8 points higher than LSTM and 4 to 11 points higher than CRF This suggests that the CNN is more robust to misspelt words While looking into the specific cases of misspelling CNN is more sensitive to insertion and deletion while CRF and LSTM are more sensitive to substitution In this paper we propose a generalpurpose tagger that uses two CNNs for both character composition and context encoding On the universal dependency treebanks v12 the tagger achieves stateoftheart results for POS tagging and morphological tagging and to the best of our knowledge it also performs best for supertagging The tagger works well across different tagging tasks without tuning the hyperparameters and it is also robust against unnormalized text </s> |
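To make the character composition model described above concrete, here is a minimal PyTorch sketch of that component alone. It is a reconstruction from the hyperparameters stated in the text (four convolution filters of sizes 3, 5, 7 and 9, each with 25 output channels, over a fixed input of 32 characters), not the authors' released code; the character embedding size of 50 is an assumption, and the Gaussian noise applied during training is omitted for brevity:

import torch
import torch.nn as nn

class CharCNN(nn.Module):
    """Composes a 100-dimensional word vector from its characters."""
    def __init__(self, n_chars, char_dim=50, max_len=32):
        super().__init__()
        self.embed = nn.Embedding(n_chars, char_dim, padding_idx=0)
        # One Conv1d per n-gram size; 25 output channels each -> 4 * 25 = 100 dims.
        self.convs = nn.ModuleList(
            nn.Conv1d(char_dim, 25, kernel_size=k) for k in (3, 5, 7, 9)
        )
        self.max_len = max_len

    def forward(self, char_ids):
        # char_ids: (batch, 32) character indices, padded or cut to length 32.
        x = self.embed(char_ids).transpose(1, 2)      # (batch, char_dim, 32)
        # Max pooling over each filter's outputs, then concatenate.
        pooled = [conv(x).max(dim=2).values for conv in self.convs]
        return torch.cat(pooled, dim=1)               # (batch, 100)

words = torch.randint(1, 100, (8, 32))                # a batch of 8 dummy words
print(CharCNN(n_chars=100)(words).shape)              # torch.Size([8, 100])

In the full tagger, these 100-dimensional composed vectors would be concatenated with word embeddings and fed, over a 15-word window, into the stacked context-encoding convolutions described in the paper.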
<s>[INST] Towards Machine Comprehension of Spoken Content: Initial TOEFL Listening Comprehension Test by Machine [/INST] With the popularity of shared videos social networks online course etc the quantity of multimedia or spoken content is growing much faster beyond what human beings can view or listen to Accessing large collections of multimedia or spoken content is difficult and timeconsuming for humans even if these materials are more attractive for humans than plain text information Hence it will be great if the machine can automatically listen to and understand the spoken content and even visualize the key information for humans This paper presents an initial attempt towards the above goal machine comprehension of spoken content In an initial task we wish the machine can listen to and understand an audio story and answer the questions related to that audio content TOEFL listening comprehension test is for human English learners whose native language is not English This paper reports how todays machine can perform with such a test The listening comprehension task considered here is highly related to Spoken Question Answering SQA BIBREF0 BIBREF1 In SQA when the users enter questions in either text or spoken form the machine needs to find the answer from some audio files SQA usually worked with ASR transcripts of the spoken content and used information retrieval IR techniques BIBREF2 or relied on knowledge bases BIBREF3 to find the proper answer Sibyl BIBREF4 a factoid SQA system used some IR techniques and utilized several levels of linguistic information to deal with the task Question Answering in Speech Transcripts QAST BIBREF5 BIBREF6 BIBREF7 has been a wellknown evaluation program of SQA for years However most previous works on SQA mainly focused on factoid questions like What is name of the highest mountain in Taiwan Sometimes this kind of questions may be correctly answered by simply extracting the key terms from a properly chosen utterance without understanding the given spoken content More difficult questions that cannot be answered without understanding the whole spoken content seemed rarely dealt with previously With the fast development of deep learning neural networks have successfully applied to speech recognition BIBREF8 BIBREF9 BIBREF10 or NLP tasks BIBREF11 BIBREF12 A number of recent efforts have explored various ways to understand multimedia in text form BIBREF13 BIBREF14 BIBREF15 BIBREF16 BIBREF17 BIBREF18 They incorporated attention mechanisms BIBREF16 with Long ShortTerm Memory based networks BIBREF19 In Question Answering field most of the works focused on understanding text documents BIBREF20 BIBREF21 BIBREF22 BIBREF23 Even though BIBREF24 tried to answer the question related to the movie they only used the text and image in the movie for that It seems that none of them have studied and focused on comprehension of spoken content yet In this paper we develop and propose a new task of machine comprehension of spoken content which was never mentioned before to our knowledge We take TOEFL listening comprehension test as an corpus for this work TOEFL is an English examination which tests the knowledge and skills of academic English for English learners whose native languages is not English In this examination the subjects would first listen to an audio story around five minutes and then answer several question according to that story The story is related to the college life such as conversation between the student and the professor or a lecture in the class Each 
question has four choices where only one is correct An real example in the TOEFL examination is shown in Fig 1 The upper part is the manual transcription of a small part of the audio story The questions and four choices are listed too The correct choice to the question in Fig 1 is choice A The questions in TOEFL are not simple even for a human with relatively good knowledge because the question cannot be answered by simply matching the words in the question and in the choices with those in the story and key information is usually buried by many irrelevant utterances To answer the questions like Why does the student go to professors office the listeners have to understand the whole audio story and draw the inferences to answer the question correctly As a result this task is believed to be very challenging for the stateoftheart spoken language understanding technologies We propose a listening comprehension model for the task defined above the Attentionbased Multihop Recurrent Neural Network AMRNN framework and show that this model is able to perform reasonably well for the task In the proposed approach the audio of the stories is first transcribed into text by ASR and the proposed model is developed to process the transcriptions for selecting the correct answer out of 4 choices given the question The initial experiments showed that the proposed model achieves encouraging scores on the TOEFL listening comprehension test The attentionmechanism proposed in this paper can be applied on either word or sentence levels We found that sentencelevel attention achieved better results on the manual transcriptions without ASR errors but wordlevel attention outperformed the sentencelevel on ASR transcriptions with errors The overall structure of the proposed model is in Fig 2 The input of model includes the transcriptions of an audio story a question and four answer choices all represented as word sequences The word sequence of the input question is first represented as a question vector VQ in Section Experiments With the question vector VQ the attention mechanism is applied to extract the questionrelated information from the story in Section Story Attention Module The machine then goes through the story by the attention mechanism several times and obtain an answer selection vector VQn in Section Hopping This answer selection vector VQn is finally used to evaluate the confidence of each choice in Section Answer Selection and the choice with the highest score is taken as the output All the model parameters in the above procedure are jointly trained with the target where 1 for the correct choice and 0 otherwise Fig 3 A shows the procedure of encoding the input question into a vector representation VQ The input question is a sequence of T words w1w2wT every word Wi represented in 1OfN encoding A bidirectional Gated Recurrent Unit GRU network BIBREF25 BIBREF26 BIBREF27 takes one word from the input question sequentially at a time In Fig 3 A the hidden layer output of the forward GRU green rectangle at time index t is denoted by yft and that of the backward GRU blue rectangle is by ybt After looking through all the words in the question the hidden layer output of forward GRU network at the last time index yfT and that of backward GRU network at the first time index yb1 are concatenated to form the question vector representation VQ or VQ yfT Vert yb1 Fig 3 B shows the attention mechanism which takes the question vector VQ obtained in Fig 3 A and the story transcriptions as the input to encode the whole story 
into a story vector representation VS The story transcription is a very long word sequence with many sentences so we only show two sentences each with 4 words for simplicity There is a bidirectional GRU in Fig 3 B encoding the whole story into a story vector representation VS The word vector representation of the t th word St is constructed by concatenating the hidden layer outputs of forward and backward GRU networks that is St yft Vert ybt Then the attention value alpha t for each time index t is the cosine similarity between the question vector VQ and the word vector representation St of each word VS0 With attention values VS2 there can be two different attention mechanisms wordlevel and sentencelevel to encode the whole story into the story vector representations VS3 Wordlevel Attention We normalize all the attention values alpha t into alpha tprime such that they sum to one over the whole story Then all the word vector St from the bidirectional GRU network for every word in the story are weighted with this normalized attention value alpha tprime and sum to give the story vector that is VS sum talpha tprime St Sentencelevel Attention Sentencelevel attention means the model collects the information only at the end of each sentence Therefore the normalization is only performed over those words at the end of the sentences to obtain alpha tprime prime The story vector representation is then VS sum teosalpha tprime prime St where only those words at the end of sentences eos contribute to the weighted sum So VS alpha 4prime prime S4 alpha 8prime prime S8 in the example of the Fig 3 The overall picture of the proposed model is shown in Fig 2 in which Fig 3 A and B are component modules labeled as Fig 3 A and B of the complete proposed model In the left of Fig 2 the input question is first converted into a question vector VQ0 by the module in Fig 3 A This VQ0 is used to compute the attention values alpha t to obtain the story vector VS1 by the module in Fig 3 B Then VQ0 and VS1 are summed to form a new question vector VQ1 This process is called the first hop hop 1 in Fig 2 The output of the first hop VQ1 can be used to compute the new attention to obtain a new story vector VS1 This can be considered as the machine going over the story again to refocus the story with a new question vector Again VQ1 and VQ00 are summed to form VQ01 hop 2 After VQ02 hops VQ03 should be predefined the output of the last hop VQ04 is used for the answer selection in the Section Answer Selection As in the upper part of Fig 2 the same way previously used to encode the question into VQ in Fig 3 A is used here to encode four choice into choice vector representations VA VB VC VD Then the cosine similarity between the output of the last hop VQn and the choice vectors are computed and the choice with highest similarity is chosen bullet Dataset Collection The collected TOEFL dataset included 963 examples in total 717 for training 124 for validation 122 for testing Each example included a story a question and 4 choices Besides the audio recording of each story the manual transcriptions of the story are also available We used a pydub library BIBREF28 to segment the full audio recording into utterances Each audio recording has 579 utterances in average There are in average 6577 words in a story 1201 words in question and 1035 words in each choice bullet Speech Recognition We used the CMU speech recognizer Sphinx BIBREF29 to transcribe the audio story The recognition word error rate WER was 3432 bullet Preprocessing We used a 
pretrained 300 dimension glove vector model BIBREF30 to obtain the vector representation for each word Each utterance in the stories question and each choice can be represented as a fixed length vector by adding the vectors of the all component words Before training we pruned the utterances in the story whose vector representation has cosine distance far from the questions The percentage of the pruned utterances was determined by the performance of the model on the development set The vector representations of utterances questions and choices were only used in this preprocessing stage and the baseline approaches in Section Baselines not used in the proposed model bullet Training Details The size of the hidden layer for both the forward and backward GRU networks were 128 All the bidirectional GRU networks in the proposed model shared the same set of parameters to avoid overfitting We used RmsProp BIBREF31 with initial learning rate of 1e5 with momentum 09 Dropout rate was 02 Batch size was 40 The number of hop was tuned from 1 to 3 by development set We compared the proposed model with some commonly used simple baselines in BIBREF24 and the memory network BIBREF16 bullet Choice Length The most naive baseline is to select the choices based on the number of words in it without listening to the stories and looking at the questions This included i selecting the longest choice ii selecting the shortest choice or iii selecting the choice with the length most different from the rest choices bullet WithinChoices similarity With the vector representations for the choices in preprocessing of Section Experimental Setup we computed the cosine distance among the four choices and selected the one which is i the most similar to or ii the most different from the others bullet Question and Choice Similarity With the vector representations for the choices and questions in preprocessing of Section Experimental Setup the choice with the highest cosine similarity to the question is selected bullet Sliding Window BIBREF24 BIBREF32 This model try to found a window of W utterances in the story with the maximum similarity to the question The similarity between a window of utterances and a question was the averaged cosine similarity of the utterances in the window and the question by their glove vector representation After obtaining the window with the largest cosine similarity to the question the confidence score of each choice is the average cosine similarity between the utterances in the window and the choice The choice with the highest score is selected as the answer bullet Memory Network BIBREF16 We implemented the memory network with some modifications for this task to find out if memory network was able to deal it The original memory network didnt have the embedding module for the choices so we used the module for question in the memory network to embed the choices Besides in order to have the memory network select the answer out of four choices instead of outputting a word in its original version we computed the cosine similarity between the the output of the last hop and the choices to select the closest choice as the answer We shared all the parameters of embedding layers in the memory network for avoiding overfitting Without this modification very poor results were obtained on the testing set The embedding size of the memory network was set 128 stochastic gradient descent was used as BIBREF16 with initial learning rate of 001 Batch size was 40 The size of hop was tuned from 1 to 3 by development set We 
used the accuracy (number of questions answered correctly / total number of questions) as our evaluation metric. The results are shown in Table 1. We trained the model on the manual transcriptions of the stories, while testing it on the testing set with both manual transcriptions (column labelled "Manual") and ASR transcriptions (column labelled "ASR").

- Choice Length: Part (a) shows the performance of the three models that select the answer with the longest, shortest or most different length, ranging from 23% to 35%.
- Within-Choices Similarity: Part (b) shows the performance of the two models that select the choice which is most similar to, or most different from, the others. The accuracies are 36.09% and 27.87%, respectively.
- Question and Choice Similarity: In part (c), selecting the choice most similar to the question yielded only 24.59%, very close to a random guess.
- Sliding Window: Part (d), the sliding window, is the first baseline model that considers the transcription of the stories. We tried window sizes of 1, 2, 3, 5, 10, 15, 20 and 30, and found the best window size to be 5 on the development set. This implies the useful information for answering a question probably lies within 5 sentences. The performance of 31.15% and 33.61% with and without ASR errors, respectively, shows how ASR errors affected the results, and that the task here is too difficult for this approach to achieve good results.
- Memory Network: The results of the memory network in part (e) show this task is relatively difficult for it, even though the memory network was successful in some other tasks. However, the performance of 39.17% accuracy was clearly better than all approaches mentioned above, and it is interesting that this result was independent of the ASR errors; the reason is under investigation. The performance was 31% accuracy when we did not use the shared embedding layer in the memory network.
- AMRNN model: The results of the proposed model are listed in part (f), for the attention mechanism at the word level and the sentence level, respectively. Without ASR errors, the proposed model with sentence-level attention gave an accuracy as high as 51.67%, and slightly lower for word-level attention. It is interesting that, without ASR errors, sentence-level attention is about 2.5% higher than word-level attention, very possibly because getting the information from the whole sentence is more useful than listening carefully to every word, especially for the conceptual and high-level questions in this task; paying too much attention to every single word may be a bit noisy. On the other hand, the 34.32% ASR errors affected the model at the sentence level more than at the word level. This is very possibly because incorrectly recognized words may seriously change the meaning of whole sentences, whereas with word-level attention, when a word is incorrectly recognized, the model may be able to attend to other correctly recognized words to compensate for ASR errors and still come up with the correct answer.

Fig 4 shows the visualization of the attention weights obtained for a typical example story in the testing set, with the proposed AMRNN model using word-level or sentence-level attention, on manual or ASR transcriptions, respectively. The darker the color, the higher the weights. Only a small part of the story is shown, where the response of the model made a clear difference. This story was mainly talking about the thick cloud and some mysteries on Venus. The question for this story is "What is a possible origin of Venus' clouds?" and the correct choice is "Gases released as a result of volcanic activity". In the
manual transcriptions cases left half of Fig 4 both models with wordlevel or sentencelevel attention answered the question right and focused on the core and informative wordssentences to the question The sentencelevel model successfully captured the sentence including volcanic eruptions often omits gases while the wordlevel model captured some important key words like volcanic eruptions emit gases However in ASR cases right half of Fig 4 the ASR errors misled both models to put some attention on some irrelevant wordssentences The sentencelevel model focus on the irrelevant sentence In other area you got canyons while the wordlevel model focused on some irrelevant words canyons rift malaise but still capture some correct important words like volcanic or eruptions to answer correctly By the darkness of the color we can observe that the problem caused by ASR errors was more serious for the sentencelevel attention when capturing the key concepts needed for the question This may explain why in part f of Table 1 we find degradation caused by ASR errors was less for wordlevel model than for sentencelevel model In this paper we create a new task with the TOEFL corpus TOEFL is an English examination where the English learner is asked to listen to a story up to 5 minutes and then answer some corresponding questions The learner needs to do deduction logic and summarization for answering the question We built a model which is able to deal with this challenging task On manual transcriptions the proposed model achieved 5156 accuracy while the very capable memory network got only 3917 accuracy Even on ASR transcriptions with WER of 3432 the proposed model still yielded 4833 accuracy We also found that although sentencelevel attention achieved the best results on the manual transcription wordlevel attention outperformed the sentencelevel when there were ASR errors </s> |
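Once the GRU encoders have produced the question vector VQ and the per-word story vectors S_t, the attention-and-hopping mechanism of the AMRNN model described above reduces to a few vector operations. The following is a minimal numpy sketch of word-level attention with multiple hops (the GRU encoders themselves and sentence-level pooling are omitted, and the data here is random, purely for illustration):

import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def amrnn_answer(VQ, S, choices, n_hops=2):
    """VQ: (d,) question vector; S: (T, d) story word vectors;
    choices: list of (d,) choice vectors. Returns index of best choice."""
    for _ in range(n_hops):
        # Word-level attention: cosine similarities, normalized to sum to one.
        alpha = np.array([cosine(VQ, s) for s in S])
        alpha = alpha / (alpha.sum() + 1e-8)
        VS = (alpha[:, None] * S).sum(axis=0)  # attention-weighted story vector
        VQ = VQ + VS                           # hop: refocus with a new question vector
    scores = [cosine(VQ, c) for c in choices]
    return int(np.argmax(scores))

d, T = 128, 40
rng = np.random.default_rng(0)
VQ, S = rng.normal(size=d), rng.normal(size=(T, d))
choices = [rng.normal(size=d) for _ in range(4)]
print(amrnn_answer(VQ, S, choices))

For sentence-level attention, the normalization and weighted sum would simply be restricted to the S_t at end-of-sentence positions, as described in the paper.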
<s>[INST] Principles for Developing a Knowledge Graph of Interlinked Events from News Headlines on Twitter [/INST] Several successful efforts have led to publishing huge RDF Resource Description Framework datasets on Linked Open Data LOD such as DBpedia BIBREF0 and LinkedGeoData BIBREF1 However these sources are limited to either structured or semistructured data So far a significant portion of the Web content consists of textual data from social network feeds blogs news logs etc Although the Natural Language Processing NLP community has developed approaches to extract essential information from plain text eg BIBREF2 BIBREF3 BIBREF4 there is convenient support for knowledge graph construction Further several lexical analysis based approaches extract only a limited form of metadata that is inadequate for supporting applications such as question answering systems For example the query Give me the list of reported events by BBC and CNN about the number of killed people in Yemen in the last four days about a recent event containing restrictions such as location and time poses several challenges to the current state of Linked Data and relevant information extraction techniques The query seeks fresh information eg last four days whereas the current version of Linked Data is encyclopedic and historical and does not contain appropriate information present in a temporally annotated data stream Further the query specifies provenance eg published by BBC and CNN that might not always be available on Linked Data Crucially the example query asks about a specific type of event ie reports of war caused killing people with multiple arguments eg in this case location argument occurred in Yemen In spite of recent progress BIBREF5 BIBREF6 BIBREF7 there is still no standardized mechanism for i selecting background data model ii recognizing and classifying specific event types iii identifying and labeling associated arguments ie entities as well as relations iv interlinking events and v representing events In fact most of the stateoftheart solutions are ad hoc and limited In this paper we provide a systematic pipeline for developing knowledge graph of interlinked events As a proofofconcept we show a case study of headline news on Twitter The main contributions of this paper include The remainder of this paper is organized as follows Section SECREF2 is dedicated to notation and problem statement Section SECREF3 outlines the required steps for developing a knowledge graph of interlinked events Section SECREF4 frames our contribution in the context of related work Section SECREF5 concludes the paper with suggestions for future work A tweet of a news headline contains a sequence of words INLINEFORM0 tabtweetsamples provides samples of news headlines on Twitter with provenance information such as publisher and publishing date These were sampled for the type of embedded event discussed below We aim to create an RDF knowledge base for such news headlines An RDF knowledge base INLINEFORM1 consists of a set of triples INLINEFORM2 where INLINEFORM3 is the union of all RDF resources INLINEFORM4 are respectively a set of classes properties and instances and INLINEFORM5 is a set of literals INLINEFORM6 We aim to extract rich set of triples INLINEFORM7 from each tweet INLINEFORM8 in the stream of news headline tweets as discussed below and populate an event knowledge graph INLINEFORM9 Formally the extraction task can be captured as INLINEFORM10 where INLINEFORM11 is the stream of news headline tweets and INLINEFORM12 is a 
Formally, the extraction task can be captured as INLINEFORM10, where INLINEFORM11 is the stream of news headline tweets and INLINEFORM12 is a knowledge graph of events in which a tweet INLINEFORM13 is mapped to a single event. We address four main challenges on the way: (1) agreeing upon a background data model, either by developing or reusing one; (2) annotating events and associated entities as well as relations; (3) interlinking events across time and media; and (4) publishing triples on the event knowledge graph according to the principles of Linked Open Data. Here we outline the required steps for developing a knowledge graph of interlinked events. Figure FIGREF2 illustrates a high-level overview of the full pipeline. This pipeline contains the following main steps, to be discussed in detail later: (1) collecting tweets from the streams of several news channels, such as BBC and CNN, on Twitter; (2) agreeing upon a background data model; (3) event annotation, which potentially contains two subtasks, (i) event recognition and (ii) event classification; (4) entity/relation annotation, which possibly comprises a series of tasks, namely (i) entity recognition, (ii) entity linking, (iii) entity disambiguation, (iv) semantic role labeling of entities, and (v) inferring implicit entities; (5) interlinking events across time and media; and (6) publishing the event knowledge graph based on the best practices of Linked Open Data. An initial key question is: what is the suitable background data model serving as the pivot for extracting triples associated with an event? Contemporary approaches to extracting RDF triples capture entities and relations in terms of binary relations BIBREF8 BIBREF9 BIBREF10. We divide the current triple-based extraction approaches into two categories: (i) those that (e.g., BIBREF8) follow the pattern INLINEFORM0 to leverage existing relations (i.e., properties INLINEFORM1) in the knowledge base to find the entities INLINEFORM2 and INLINEFORM3 for which the relation INLINEFORM4 holds. For example, the relation plays holds between an athlete and his/her favorite sport, and NELL extracts the triple (seve_ballesteros, plays, golf) for the two entities seve ballesteros and golf. And (ii) others that (e.g., BIBREF11 BIBREF9) utilize the pattern INLINEFORM5 to leverage the entities available in the knowledge graph (i.e., INLINEFORM6) to infer new relations (e.g., INLINEFORM7) that either did not exist in the knowledge base or did not hold between the entities INLINEFORM8. For example, BIBREF11 initially recognizes named entities in a given sentence and then, by inferring over the domains and ranges of properties in DBpedia, assigns an appropriate property between the recognized entities. Given an entity (e.g., Garry Marshall) with type director, associated with a known movie (e.g., Pretty Woman), it infers the property dbpedia:director from the background ontology between the two recognized entities Garry Marshall and Pretty Woman. So far, supervised and unsupervised learning approaches have been applied for these extractions, which rely on the use of a large number of specific lexical, syntactical and semantic features. We assume that each news headline maps to an event modeled by an n-ary relation that can be captured by generating multiple triples. An n-ary relation is a relation with n arguments INLINEFORM10; for example, a binary relation triple INLINEFORM11 can be rewritten as INLINEFORM12. Thus the first challenge concerns the suitable background data model for representing various types of events and their associated entities by simulating n-ary relationships in terms of binary relationships. Considering our case study, news headlines are often one single sentence, potentially accompanied by subordinate clauses, along with a link directing to the body of the news report.
In spite of its brevity, a headline tweet provides dense and significant information. Various entities appear in the embedded core message, the latter commonly a verb phrase, including aspects that indicate temporal properties, location and agent. For example, consider the tweet no. 2 in tabtweetsamples, which will serve as a running example: "Instagram CEO meets with Pontifex to discuss the power of images to unite people". It contains several entities related to the verb phrase "meet", distinguished here by separating boxes: [Instagram CEO] [meets with] [Pontifex] [to discuss the power of images to unite people]. The general intuition is that a core verb (i.e., relation) heads each headline tweet, accompanied by multiple arguments (i.e., entities). The number of entities INLINEFORM0 depends on the type of relation, but location and time are generic default arguments for any relation INLINEFORM1. Thus the core chunk (verb phrase) corresponds to the meet event, and the remaining chunks of the given tweet likely function as dependent entities of this event. For instance, in the running example, the chunk [meets] corresponds to the event INLINEFORM2 with the following recognized entities as associated arguments: DISPLAYFORM0 In this example, the temporal as well as location arguments of INLINEFORM0 are absent. Consistent with linguistic theory, not all arguments are always present for each occurrence of an event. RDF and the OWL Web Ontology Language primarily allow binary relations, defined as a link between either two entities or an entity and its associated property value. However, in the domain of news we often encounter events that involve more than two entities and hence require n-ary relations. The W3C Working Group Note suggests two patterns for dealing with n-ary relations. We prefer the first pattern, which creates INLINEFORM0 classes and INLINEFORM1 new properties to represent an n-ary relation. We formally define a generic event class representing all categories of events (n-ary relations) and then use a template-based definition for any subclass of the generic event. This enables the representation of specific types of events (e.g., the meet event). Definition 1 (Class of Generic Event): a generic event class refers to any event that can involve multiple entities; in other words, the Generic Event Class, denoted by INLINEFORM0, abstracts a relation among n entities. Definition 2 (Class of X Event): X Event, denoted by INLINEFORM0, is a subclass (i.e., specific type) of the class INLINEFORM1 (i.e., INLINEFORM2); conceptually, it refers to events sharing common behavior, semantics and consequences. In the following we provide requirements on the data model for developing a knowledge graph of interlinked events. Requirement 1 (Inclusion of Generic Event): an event data model minimally includes the definition of the generic event, while including the specific event as optional. Requirement 2 (Inclusion of Provenance): the provenance of each event must be represented within the data model. Requirement 3 (Inclusion of Entity Type): the type of each entity associated with a given event must be represented within the data model; this type can be fine-grained or coarse-grained. Requirement 4 (Inclusion of Properties): for any given entity INLINEFORM0 associated with a given event INLINEFORM1, a property (i.e., binary relation) INLINEFORM2 between the entity INLINEFORM3 and the event INLINEFORM4 must be represented within the data model. Thus, for the given pair INLINEFORM5, either the triple INLINEFORM6 or the triple INLINEFORM7 is entailed in the RDF graph of INLINEFORM8.
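To make the preferred W3C pattern concrete, the following is a minimal, hypothetical sketch using Python's rdflib: the n-ary meet relation of the running example becomes one event instance plus one binary property per argument. The ex: namespace and the property names (hasParticipant, hasTopic, hasSource) are illustrative assumptions, not the paper's actual vocabulary.

```python
# Sketch of the W3C n-ary pattern (one event instance, n binary links)
# for the running example, using rdflib. Names are illustrative only.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/headex/")
g = Graph()
g.bind("ex", EX)

# Generic event class and its "meet" subclass (Definitions 1 and 2).
g.add((EX.GenericEvent, RDF.type, RDFS.Class))
g.add((EX.MeetEvent, RDFS.subClassOf, EX.GenericEvent))

# The n-ary meet relation as an event instance with binary properties.
meet1 = EX.Meet_1
g.add((meet1, RDF.type, EX.MeetEvent))
g.add((meet1, EX.hasParticipant, EX.Instagram_CEO))  # Requirement 4
g.add((meet1, EX.hasParticipant, EX.Pontifex))
g.add((meet1, EX.hasTopic, Literal("the power of images to unite people")))
g.add((meet1, EX.hasSource, EX.CNN))                 # Requirement 2 (provenance)

print(g.serialize(format="turtle"))
```

Serializing the graph shows how each argument of the n-ary relation, including the provenance required by Requirement 2, is reduced to a plain binary triple.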
In this part we review a number of state-of-the-art event ontologies. In 2009, UC Berkeley introduced the LODE ontology. In this ontology an event is defined as an action which takes place at a certain time at a specific location; it can be a historical action as well as a scheduled action. There were previous models BIBREF12 BIBREF13 for representing historic events and scheduled events, and some of them represent both types of events (i.e., historical and scheduled), e.g., EventsML-G2. The LODE ontology proposed to build an interlingua model, i.e., a model which encapsulates the overlap among different ontologies (e.g., CIDOC CRM, ABC Ontology, Event Ontology and EventsML-G2); this encapsulation is utilized to create a mapping among existing ontologies. LODE was introduced to publish historical events in a fine-grained manner, as it assumes each event is a unique event even if it is part of a series. Because the concept of sub-events does not exist in LODE, related events can be interlinked. This ontology helps us to link factual aspects of a historical event; a factual aspect is given by: What happened (event), Where did it happen (atPlace), When did it happen (atTime), Who was involved (involvedAgent) BIBREF14. A visualization of the LODE ontology is shown in Figure 2. We conclude that LODE meets (i) Requirement 1, as it defines a generic concept of the historic event; (ii) loosely, Requirement 3, as it contains generic types for entities (e.g., Agent, SpatialThing, TemporalEntity); and (iii) Requirement 4, as it includes the necessary relations. But the LODE ontology fails to meet Requirement 2, as it does not include the publisher of the event (provenance). Figure 3 depicts our running example in LODE. In 2011, the SEM ontology was introduced by Vrije University and Delft. This ontology describes events as the central element in representing historical data, cultural heritage BIBREF15 BIBREF16 and multimedia BIBREF17. SEM is combined with a Prolog API to create event instances without the background knowledge; this API also helps in connecting the created event instances to Linked Open Data. SEM proposes a method to attain interoperability among datasets from different domains, and strives to remove constraints to make it reusable by supporting weak semantics. Thus, in SEM the concept of event is specified as everything that happens BIBREF18. A schematic representation of the SEM model (summarized version) is shown in figsem. We conclude that SEM meets (i) Requirement 1, as it defines a generic event; (ii) Requirement 3, as it specifies a type for entities (e.g., Actor); and (iii) Requirement 4, as it includes the required properties. Similar to the LODE ontology, the SEM model fails to meet Requirement 2, as it does not include the publisher of events (provenance). The DBpedia ontology defines the generic concept of event with a hierarchy which is broader, including life-cycle events (e.g., birth, death), natural events (e.g., earthquake, storm surge) and societal events (e.g., concert, election). We conclude that DBpedia meets (i) Requirement 1, as it defines a generic event; (ii) Requirement 3, as it specifies a type for entities; and (iii) Requirement 4, as it includes the required properties. All of these can be imported from other datasets present on the Web, as DBpedia links to other datasets in an easy manner. Similar to the LODE ontology and the SEM model, DBpedia fails to meet Requirement 2, as it does not include the publisher of events (provenance).
Schema.org, a product of collaborative efforts by major companies (i.e., Google, Bing, Yahoo and Yandex), presents a similar generic concept of event. It considers temporal as well as location aspects and additionally provides a limited hierarchy. This hierarchy introduces types of events such as business events, sale events and social events. The schemas in Schema.org are sets of these types, which are associated with a set of properties. Furthermore, it considers multiple labels between the associated entity and the concept of the event (represented in figschemaorg), such as actor and contributor, which distinguish the role of the associated entity. Schema.org introduces hundreds of schemas for categories like movies, music, organizations, TV shows, products, places, etc. For Schema.org, an event is an instance taking place at a certain time and at a certain location. Like LODE, repeated events are classified as different events, thus keeping all events unique even if one is a sub-event. A schematic representation of Schema.org (summarized version) is shown in figschemaorg. We conclude that Schema.org meets (i) Requirement 1, as it defines a generic event; (ii) Requirement 3, as it specifies a type for entities (e.g., Actor as type Person, Location as type Place, Organizer as type Person, StartDate as type Date or DateTime, etc.); and (iii) Requirement 4, as it includes the required properties for every entity defined above. Like LODE, SEM and DBpedia, Schema.org also fails to meet Requirement 2, as it cannot define or import the publisher of the event (provenance). The CEVO ontology relies on an abstract conceptualization of English verbs provided by Beth Levin BIBREF19. Levin categorizes English verbs according to shared meaning and behavior. The CEVO ontology, which is a machine-readable format (i.e., RDF format) of Levin's categorization, presents more than 230 event classes for over 3,000 English verbs (individuals). It organizes classes into semantically coherent event classes and an event hierarchy, and notably has an inventory of the corresponding lexical items. For example, tabthreeVerbClasses in its first column presents three event classes: (i) the Communication event, which corresponds to an event that causes the transfer of a message; (ii) the Meet event, which is an event related to group activities; and (iii) the Murder event, which refers to an event describing killing. The second column of tabthreeVerbClasses represents the lexical items, i.e., verbs that have a shared meaning and are under the umbrella of a common event. In other words, an appearance of one of these verbs signals the occurrence of its associated event. For example, w.r.t. the running example, the appearance of the verb "meet" in the given tweet shows the occurrence of an event with the specific type meet. The CEVO ontology can be employed for recognizing events and, more interestingly, classifying them w.r.t. their specific type. Specifically, it unifies apparently disparate lexical items under a single event class. More importantly, this can prove critical in reducing the number of apparent features for classifiers and in supporting the inference necessary for query response. The existing data models are basically coarse-grained. In case the domain or application requires a fine-grained data model, the existing data models can be extended. For example, here we extended the event data model from the CEVO ontology for three specific events. We take into account three subclasses, shown in Figure UID50: (i) the class communication (INLINEFORM0), which refers to any event transferring a message; (ii) the class meet (INLINEFORM1), which ranges over all group activities; and finally (iii) the class murder (INLINEFORM2), which includes any reports of killing.
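To illustrate how a CEVO-style verb inventory supports event classification, here is a toy, hypothetical sketch: the three-class lexicon and the naive suffix-stripping lemmatizer are illustrative assumptions (CEVO itself organizes 3,000+ verbs into 230+ classes and a proper lemmatizer would be used in practice).

```python
# Toy sketch of CEVO-style event classification: map a headline verb to an
# event class via a verb lexicon. Lexicon and lemmatizer are illustrative.
CEVO_LEXICON = {
    "communication": {"tell", "say", "announce", "report"},
    "meet": {"meet", "visit", "gather"},
    "murder": {"kill", "murder", "slay"},
}

def lemmatize(verb: str) -> str:
    # Naive suffix stripping stands in for real lemmatization here.
    for suffix in ("ing", "ed", "es", "s"):
        if verb.endswith(suffix) and len(verb) > len(suffix) + 2:
            return verb[: -len(suffix)]
    return verb

def classify_event(verb: str):
    lemma = lemmatize(verb.lower())
    for event_class, verbs in CEVO_LEXICON.items():
        # Also try restoring a stripped trailing "e" (e.g. announced -> announce).
        if lemma in verbs or lemma + "e" in verbs:
            return event_class
    return None

print(classify_event("meets"))      # -> "meet"
print(classify_event("announced"))  # -> "communication"
```

Because many surface verbs collapse onto one class, a classifier built on top of such a lexicon sees far fewer distinct features, which is the reduction the CEVO discussion above points to.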
Furthermore, as Figure UID50 shows, the provenance information (e.g., publisher or date) is represented within the data model as default arguments for all events, to meet Requirement reqprov. Figure FIGREF49 (b-d) represents parts of the data model for the sub-event classes (i.e., INLINEFORM0) in detail. The types of all possible associated entities, as well as their necessary relationships, are represented within the data model; this meets Requirements SECREF22 and SECREF23. For example, the meet event is associated with entities of type Participant and Topic (i.e., the topic discussed in the meeting). Considering the sample of tweets in Table TABREF9, the tweets no. 1, no. 4 and no. 7 are instances of the event Communication, with the mentions tell, say, announce; the tweets no. 2, no. 5, no. 8 are instances of the event Meet, with the mentions meet, visit; and the tweets no. 3, no. 6, no. 9 are instances of the event Murder, with the mention kill. figexam demonstrates the running example within the developed data model: this event has two participants (i.e., Instagram CEO and Pontifex) along with a specific topic. We can adopt the concept of a singleton property (SP), introduced in BIBREF20, for modeling n-ary relations in the background data model. Singleton properties replace RDF reifications and enable efficient representation of statements about statements. Since news headlines contain both provenance information and multiple associated entities, SP is a suitable choice; furthermore, it enables systematic encoding of n-ary relations in terms of binary relations. Example 1 (Input/Output): consider our running example, which is about the occurrence of a meet event with two participant entities (Instagram CEO and Pontifex) and the topic INLINEFORM0. The generated triples using the singleton property are as follows: (1) Meet1 singletonPropertyOf Meet; (2) InstagramCEO Meet1 Pontifex; (3) Meet1 about t1; (4) Meet1 hasSource CNN; (5) Meet1 extractedOn 2622106; (6) t1 a Topic; (7) t1 body "to discuss the power of images to unite people". Events can be represented at different levels of granularity. The event annotation task potentially comprises two subsequent tasks, as follows. Event recognition: typically, event recognition utilizes phrases and their parts of speech. Although verbs are more common for distinguishing an event (e.g., "Obama met Merkel in Berlin"), other POS might reveal an event (e.g., "G8 meeting in Berlin"). Furthermore, the event recognition task can be either open-domain or closed-domain; in the former, collecting a lexicon of event phrases is more challenging than in the latter. In any case, a learning approach, either supervised or semi-supervised, can be applied to determine whether or not a piece of text contains an event phrase. Event classification: this task is necessary in case the employed background data model considers the specific type of events as part of event annotation. In this case, event phrases have to be labeled with specific types of events using a multiclass classifier trained to distinguish the specific type of a given event. For example, the tweets no. 2, no. 5, no. 8 of tabtweetsamples have the specific type meet. Entity annotation is a significant task for creating a knowledge graph of events. It can be challenging when we have a fine-grained background data model, which makes the task of semantic role labeling of entities necessary. Overall, the required tasks for fulfilling entity annotation are as follows. Entity recognition: this task identifies a chunk of text as an individual entity which plays a role in the occurred event.
An entity mention can be explicit or implicit. Regarding explicit entities, Named Entity Recognition (NER) tools can be used for open-domain scenarios, whereas alternatives such as knowledge graphs, gazetteers and domain dictionaries are necessary for closed-domain scenarios. E.g., for the tweet no. 1 in tabtweetsamples, the chunk "Michelle Obama" is recognized as a named entity with the type person. Entity linking: entity linking can be divided into two tasks. The first one BIBREF21, which is required in our case, is about associating entity mentions in a given text with their appropriate corresponding entities in a given knowledge graph; thus it removes ambiguity. A textual mention of an entity might have a matching entity in the knowledge graph or not: in the former case the entity linking task is reduced to hooking up a suitable entity, whereas in the latter case a new IRI (i.e., International Resource Identifier) must be minted, typed, and then linked to the textual mention of the given entity. E.g., in the tweet no. 1 of tabtweetsamples, the named entity "Michelle Obama" should be linked to the entity dbr:Michelle_Obama when DBpedia is employed as the background knowledge graph. The second type of entity linking is about linking entities across knowledge graphs using owl:sameAs links. While the first task is required in the pipeline of developing an event knowledge graph, the second one is optional but can enhance the quality and visibility of the underlying knowledge graph. Semantic role labeling: most of the existing event ontologies consider generic roles such as actor or agent for involved entities. For a fine-grained background data model, semantic role labeling can be done. E.g., w.r.t. the tweet no. 1 in tabtweetsamples, the entity "Michelle Obama" can be labelled with the generic role actor, employing the LODE ontology, or the specific role giver, applying the data model illustrated in figcommunicationpattern. Entity disambiguation: an entity mention in a text might be polysemous; thus linking to the correct entity in the underlying knowledge graph requires a disambiguation phase. Furthermore, a single entity in multiple knowledge graphs might have various representations; thus interlinking them is challenging and requires a disambiguation phase as well BIBREF22 BIBREF23 BIBREF24. E.g., w.r.t. the tweet no. 7 in tabtweetsamples, the named entity "Obama" is ambiguous as to whether it refers to Michelle Obama or Barack Obama; given the context (i.e., the remaining part of the tweet), it likely refers to Barack Obama. Implicit entity linking: as mentioned before, not all mentions of entities are explicit. For example, w.r.t. the running example, the chunk "Instagram CEO" refers to the implicit entity Kevin Systrom, who is the CEO of Instagram. The experiment performed in BIBREF25 shows that 21% of entity mentions in the movie domain and 40% of entity mentions in the book domain are implicit. Inferring implicit entities depends on capturing context as well as respecting time intervals. The tasks described above have been considered independently before. The interlinking requirement, which has not yet been adequately explored, comes from two inherent facts of events, as follows. A single event might be reported by various publisher sources using different expressions; thus it is necessary to identify the same event across various publisher sources and then interlink the reports using owl:sameAs or skos:related links. Events have an evolutionary nature, in the sense that more information is added with time; thus it is essential to spot an event and its subsequent events, reported to either complement the original event or
reflect its causes or consequences. To interlink such events, skos:related can be utilized. The recognized events, entities and relations have to be published according to the principles of LOD, RDF and the employed background data model. To maintain the knowledge graph's consistency and coherence, the generated triples must be deduplicated, validated, and assigned disambiguated URIs. The minted URIs should be dereferenceable and interlinked to external RDF data sources. Overall, there is a lack of a holistic view on event extraction from free text and on subsequently developing a knowledge graph from it. In this paper we presented the full pipeline containing the required tasks, such as (i) agreeing upon a data model, (ii) event annotation, (iii) entity annotation, and (iv) interlinking events. The majority of previous research is either domain-specific or event-specific and does not undertake the full pipeline (e.g., it is limited to only event and entity extraction). We have provided a visionary review of the full pipeline, which is readily applicable to any domain. In the following, we initially refer to research approaches for n-ary relation extraction on particular domains, then we refer to the prominent approaches for binary relation extraction; we end by citing successful attempts at triple extraction from structured and semi-structured data sources. The work presented in BIBREF26 introduces complex relations as n-ary relations between n typed entities. It proposes to factorize all complex relations into a set of binary relations; a classifier is then trained to recognize the related entities of the binary relations. After identifying all pairs of related entities for the binary relations, it reconstructs the complex relation using a simple graph creation approach. Another domain for extracting n-ary relations is protein-protein interactions in the biomedical literature BIBREF27 BIBREF28 BIBREF29. These works first identify protein mentions in text, then recognize interaction relations, before finally extracting interactions. The approaches employed for protein-protein interactions can be divided into three groups: (i) graph-based approaches (e.g., co-occurrence graphs), (ii) rule-based approaches, and (iii) learning approaches (e.g., maximum entropy). The other category of event extraction is based on binary relation extraction. NELL (Never-Ending Language Learning) BIBREF8 is a learning agent that extracts new facts using the existing binary relations in its knowledge base. It was initiated in 2010 with a couple of seed binary relations but, after years of running, has become self-learning. A notable feature of NELL is its dynamic approach to extracting facts, as it refreshes the beliefs in its knowledge base and removes incorrect or old ones. Linked Open Data, as a valuable source of diverse ontologies, can also be employed for extracting either new facts or new relations. The framework proposed in BIBREF11 BIBREF9 extracts facts using binary relations from DBpedia as background knowledge. In contrast to NELL, it initially identifies named entities and their types in plain text, then it tries to map mentions of relation expressions to properties in DBpedia (e.g., taking the domains and ranges of properties into account). Open Information Extraction BIBREF10 is another extraction framework that is not limited to any predefined relation set. Furthermore, extracting triples from structured as well as semi-structured data sources has received adequate attention in the past, especially in DBpedia BIBREF0 and LinkedGeoData BIBREF1, which leverage the loose structure of data for extraction. Another example is the work BIBREF30, which
presents a holistic approach for extraction of RDF from templated websites In this paper we presented the initial version of our framework for the realtime extraction of events This framework is part of our project HeadEx for developing a knowledge graph of interlinked events We presented the requirements for choosing a data model representing events and their arguments We reviewed the existing data models which have been employed by the stateoftheart applications Furthermore we outlined the required tasks for annotating events as well entities Then the interlinking strategies were discussed As a proofofconcept we followed a case study of news headlines on Twitter For our future agenda we plan to develop the envisioned pipeline containing all the required tasks by either implementing new components or integrating the existing ones </s> |
<s>[INST] Meta Relational Learning for Few-Shot Link Prediction in Knowledge Graphs [/INST] A knowledge graph is composed of a large number of triples in the form (head entity, relation, tail entity), (h, r, t) in short, encoding knowledge and facts in the world. Many KGs have been proposed BIBREF0 BIBREF1 BIBREF2 and applied to various applications BIBREF3 BIBREF4 BIBREF5. Although they contain huge numbers of entities, relations and triples, many KGs still suffer from incompleteness; thus knowledge graph completion is vital for the development of KGs. One of the knowledge graph completion tasks is link prediction: predicting new triples based on existing ones. For link prediction, KG embedding methods BIBREF6 BIBREF7 BIBREF8 BIBREF9 are promising. They learn latent representations, called embeddings, for entities and relations in a continuous vector space and accomplish link prediction via calculation with embeddings. The effectiveness of KG embedding methods is guaranteed by sufficient training examples; thus results are much worse for elements with only a few instances during training BIBREF10. However, the few-shot problem widely exists in KGs. For example, about 10% of relations in Wikidata BIBREF0 have no more than 10 triples. Relations with a few instances are called few-shot relations. In this paper we focus on few-shot link prediction in knowledge graphs: predicting the tail entity t given the head entity h and relation r by observing only K triples about r, where K is usually small. Figure 1 depicts an example of 3-shot link prediction in KGs. For few-shot link prediction, BIBREF11 made the first trial and proposed GMatching, learning a matching metric by considering both learned embeddings and one-hop graph structures, while we try to accomplish few-shot link prediction from another perspective, based on the intuition that the most important information to be transferred from a few existing instances to incomplete triples should be the common and shared knowledge within one task. We call such information relation-specific meta information and propose a new framework, Meta Relational Learning (MetaR), for few-shot link prediction. For example, in Figure 1, relation-specific meta information related to the relation CEOof or CountryCapital will be extracted and transferred by MetaR from a few existing instances to incomplete triples. The relation-specific meta information is helpful from the following two perspectives: (1) transferring common relation information from observed triples to incomplete triples, and (2) accelerating the learning process within one task by observing only a few instances. Thus we propose two kinds of relation-specific meta information, relation meta and gradient meta, corresponding to the two aforementioned perspectives respectively. In our proposed framework MetaR, relation meta is the high-order representation of a relation connecting head and tail entities, and gradient meta is the loss gradient of the relation meta, which is used to make a rapid update before transferring the relation meta to incomplete triples during prediction. Compared with GMatching BIBREF11, which relies on a background knowledge graph, our MetaR is independent of one and thus more robust, as background knowledge graphs might not be available for few-shot link prediction in real scenarios. We evaluate MetaR with different settings on few-shot link prediction datasets. MetaR achieves state-of-the-art results, indicating the success of transferring relation-specific meta information in few-shot link prediction tasks. In summary, the main contributions of our work are threefold.
One target of MetaR is to learn representations of entities fitting the few-shot link prediction task, and the learning framework is inspired by knowledge graph embedding methods. Furthermore, using the loss gradient as one kind of meta information is inspired by MetaNet BIBREF12 and MAML BIBREF13, which explore methods for few-shot learning via meta-learning. From these two points, we regard knowledge graph embedding and meta-learning as the two main kinds of related work. Knowledge graph embedding models map relations and entities into a continuous vector space. They use a score function to measure the truth value of each triple (h, r, t). As in knowledge graph embedding, our MetaR also needs a score function; the main difference is that the representation of r in MetaR is the learned relation meta rather than the embedding of r, as in normal knowledge graph embedding methods. One line of work was started by TransE BIBREF6, with a distance score function. TransH BIBREF14 and TransR BIBREF15 are two typical models using different methods to connect head and tail entities and their relations. DistMult BIBREF9 and ComplEx BIBREF8 are derived from RESCAL BIBREF7, trying to mine latent semantics in different ways. There are also others, like ConvE BIBREF16, which uses a convolutional structure to score triples, and models using additional information such as entity types BIBREF17 and relation paths BIBREF18. BIBREF19 comprehensively summarizes the currently popular knowledge graph embedding methods. Traditional embedding models rely heavily on rich training instances BIBREF20 BIBREF11 and thus are limited in few-shot link prediction. Our MetaR is designed to fill this gap of existing embedding models. Meta-learning seeks the ability to learn quickly from only a few instances within the same concept and to adapt continuously to more concepts, which is effectively the rapid and incremental learning that humans are very good at. Several meta-learning models have been proposed recently. Generally, there are three kinds of meta-learning methods so far: (1) metric-based meta-learning BIBREF21 BIBREF22 BIBREF23 BIBREF11, which tries to learn a matching metric between query and support set generalized to all tasks, where the idea of matching is similar to nearest-neighbor algorithms. The Siamese Neural Network BIBREF21 is a typical method using symmetric twin networks to compute the metric of two inputs. GMatching BIBREF11, the first trial on one-shot link prediction in knowledge graphs, learns a matching metric based on entity embeddings and local graph structures, and can also be regarded as a metric-based method. (2) Model-based methods BIBREF24 BIBREF12 BIBREF25, which use a specially designed component, like memory, to achieve the ability of learning rapidly from only a few training instances. MetaNet BIBREF12, a kind of memory-augmented neural network (MANN), acquires meta information from the loss gradient and generalizes rapidly via its fast parameterization. (3) Optimization-based approaches BIBREF13 BIBREF26, which pursue the idea of learning faster by changing the optimization algorithm. Model-Agnostic Meta-Learning BIBREF13, abbreviated as MAML, is a model-agnostic algorithm: it first updates the parameters of a task-specific learner, and meta-optimization across tasks is then performed over the parameters by using the above updated parameters; it is like a gradient through a gradient. As far as we know, the work proposed by BIBREF11 is the first research on few-shot learning for knowledge graphs. It is a metric-based model which consists of a neighbor encoder and a matching processor. The neighbor encoder enhances the embedding of entities with their one-hop neighbors, and the matching processor performs a multi-step matching via an LSTM block.
In this section we present the formal definition of a knowledge graph and the few-shot link prediction task. A knowledge graph is defined as follows. Definition 3.1 (Knowledge Graph $\mathcal{G}$): a knowledge graph $\mathcal{G} = \lbrace \mathcal{E}, \mathcal{R}, \mathcal{TP} \rbrace$, where $\mathcal{E}$ is the entity set, $\mathcal{R}$ is the relation set, and $\mathcal{TP} = \lbrace (h, r, t) \in \mathcal{E} \times \mathcal{R} \times \mathcal{E} \rbrace$ is the triple set. A few-shot link prediction task in knowledge graphs is defined as follows. Definition 3.2 (Few-shot link prediction task $\mathcal{T}$): with a knowledge graph $\mathcal{G} = \lbrace \mathcal{E}, \mathcal{R}, \mathcal{TP} \rbrace$, given a support set $\mathcal{S}_r = \lbrace (h_i, t_i) \in \mathcal{E} \times \mathcal{E} \mid (h_i, r, t_i) \in \mathcal{TP} \rbrace$ about relation $r \in \mathcal{R}$, where $|\mathcal{S}_r| = K$, predicting the tail entity linked via relation $r$ to a head entity $h_j$, formulated as $(r, h_j, ?)$, is called K-shot link prediction. As defined above, a few-shot link prediction task is always defined for a specific relation. During prediction there is usually more than one triple to be predicted; together with the support set $\mathcal{S}_r$, we call the set of all triples to be predicted the query set $\mathcal{Q}_r = \lbrace (r, h_j, ?) \rbrace$. The goal of a few-shot link prediction method is to gain the capability of predicting new triples about a relation $r$ after observing only a few triples about $r$. Thus its training process is based on a set of tasks $\mathcal{T}_{train} = \lbrace \mathcal{T}_i \rbrace_{i=1}^{M}$, where each task $\mathcal{T}_i = \lbrace \mathcal{S}_i, \mathcal{Q}_i \rbrace$ corresponds to an individual few-shot link prediction task with its own support and query set. The testing process is conducted on a set of new tasks $\mathcal{T}_{test} = \lbrace \mathcal{T}_j \rbrace_{j=1}^{N}$, which is similar to $\mathcal{T}_{train}$ except that each $\mathcal{T}_j \in \mathcal{T}_{test}$ must be about relations that have never been seen in $\mathcal{T}_{train}$. Table 1 gives a concrete example of the data during learning and testing for few-shot link prediction. For a model to gain the few-shot link prediction capability, the most important thing is transferring information from the support set to the query set, and there are two questions to think about: (1) what is the most transferable and common information between support set and query set, and (2) how to learn faster by observing only a few instances within one task. For question 1, within one task all triples in the support set and query set are about the same relation, so it is natural to suppose that the relation is the key common part between support and query set. For question 2, the learning process is usually conducted by minimizing a loss function via gradient descent; thus gradients reveal how the model's parameters should be changed, and intuitively we believe that gradients are a valuable source for accelerating the learning process. Based on these thoughts, we propose two kinds of meta information, shared between support set and query set, to deal with the above problems. In order to extract the relation meta and gradient meta and incorporate them with knowledge graph embedding to solve few-shot link prediction, our proposal MetaR mainly contains two modules. The overview and algorithm of MetaR are shown in Figure 2 and Algorithm 1. Next we introduce each module of MetaR via one few-shot link prediction task $\mathcal{T}_r = \lbrace \mathcal{S}_r, \mathcal{Q}_r \rbrace$. Algorithm 1 (Learning of MetaR). Input: training tasks $\mathcal{T}_{train}$, embedding layer $emb$, parameter $\phi$ of the relation-meta learner. While not done: (1) sample a task $\mathcal{T}_r = \lbrace \mathcal{S}_r, \mathcal{Q}_r \rbrace$ from $\mathcal{T}_{train}$; (2) get $R_{\mathcal{T}_r}$ from $\mathcal{S}_r$ (Eq. 18, Eq. 19); (3) compute the loss on $\mathcal{S}_r$ (Eq. 22); (4) get $G_{\mathcal{T}_r}$ as the gradient of $R_{\mathcal{T}_r}$ (Eq. 23); (5) update $R_{\mathcal{T}_r}$ by $G_{\mathcal{T}_r}$ (Eq. 24); (6) compute the loss on $\mathcal{Q}_r$ (Eq. 26); (7) update $emb$ and $\phi$ by the loss on $\mathcal{Q}_r$. To extract the relation meta from the support set, we design a relation-meta learner that learns a mapping from head and tail entities in the support set to the relation meta. The structure of this relation-meta learner can be implemented as a simple neural network. In task $\mathcal{T}_r$, the input of the relation-meta learner is the head and tail entity pairs in the support set, $\lbrace (h_i, t_i) \in \mathcal{S}_r \rbrace$. We firstly extract entity-pair-specific relation meta via an L-layer fully connected neural network: $$\begin{aligned} \mathbf{x}^0 &= \mathbf{h}_i \oplus \mathbf{t}_i \\ \mathbf{x}^l &= \sigma(\mathbf{W}^l \mathbf{x}^{l-1} + \mathbf{b}^l) \\ R_{(h_i, t_i)} &= \mathbf{W}^L \mathbf{x}^{L-1} + \mathbf{b}^L \end{aligned} \qquad \text{(Eq. 18)}$$ where $\mathbf{h}_i \in \mathbb{R}^d$ and $\mathbf{t}_i \in \mathbb{R}^d$ are the embeddings of head entity $h_i$ and tail entity $t_i$ with dimension $d$ respectively, $L$ is the number of layers in the neural network with $l \in \lbrace 1, \dots, L-1 \rbrace$, and $\mathbf{W}^l$ and $\mathbf{b}^l$ are the weights and bias in layer $l$. We use LeakyReLU as the activation $\sigma$, and $\oplus$ represents the concatenation of the vectors $\mathbf{h}_i$ and $\mathbf{t}_i$. Finally, $R_{(h_i, t_i)}$ represents the relation meta from the specific entity pair $(h_i, t_i)$. With multiple entity-pair-specific relation metas, we generate the final relation meta in the current task by averaging all entity-pair-specific relation metas: $R_{\mathcal{T}_r} = \frac{\sum_{i=1}^{K} R_{(h_i, t_i)}}{K}$ (Eq. 19).
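Before moving to the embedding learner, here is a minimal, hypothetical PyTorch sketch of Eq. 18-19 with toy dimensions and random support-set embeddings; it is illustrative only and not the authors' released code (the paper's actual hidden sizes are, e.g., 500/200, reported later).

```python
# Minimal sketch of the relation-meta learner (Eq. 18-19); toy scale.
import torch

d, K = 8, 3                          # embedding dimension, support-set size
h = torch.randn(K, d)                # head-entity embeddings of the support set
t = torch.randn(K, d)                # tail-entity embeddings of the support set

# Fully connected network over the concatenated pair x0 = h (+) t (Eq. 18);
# the hidden size 2*d is an illustrative choice.
relation_meta_net = torch.nn.Sequential(
    torch.nn.Linear(2 * d, 2 * d),
    torch.nn.LeakyReLU(),
    torch.nn.Linear(2 * d, d),       # final layer is affine only
)

# Entity-pair-specific relation metas, averaged over the support set (Eq. 19).
R_pairs = relation_meta_net(torch.cat([h, t], dim=-1))  # shape (K, d)
R = R_pairs.mean(dim=0)                                 # shape (d,)
print(R.shape)                                          # torch.Size([8])
```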
As we want to obtain a gradient meta to make a rapid update on the relation meta, we need a score function to evaluate the truth value of entity pairs under specific relations, and also a loss function for the current task. We apply the key idea of knowledge graph embedding methods in our embedding learner, as they have proved effective at evaluating the truth value of triples in knowledge graphs. In task $\mathcal{T}_r$, we firstly calculate a score for each entity pair $(h_i, t_i)$ in the support set $\mathcal{S}_r$ as follows: $s_{(h_i, t_i)} = \Vert \mathbf{h}_i + R_{\mathcal{T}_r} - \mathbf{t}_i \Vert$ (Eq. 21), where $\Vert \mathbf{x} \Vert$ represents the L2 norm of vector $\mathbf{x}$. We design the score function inspired by TransE BIBREF6, which assumes that the head entity embedding $\mathbf{h}$, relation embedding $\mathbf{r}$ and tail entity embedding $\mathbf{t}$ of a true triple $(h, r, t)$ satisfy $\mathbf{h} + \mathbf{r} = \mathbf{t}$; thus the score function is defined according to the distance between $\mathbf{h} + \mathbf{r}$ and $\mathbf{t}$. Transferring this to our few-shot link prediction task, we replace the relation embedding $\mathbf{r}$ with the relation meta $R_{\mathcal{T}_r}$, as there are no directly learned general relation embeddings in our task, and $R_{\mathcal{T}_r}$ can be regarded as the relation embedding for the current task $\mathcal{T}_r$. With a score for each triple, we set the following loss: $L(\mathcal{S}_r) = \sum_{(h_i, t_i) \in \mathcal{S}_r} [\gamma + s_{(h_i, t_i)} - s_{(h_i, t_i^{\prime})}]_{+}$ (Eq. 22), where $[x]_{+}$ represents the positive part of $x$ and $\gamma$ represents the margin, a hyperparameter. $s_{(h_i, t_i^{\prime})}$ is the score of the negative sample $(h_i, t_i^{\prime})$ corresponding to the current positive entity pair $(h_i, t_i) \in \mathcal{S}_r$, where $(h_i, r, t_i^{\prime}) \notin \mathcal{G}$. $L(\mathcal{S}_r)$ should be small for task $\mathcal{T}_r$, meaning the model can properly encode the truth values of triples; thus the gradients of the parameters indicate how they should be updated. We therefore regard the gradient of $R_{\mathcal{T}_r}$ based on $L(\mathcal{S}_r)$ as the gradient meta $G_{\mathcal{T}_r}$: $G_{\mathcal{T}_r} = \nabla_{R_{\mathcal{T}_r}} L(\mathcal{S}_r)$ (Eq. 23). Following the gradient update rule, we make a rapid update on the relation meta as follows: $R^{\prime}_{\mathcal{T}_r} = R_{\mathcal{T}_r} - \beta G_{\mathcal{T}_r}$ (Eq. 24), where $\beta$ indicates the step size of the gradient meta when operating on the relation meta. When scoring the query set with the embedding learner, we use the updated relation meta. After getting the updated relation meta $R^{\prime}$, we transfer it to the samples in the query set $\mathcal{Q}_r = \lbrace (h_j, t_j) \rbrace$ and calculate their scores and the query-set loss in the same way as for the support set: $s_{(h_j, t_j)} = \Vert \mathbf{h}_j + R^{\prime}_{\mathcal{T}_r} - \mathbf{t}_j \Vert$ (Eq. 25), $L(\mathcal{Q}_r) = \sum_{(h_j, t_j) \in \mathcal{Q}_r} [\gamma + s_{(h_j, t_j)} - s_{(h_j, t_j^{\prime})}]_{+}$ (Eq. 26), where $L(\mathcal{Q}_r)$ is our training objective to be minimized; we use this loss to update the whole model. During training, our objective is to minimize the following loss $L$, which is the sum of the query losses over all tasks in one mini-batch: $L = \sum_{(\mathcal{S}_r, \mathcal{Q}_r) \in \mathcal{T}_{train}} L(\mathcal{Q}_r)$ (Eq. 28).
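To tie Eq. 21-24 together, the following hypothetical continuation of the earlier toy sketch derives the gradient meta with autograd and applies the rapid update; the negative samples and hyperparameter values are illustrative (gamma and beta are set to 1, matching the settings reported below).

```python
# Continuation of the earlier sketch: score, support loss, gradient meta,
# and the rapid update of the relation meta (Eq. 21-24); toy scale.
t_neg = torch.randn(K, d)        # corrupted tail embeddings (negative samples)
gamma, beta = 1.0, 1.0

def score(h, R, t):
    # TransE-style score ||h + R - t||_2; lower means more plausible (Eq. 21).
    return torch.norm(h + R - t, dim=-1)

# Margin loss on the support set (Eq. 22).
loss_support = torch.relu(gamma + score(h, R, t) - score(h, R, t_neg)).sum()

# Gradient meta: dL(S_r)/dR (Eq. 23). create_graph=True keeps the step
# differentiable so the query loss can still train the relation-meta learner.
G = torch.autograd.grad(loss_support, R, create_graph=True)[0]

# Rapid update (Eq. 24); the query set is then scored with R_updated (Eq. 25).
R_updated = R - beta * G
```

The query loss computed with R_updated backpropagates through this whole chain, which is what lets one mini-batch update both the embedding layer and the relation-meta learner, as in Algorithm 1.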
With MetaR we want to figure out the following: (1) can MetaR accomplish the few-shot link prediction task and even perform better than previous models; (2) how much does relation-specific meta information contribute to few-shot link prediction; and (3) are there any requirements for MetaR to work on few-shot link prediction. To answer these, we conduct experiments on two few-shot link prediction datasets and analyze the results in depth. We use two datasets, NELL-One and Wiki-One, constructed by BIBREF11 and derived from NELL BIBREF2 and Wikidata BIBREF0 respectively. Furthermore, because these two benchmarks were first tested on GMatching, which considers both learned embeddings and one-hop graph structures, a background graph is constructed from relations outside the training/validation/test sets for obtaining the pre-trained entity embeddings and providing the local graph for GMatching. Unlike GMatching, which uses the background graph to enhance the representations of entities, our MetaR can be trained without a background graph. For NELL-One and Wiki-One, which have background graphs originally, we can make use of such a background graph either by fitting it into the training tasks or by using it to train embeddings that initialize entity representations. Overall, we have three kinds of dataset settings, shown in Table 3. For the BG:In-Train setting, in order to include the background graph in the training tasks, we sample tasks from triples in both the background graph and the original training set, rather than sampling from only the original training set. Note that these three settings do not violate the task formulation of few-shot link prediction in KGs. The statistics of NELL-One and Wiki-One are shown in Table 2. We use two traditional metrics to evaluate the different methods on these datasets: MRR and Hits@N. MRR is the mean reciprocal rank and Hits@N is the proportion of correct entities ranked in the top N in link prediction. During training, mini-batch gradient descent is applied with batch size set to 64 and 128 for NELL-One and Wiki-One respectively. We use Adam BIBREF27 with an initial learning rate of 0.001 to update parameters. We set $\gamma = 1$ and $\beta = 1$. The numbers of positive and negative triples in the query set are 3 and 10 in NELL-One and Wiki-One. The trained model is applied to the validation tasks every 1000 epochs, and the current model parameters and corresponding performance are recorded; after stopping, the model with the best performance on Hits@10 is treated as the final model. For the number of training epochs, we use early stopping with 30 patience epochs, which means that we stop training when the performance on Hits@10 drops 30 times continuously. Following GMatching, the embedding dimension for NELL-One is 100 and for Wiki-One is 50. The sizes of the two hidden layers in the relation-meta learner are 500/200 and 250/100 for NELL-One and Wiki-One. The results for the two few-shot link prediction tasks, 1-shot and 5-shot, on NELL-One and Wiki-One are shown in Table 4. The baseline in our experiments is GMatching BIBREF11, which made the first trial on the few-shot link prediction task and is the only method we could find as a baseline; its results with different KG embedding initializations are copied from the original paper. Our MetaR is tested with the different dataset settings introduced in Table 3. In Table 4, our model performs better on all evaluation metrics on both datasets. Specifically, for 1-shot link prediction MetaR improves by 33%, 28.1%, 29.2% and 27.8% on MRR, Hits@10, Hits@5 and Hits@1 on NELL-One, and by 41.4%, 18.8%, 37.9% and 62.2% on Wiki-One, with average improvements of 29.53% and 40.08% respectively. For 5-shot, MetaR improves by 29.9%, 40.5%, 32.6% and 17.5% on MRR, Hits@10, Hits@5 and Hits@1 on NELL-One, with an average improvement of 30.13%. Thus, regarding the first question, the results show that MetaR is no worse than GMatching, indicating that it has the capability of accomplishing few-shot link prediction. In parallel, the impressive improvement over GMatching demonstrates that the key idea of MetaR, transferring relation-specific meta information from support set to query set, works well on the few-shot link prediction task. Furthermore, compared with GMatching, our MetaR is independent of background knowledge graphs. We test MetaR on 1-shot link prediction on partial NELL-One and Wiki-One, which discard the background graph, and get results of 0.279 and 0.348 on Hits@10 respectively; such results are still comparable with GMatching on the full datasets with background. We have shown that relation-specific meta information, the key point of MetaR, successfully contributes to few-shot link prediction. As there are two kinds of relation-specific meta information in this paper, relation meta and gradient meta, we want to figure out how each contributes to the performance, so we conduct an ablation study with three settings. The first is our complete MetaR method, denoted as standard. The second removes the gradient meta by transferring the un-updated relation meta directly from support set to query set, denoted as -g. The third further removes the relation meta, which reduces the model to a simple TransE embedding model, denoted as -g -r. The result under the third setting is copied from BIBREF11; it uses triples from the background graph, the training tasks, and one-shot training triples from the validation/test set, so it is neither BG:Pre-Train nor BG:In-Train. We conduct the ablation study on NELL-One with the metric Hits@10; the results are shown in Table 5. Table 5 shows that removing the gradient meta decreases performance by 29.3% and 15% under the two dataset settings, and further removing the relation meta decreases performance by 55% and 72% compared to the standard results. Thus both relation meta and gradient meta contribute significantly, and relation meta contributes more than gradient meta. Without gradient meta and relation meta, no relation-specific meta information is transferred in the model and it almost does not work. This also illustrates that relation-specific meta information is important and effective for the few-shot link prediction task.
We have shown that both relation meta and gradient meta contribute to few-shot link prediction. But are there any requirements for MetaR to ensure performance on few-shot link prediction? We analyze this from two angles based on the results: the sparsity of entities and the number of tasks in the training set. The sparsity of entities: we notice that the best results for NELL-One and Wiki-One appear under different dataset settings. On NELL-One, MetaR performs better with the BG:In-Train setting, while on Wiki-One it performs better with BG:Pre-Train, and the performance difference between the two settings is more significant on Wiki-One. Most datasets for few-shot tasks are sparse, as are NELL-One and Wiki-One, but the entity sparsity in these two datasets is still significantly different; this is especially reflected in the proportion of entities that appear in only one triple in the training set: 82.8% and 37.1% in Wiki-One and NELL-One respectively. Entities with only one triple during training make it impossible for MetaR to learn good representations for them, because entity embeddings rely heavily on the triples related to them in MetaR; based on only one triple, the learned entity embeddings will carry a lot of bias. Knowledge graph embedding methods can learn better embeddings than MetaR for those one-shot entities, because there the entity embeddings can be corrected by the embeddings of relations that connect to them, while in MetaR they cannot. This is why the best performance occurs under the BG:Pre-Train setting on Wiki-One: pre-trained entity embeddings help MetaR overcome the low quality of one-shot entities. The number of tasks: from the comparison of MetaR's performance with and without the background dataset setting on NELL-One, we find that the number of tasks affects MetaR's performance significantly. With BG:In-Train there are 321 tasks during training and MetaR achieves 0.401 on Hits@10, while without background knowledge there are 51 tasks, 270 fewer, and MetaR achieves 0.279. This makes it reasonable that MetaR achieves its best performance with BG:In-Train on NELL-One: even though NELL-One has 37.1% one-shot entities, adding background knowledge into the dataset increases the number of training tasks significantly, which compensates for the sparsity problem and contributes more to the task. Thus we conclude that both the sparsity of entities and the number of tasks affect the performance of MetaR. Generally, with more training tasks MetaR performs better, and for extremely sparse datasets pre-trained entity embeddings are preferred. We propose a meta relational learning framework for few-shot link prediction in KGs, and we design our model to transfer relation-specific meta information from support set to query set: specifically, using relation meta to transfer common and important information, and using gradient meta to accelerate learning. Compared to GMatching, the only other method for this task, our method MetaR achieves better performance and is also independent of background knowledge graphs. Based on the experimental results, we find that the performance of MetaR is affected by the number of training tasks and the sparsity of entities. In the future we may consider obtaining more valuable information about sparse entities for few-shot link prediction in KGs. We want to express gratitude to the anonymous reviewers for their hard work and kind comments, which will further improve our work in the future. This work is funded by NSFC 91846204/61473260, national key research program YS2018YFB140004, and the Alibaba CangJingGe (Knowledge Engine) Research Plan. </s> |
<s>[INST] SimplerVoice: A Key Message&Visual Description Generator System for Illiteracy [/INST] Illiteracy has been one of the most serious, pervasive problems all over the world. According to the U.S. Department of Education's National Center for Education Statistics, approximately 32 million adults in the United States are not able to read, which is about 14% of the entire adult population BIBREF0. Additionally, 44% of the 2.4 million students in U.S. federally funded adult education programs are English-as-a-second-language (ESL) students, and about 185,000 of them are at the lowest ESL level, beginning literacy BIBREF1. While low-literate adults lack the ability to read and to understand text, low-literate ESL adult learners in particular also face the dual challenge of developing basic literacy skills, which include decoding, comprehending and producing print, along with English proficiency, and they represent different nationalities and cultural backgrounds BIBREF2. Hence, illiteracy is a significant barrier that results in a person's struggling in every aspect of his or her daily life. While there is no solution that completely solves the illiteracy problem, recent developments in data science and artificial intelligence have brought a great opportunity to study how to support low-literate people in their lives. In this work we propose SimplerVoice, a system that is able to generate key messages and visual descriptions for illiteracy. SimplerVoice can present easier-to-understand representations of complex objects to low-literate adult users, which helps them gain more confidence in navigating their own daily lives. While recent technologies such as Google Goggles, Amazon's Flow, etc. have proposed methods to parse complex objects into object names using image recognition and augmented reality techniques, and then to search for URLs of the objects' information, the main challenges for SimplerVoice are to generate and retrieve simple yet informative text and visual descriptions for illiterate people. This includes supporting adult basic education (ABE) and English second-language acquisition (SLA) training by applying natural language processing and information retrieval techniques, such as automatically generating sensible text, word-sense-disambiguation and image-sense-disambiguation mechanisms, and retrieving the optimal visual components. We propose the overall framework and demonstrate the system in a case study of grocery shopping, where SimplerVoice generates key text and a visual manual of how to use grocery products. A system prototype is also provided, and the empirical evaluation shows that SimplerVoice is able to provide users with simple text and visual components which adequately convey the products' usage. The organization of the paper is as follows. First, we give a quick review of previous work in the text-to-image synthesis field in Section SECREF2. In Section SECREF3 we present our system design, including its 4 parts (Input Retrieval, Object2Text, Text2Visual, and Output Display), along with the challenges of each component and the proposed solutions. We report the empirical evaluation of the proposed methods using real-world datasets for a case study in Section SECREF4. Finally, Section SECREF5 concludes this paper and states future work directions. In the fields of ABE and SLA, researchers have conducted a number of studies to assist low-literate learners in their efforts to acquire literacy and language skills by reading interventions and by providing specific instruction through local education agencies,
community colleges and educational organizations BIBREF3 BIBREF1 In augmentative and alternative communication AAC study texttopicture systems were proposed in BIBREF4 BIBREF5 BIBREF4 used a lookup table to transliterate each word in a sentence into an icon which resulted in a sequence of icons Because the resulting icons sequence might be difficult to comprehend the authors in BIBREF5 introduced a system using a concatenative or collage approach to select and display the pictures corresponding to the text To generate images from text the authors in BIBREF6 proposed an approach to automatically generate a large number of images for specified object classes that downloads all contents from a Web search query then removes irrelevant components and reranks the remainder However the study did not work on actionobject interaction classes which might be needed to describe an object Another direction is to link the text to a database of pictographs BIBREF7 introduced a texttopictograph translation system that is used in an online platform for augmentative and alternative communication The texttopictograph was built and evaluated on email text messages Furthermore an extended study of this work was provided in BIBREF8 which improved the Dutch texttopictograph through word sense disambiguation Recently there have been studies that proposed to use deep generative adversarial networks to perform texttoimage synthesis BIBREF9 BIBREF10 However these techniques might still have the limitation of scalability or image resolution restriction In this section we describe the system design and workflow of SimplerVoice Figure FIGREF1 SimplerVoice has 4 main components input retrieval object2text text2visual and output display Figure FIGREF1 provides the overall structure of SimplerVoice system Given an object as the target SimplerVoice first retrieves the target input in either of 3 representations 1 objects title as text 2 objects shape as image or 3 other forms eg objects information from scanned barcode speech from users etc Based on the captured input the system then generates a query stringsequence of text which is the key message describing the objects usage Due to lowliterates lack of reading capability the generated text requires not only informativeness but also simplicity and clarity Therefore we propose to use the SVO querys canonical representation as below Subject Verbing with Object TypeCategory The intuition of this query representation is that the generated key message should be able to describe the action of a person using or interacting with the target object Moreover the simple SVO model has been proposed to use in other studies BIBREF11 BIBREF12 BIBREF13 since it is able to provide adequate semantics meaning The detail of generating the SVO query is provided in Section SECREF3 Once the query is constructed SimplerVoice converts the query text into visual forms There is a variety of visual formats to provide users photos icons pictographs etc These visual components can be obtained by different means such as using search engine mapping queryontology to a database of images However the key point is to choose the optimal display for illiteracy which is described in Section SECREF12 The result of SimplerVoice is provided further in Section SECREF4 This section discusses the process of generating key message from the objects input Based on the retrieved input we can easily obtain the objects title through searching in database or using search engine hence we assume that the input of object2text 
This section discusses the process of generating the key message from the object's input. Based on the retrieved input, we can easily obtain the object's title, either by searching a database or by using a search engine; hence, we assume that the input of object2text is the object's title. The workflow of object2text is provided in Figure FIGREF4; the S-V-O query is constructed by the three steps below.

In order to find the object type, SimplerVoice first builds an ontology-based knowledge tree. The system then maps the object to one of the tree's leaf nodes based on the object's title. For instance, given the object title "Thomas' Plain Mini Bagels", SimplerVoice automatically determines that the object category is "bagel". Note that both the knowledge tree and the mapping between object and object category are obtained through text-based searching, crawling the web, or semantic-web content. Figure FIGREF6 shows an example of the subtree for the object category "bagel". While the mapped leaf node is the O in our S-V-O model, the parent nodes describe more general object categories, and the neighbors indicate other object types similar to the input object. The input object's type, the direct parent's category, and the neighbors are all passed to the next step: generating verbs (V).

We propose two methods to generate suitable verbs for the target object: heuristics-based generation and an n-grams model. In detail, SimplerVoice has a set of rule-based heuristics for objects. For instance, if the object belongs to a food / drink category, the verb is generated as "eat" / "drink"; similarly, the verb "play" is retrieved if the input object falls into a toy category. However, due to the complexity of object types, the heuristics-based approach might not cover all object contexts. To address this, an n-grams model is applied to generate a set of verbs for the target object. An n-gram is a contiguous sequence of n items from a given string of speech or text; n-grams models have been used extensively for various tasks in text mining and natural language processing BIBREF14, BIBREF15. Here, we use the Google Books n-grams database BIBREF16, BIBREF17 to generate a set of verbs corresponding to the input object's usage. Given a noun, the n-grams model provides the set of words that most frequently appear with the noun in the Google Books database; for example, "eaten", "toasted", "are", etc. are words that are usually used with "bagel". To obtain the right verb form after retrieving the words from the n-grams model, SimplerVoice performs word stemming BIBREF18 on the n-grams output.

Word-sense disambiguation. In the real world, a word can have multiple meanings, which may affect the process of retrieving the right verb set; indeed, word-sense disambiguation is a challenging problem in natural language processing. An example of this ambiguity is the object "cookie": the word has two meanings, one a small flat sweet food made from flour and sugar (the context of "biscuit"), the other a piece of information stored on your computer about Internet documents that you have looked at (the context of "computing"). Each meaning results in a different verb list, such as {eat, bake} for the biscuit "cookie" and {use, store} for the computing "cookie". To resolve the ambiguity, we take advantage of the built ontology tree: SimplerVoice uses the joint verb set of three types of nouns (the input object, the parent, and the neighbors), since these three noun types always share the same ontological context. Equation EQREF8 shows the word-sense disambiguation mechanism, where $V(\cdot)$ indicates the verb set of a noun generated by the heuristics and the n-grams model:

$V^{*}(o) = V(o) \cap V(\mathit{parent}(o)) \cap V(\mathit{neighbors}(o))$ (EQREF8)
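The verb-generation and disambiguation step can be sketched as follows. The toy n-gram counts, category names, and the `top_k` cutoff are hypothetical stand-ins for the Google Books n-grams statistics and the system's actual parameters.

```python
from collections import Counter

# Hypothetical verb counts per noun; in practice these would be derived
# from the Google Books n-grams database, with stemming applied.
NGRAM_VERBS = {
    "cookie":  Counter({"eat": 90, "bake": 70, "store": 40, "use": 35}),
    "biscuit": Counter({"eat": 80, "bake": 60}),
    "cracker": Counter({"eat": 75}),
}

def verbs(noun: str, top_k: int = 10) -> set:
    """Top verbs most frequently co-occurring with the noun."""
    return {v for v, _ in NGRAM_VERBS.get(noun, Counter()).most_common(top_k)}

def disambiguated_verbs(obj: str, parent: str, neighbors: list) -> set:
    """Joint verb set over the object, its parent, and its neighbors (Eq. EQREF8)."""
    joint = verbs(obj) & verbs(parent)
    for n in neighbors:
        joint &= verbs(n)
    return joint

print(disambiguated_verbs("cookie", "biscuit", ["cracker"]))  # {'eat'}
```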
Low-informative verbs. To ensure the quality of the generated verbs, SimplerVoice maintains a list of restricted verbs to be filtered out: the n-grams model produces many general, low-informative verbs, such as "be", "have", "use", etc., because these verbs are heavily used in daily sentences and conversation. The restricted-verb list helps ensure the right level of specificity. Hence, we modify EQREF8 into EQREF9, where $R$ denotes the restricted-verb list; the process of word-sense disambiguation and low-informative verb filtering is illustrated in Figure FIGREF10:

$V^{*}(o) = \big( V(o) \cap V(\mathit{parent}(o)) \cap V(\mathit{neighbors}(o)) \big) \setminus R$ (EQREF9)

The approach to generating the subject (S) is similar to that for the verb (V): SimplerVoice again uses heuristics and the n-grams model to find a suitable actor S. For the heuristics, we apply rule-based methods to the object's title and category, since some objects are used only by a specific group of subjects. For example, if the object's title contains the word "woman" / "women", the S will be "Woman"; if the object belongs to the baby-product category, the S will be "Baby". Additionally, the n-grams model generates pronouns that frequently appear with the noun O; the pronoun output helps identify the right subject S (e.g., she / woman / girl, he / man / boy, etc.). If both "she" and "he" appear in the generated pronoun set, the system picks either of them.

Once the S-V-O is generated, text2visual provides users with visual components that convey the S-V-O text's meaning. One simple solution for text2visual is to utilize existing Web search engines: SimplerVoice retrieves the top image results using the S-V-O as the search query. However, the results returned by a search engine can exhibit image-sense ambiguity; for instance, if the object is "Swiss Cheese", a user might not distinguish between images of Swiss cheese and of cheese in general. To mitigate this, the authors of BIBREF5 suggest displaying multiple images to guide human perception to the right meaning of the target object. Additionally, since SimplerVoice is designed for illiteracy, the system needs to display the visual components best suited to low-literate people. BIBREF19 studies the effectiveness of different types of audio-visual representations for illiterate computer users: while there is no consistent difference between dynamic and static imagery (results are mixed across use cases), hand-drawn images or cartoons are shown to be easier for low-literate users to understand than photorealistic representations. Therefore, SimplerVoice also provides users with a pictograph display alongside images. We use the Sclera database of pictographs BIBREF20, mapping each S-V-O word to a corresponding Sclera pictograph file. The details of this mapping are discussed in BIBREF7; intuitively, the system first manually links a subset of words with pictograph filenames, and then, if a manual link is missing, the word is linked to a close synset using WordNet (Figure FIGREF15).
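The two-step mapping just described can be sketched as follows; the manual links and filenames are hypothetical, and the WordNet fallback assumes NLTK with the WordNet data installed (the actual system follows BIBREF7, which may differ in detail).

```python
from nltk.corpus import wordnet as wn  # assumes NLTK + WordNet data installed

# Hypothetical manual links from words to Sclera pictograph filenames.
MANUAL_LINKS = {"woman": "woman.png", "eat": "to-eat.png", "cookie": "cookie.png"}

def pictograph(word: str):
    """Map a word to a Sclera pictograph file: manual link first,
    then lemmas of close WordNet synsets as a fallback."""
    if word in MANUAL_LINKS:
        return MANUAL_LINKS[word]
    for synset in wn.synsets(word):          # senses of the word
        for lemma in synset.lemma_names():   # synonyms sharing that sense
            if lemma.lower() in MANUAL_LINKS:
                return MANUAL_LINKS[lemma.lower()]
    return None  # no pictograph found; caller can fall back to photos

print(pictograph("eat"))  # manual hit -> "to-eat.png"
```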
In this section we demonstrate the effectiveness of the SimplerVoice system in a case study of grocery shopping. The section is organized as follows: first, we describe the real dataset and setup that SimplerVoice uses; second, we present the prototype system, an application built for end-users; finally, we show the results of SimplerVoice along with users' feedback.

For the grocery-shopping case study we use a database of INLINEFORM0 product descriptions crawled from multiple sources. Each product description contains four fields: UPC code, product title, the ontology path of the product category, and the URL of the product. Since it is recommended to utilize various technological devices, such as computers or smartphones, in adult ESL literacy education BIBREF21, we build a mobile application of SimplerVoice for illiterate users. The goal of SimplerVoice is to provide users with a key message and simple visual components explaining how to use a product, given the scanned barcode (UPC code) or the product name retrieved by parsing product images taken with the end-user's phone camera. Section SECREF17 describes the SimplerVoice application.

There are two ways to provide the object's input through the SimplerVoice application: filling in text, or taking photos of the barcode or product label (Figure FIGREF18). SimplerVoice automatically reads the target grocery product's name and proceeds to the next stage. Based on the built-in ontology tree, SimplerVoice then finds the object's category and the parent and neighboring nodes. The next step is to generate the S-V-O message (e.g., Table TABREF19) and the visual description (e.g., Figure FIGREF20) of the product's usage. Figure FIGREF22 shows an example of the output of the SimplerVoice system for the product "HEB Bakery Cookies by the Pound" from a grocery store: (1) the product description, (2) key messages, and (3) visual components. The product description includes the product's categories as found on the grocery store's website BIBREF22, the parent nodes, and the neighbors (similar products' categories). The S-V-O query, or key message, for "HEB Bakery Cookies by the Pound" is generated as "Woman eating cookies". Additionally, we support language translation into Spanish for convenience, and provide different levels of reading: each reading level has a different degree of difficulty, with higher levels using more advanced text. The reason for breaking the text into levels is to encourage low-literate users to learn to read. Next to the key messages are the images and pictographs.

To evaluate our system, we compared SimplerVoice to the original product description and package (baseline 1) and to the top image results from search engines for the same product (baseline 2). Given a set of products, we generated the key message and visual description of each product using the three approaches below; an example of the three approaches is provided in Fig. FIGREF23.

Baseline 1: we captured and displayed the product package photos and the product title text as the product description.

Baseline 2: the product description was retrieved by a search engine using the product title and presented to users as the top image results from Google and Bing; we also provided the product title along with the images.

SimplerVoice: we showed the generated key messages (Tab. TABREF19) and the visual description, comprising two components, photorealistic images and pictographs (Fig. FIGREF20), from the SimplerVoice system.

Intuitively, baseline 1 shows how much information a user would receive from the product's package without prior knowledge of the product, while baseline 2 may provide additional information by showing top images from search engines. With baseline 2 we attempt to measure whether merely adding relevant or similar product images is sufficient to improve end-users' ability to comprehend the product's intended use. With SimplerVoice, we test whether our system can provide users with the proper visual components to understand the product's usage based on the proposed techniques, and we measure the usefulness of SimplerVoice's generated description.
We evaluated the effectiveness (interpretability) of the three approaches above by conducting a controlled user study with 15 subjects who were Vietnamese natives and did not speak or comprehend English. A dataset of 20 random US products, including product titles, UPC codes, and product package images, was chosen for display in the user study. Note that the 15 participating subjects had not used the 20 products before and were also unfamiliar with packaged products in general, including the chosen 20; hence they were "illiterate" both in terms of comprehending English and in terms of having used any of the products, although they might be literate in Vietnamese. Each participant was shown the product description generated by each approach and was asked to identify what the product was and how to use it. The users' responses were recorded in Vietnamese and assigned scores, according to how well they matched the correct answer, by three experts who were bilingual in English and Vietnamese. In this study we used the mean opinion score (MOS) BIBREF23, BIBREF24 to measure effectiveness, i.e., how similar a response was to the correct product usage. The MOS range is 1-5 (1 = Bad, 2 = Poor, 3 = Fair, 4 = Good, 5 = Excellent), with 1 meaning incorrect interpretation of the product usage (the lowest level of effectiveness) and 5 meaning correct interpretation (the highest). The scores assigned to responses were aggregated over all participating subjects and over the three experts; the resulting scores are reported in the next section.

Result. Table TABREF21 shows the MOS scores indicating the performance of the three approaches. The mean MOS of baseline 1 is the lowest (2.57; standard deviation 1.17); the mean of baseline 2 is slightly higher (2.86; stdev 1.27); and the SimplerVoice score is the highest (4.82; stdev 0.35), making it the most effective approach for conveying product usage. Additionally, a paired-samples t-test was conducted to compare the MOS scores of users' responses across all products for baseline 1 and SimplerVoice. There was a significant difference between baseline 1 (mean 2.57, stdev 1.17) and SimplerVoice (mean 4.82, stdev 0.35): t = 8.18224, p = 1.19747e-07. These results show a statistically significant difference in MOS means between baseline 1 and SimplerVoice, and that SimplerVoice is more effective than baseline 1 across different types of products.

Baseline 1 scores range from 1 to 4.25 over all products: some products are easy to guess from the package images (bagels, pretzels, soda, etc.), while other packages cause confusion (shoe dye, wax cubes, vinegar, etc.). For example, all participants recognized the "Always Bagels Cinnamon Raisin Bagels" product as a type of bread, to be eaten, using baseline 1, whereas the "ScentSationals Wild Raspberry Fragrance Wax Cubes" product was mostly misrecognized as a type of candy for eating. Baseline 2 scores range from 1 to 4.7 over all products. Baseline 2 scores higher than baseline 1 because users were provided more information via the top product images from the search engine. For instance, given the "Fiesta Cinnamon Sticks" product, most users' responses under baseline 1 were a type of pastry (cannoli) for eating; since baseline 2 provided more photos of cinnamon sticks without the packaging, users were able to recognize the product as cinnamon. However, baseline 2 is only slightly better than baseline 1, because search engines mostly return images similar to the product package and hence provide only a little additional information to the participants. SimplerVoice scores range from 3.75 to 5, higher than both baselines, and have a low standard deviation, indicating consistent effectiveness across different types of products.
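The reported significance test can be sketched as follows. The paper does not publish the 20 per-product MOS values, so the sampled numbers below are placeholders chosen only to make the snippet runnable; only the test itself (a paired-samples t-test) mirrors the evaluation above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder per-product MOS means, drawn from the reported score ranges.
baseline1 = rng.uniform(1.0, 4.25, size=20)
simplervoice = rng.uniform(3.75, 5.0, size=20)

t_stat, p_value = stats.ttest_rel(simplervoice, baseline1)
print(f"t = {t_stat:.3f}, p = {p_value:.3e}")  # paper: t = 8.18224, p = 1.2e-07
```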
While performing the user study, we also noticed that cultural differences are an important factor in the results. For example, the product with the lowest score is "Heinz Distilled White Vinegar", since some participants had never used vinegar before; these participants are from rural Northern Vietnam, where people might not know the product.

In this work we introduced SimplerVoice, a key-message and visual-description generator system for illiteracy. To the best of our knowledge, SimplerVoice is the first framework to combine multiple AI techniques, particularly from natural language processing and information retrieval, to support low-literate users, including low-literate ESL learners, in building confidence in their own lives, and to encourage them to improve their reading skills. Although awareness by itself does not solve the problem of illiteracy, the system can be placed in different contexts for educational goals; SimplerVoice can be a valuable tool for both educational systems and daily use. The SimplerVoice system was evaluated and shown to achieve a higher performance score than the other approaches. Moreover, we introduced the SimplerVoice mobile application, which has been used by participants in the Literacy Coalition of Central Texas's SPARK program BIBREF25. We received positive end-user feedback on the prototype and plan to add more features to SimplerVoice. One direction for future work is to improve the system's input retrieval so that SimplerVoice can automatically recognize an object from its shape; another is to extend the work to other real-world use cases and demonstrate its effectiveness in those case studies.

This research was conducted under the auspices of the IBM Science for Social Good initiative. The authors would like to thank Christian O. Harris and Heng Luo for discussions.
Modelling Semantic Categories using Conceptual Neighborhood

Vector space embeddings are commonly used to represent entities in fields such as machine learning (ML) BIBREF0, natural language processing (NLP) BIBREF1, information retrieval (IR) BIBREF2, and cognitive science BIBREF3. An important point, however, is that such representations usually represent both individuals and categories as vectors BIBREF4, BIBREF5, BIBREF6. Note that in this paper we use the term "category" to denote natural groupings of individuals, as the term is used in cognitive science, with "individuals" referring to the objects of the considered domain of discourse; for example, the individuals carrot and cucumber belong to the vegetable category. We use "entities" as an umbrella term covering both individuals and categories. Given that a category corresponds to a set of individuals (i.e., its instances), modelling categories as (possibly imprecise) regions in the embedding space seems more natural than using vectors. In fact, it has been shown that the vector representations of individuals that belong to the same category are indeed often clustered together in learned vector space embeddings BIBREF7, BIBREF8; the view of categories as regions is also common in cognitive science BIBREF3. However, learning region representations of categories is a challenging problem, because we typically only have a handful of examples of individuals that belong to a given category. One common assumption is that natural categories can be modelled using convex regions BIBREF3, which simplifies the estimation problem: based on this assumption, for instance, BIBREF9 modelled categories using Gaussian distributions and showed that these distributions can be used for knowledge base completion. Unfortunately, this strategy still requires a relatively large number of training examples to be successful.

When learning categories, however, humans do not rely only on examples. For instance, there is evidence that when learning the meaning of nouns, children rely on the default assumption that these nouns denote mutually exclusive categories BIBREF10. In this paper we take particular advantage of the fact that many natural categories are organized into so-called contrast sets BIBREF11: sets of closely related categories which exhaustively cover some sub-domain and which are assumed to be mutually exclusive, e.g., the set of all common color names, the set {fruit, vegetable}, or the set {NLP, IR, ML}. Categories from the same contrast set often compete for coverage; for instance, we can think of the NLP domain as consisting of research topics that involve processing textual information and that are not covered by the IR and ML domains. Categories which compete for coverage in this way are known as conceptual neighbors BIBREF12, e.g., NLP and IR, red and orange, fruit and vegetable. Note that the exact boundary between two conceptual neighbors may be vague; e.g., tomato can be classified as fruit or as vegetable.

In this paper we propose a method for learning region representations of categories which takes advantage of conceptual neighborhood, especially in scenarios where the number of available training examples is small. The main idea is illustrated in Figure FIGREF2, which depicts a situation where we are given some examples of a target category C as well as of some related categories $N_1, N_2, N_3, N_4$. If we have to estimate a region from the examples of C alone, the small elliptical region shown in red would be a reasonable choice.
More generally, a standard approach would be to estimate a Gaussian distribution from the given examples. However, vector space embeddings typically have hundreds of dimensions, while the number of known examples of the target category is often far lower (e.g., 2 or 3); in such settings we will almost inevitably underestimate the coverage of the category. In the example from Figure FIGREF2, if we take into account the knowledge that $N_1, N_2, N_3, N_4$ are conceptual neighbors of C, the much larger shaded region becomes a more natural choice for representing C. Indeed, the fact that, e.g., C and $N_1$ are conceptual neighbors suggests that any point in between the examples of these categories needs to be contained either in the region representing C or in the region representing $N_1$. In the spirit of prototype approaches to categorization BIBREF13, without any further information it makes sense to assume that their boundary lies more or less halfway between the known examples.

The contribution of this paper is twofold. First, we propose a method for identifying conceptual neighbors from text corpora: we essentially treat this problem as a standard text classification problem, relying on categories with large numbers of training examples to generate a suitable distant supervision signal. Second, we show that the predicted conceptual neighbors can effectively be used to learn better category representations.

In distributional semantics, categories are frequently modelled as vectors. For example, BIBREF14 study the problem of deciding, for a word pair (i, c), whether i denotes an instance of the category c, which they refer to as instantiation. They treat this problem as a binary classification problem, where, e.g., the pair (AAAI, conference) would be a positive example, while (conference, AAAI) and (New York, conference) would be negative examples. Different from our setting, their aim is thus essentially to model the instantiation relation itself, similar in spirit to how hypernymy has been modelled in NLP BIBREF15, BIBREF16. To predict instantiation, they use a simple neural network model which takes as input the word vectors of the input pair (i, c). They also experiment with an approach that instead models a given category as the average of the word vectors of its known instances, and found that this led to better results.

A few authors have already considered the problem of learning region representations of categories. Most closely related, BIBREF17 model ontology concepts using Gaussian distributions. In BIBREF18, a model is presented which embeds Wikipedia entities such that entities which have the same WikiData type are characterized by some region within a low-dimensional subspace of the embedding. Within the context of knowledge graph embedding, several approaches have been proposed that essentially model semantic types as regions BIBREF19, BIBREF20. A few approaches have also been proposed for modelling word meaning using regions BIBREF21, BIBREF22 or Gaussian distributions BIBREF23. Along similar lines, several authors have proposed approaches inspired by probabilistic topic modelling which model latent topics using Gaussians BIBREF24 or related distributions BIBREF25.

The notion of conceptual neighborhood, on the other hand, has been covered in most detail in the field of spatial cognition, starting with the influential work of BIBREF12. In computational linguistics, moreover, this representation framework aligns with lexical semantics traditions where word meaning is constructed in terms of semantic decomposition, i.e., lexical items being minimally decomposed into structured forms (or templates) rather than sets of features BIBREF26, effectively mimicking a sort of conceptual neighborhood.
In Pustejovsky's generative lexicon, a set of semantic devices is proposed which behave in semantics similarly to the way grammars do in syntax. Specifically, this framework considers the qualia structure of a lexical unit as a set of expressive semantic distinctions, the most relevant for our purposes being the so-called formal role, defined as that which distinguishes the object within a larger domain (e.g., shape or color). This semantic interplay between cognitive science and computational linguistics gave way to the term "lexical coherence", which has been used for contextualizing the meaning of words in terms of how they relate to their conceptual neighbors BIBREF27, or by providing expressive lexical semantic resources in the form of ontologies BIBREF28.

Our aim is to introduce a model for learning region-based category representations which can take advantage of knowledge about the conceptual neighborhood of a category. Throughout the paper we focus in particular on modelling categories from the BabelNet taxonomy BIBREF29, although the proposed method can be applied to any resource which (i) organizes categories in a taxonomy and (ii) provides examples of individuals that belong to these categories. Selecting BabelNet as our use case is a natural choice, however, given its large scale and the fact that it integrates many lexical and ontological resources. As the possible conceptual neighbors of a given BabelNet category C, we consider all its siblings in the taxonomy, i.e., all categories $C_1, \ldots, C_k$ which share a direct parent with C. To select which of these siblings are most likely to be conceptual neighbors, we look at mentions of these categories in a text corpus. As an illustrative example, consider the pair (hamlet, village) and the sentence "In British geography, a hamlet is considered smaller than a village and ...": from this sentence we can derive that hamlet and village are disjoint but closely related categories, suggesting that they are conceptual neighbors. However, training a classifier that can identify conceptual neighbors from such sentences is complicated by the fact that conceptual neighborhood is, to the best of our knowledge, not covered in any existing lexical resource, which means that large sets of training examples are not readily available. To address this lack of training data, we rely on a distant supervision strategy. The central insight is that, for categories with a large number of known instances, we can use the embeddings of these instances to check whether two categories are conceptual neighbors. In particular, our approach involves the following three steps:

1. Identify pairs of categories that are likely to be conceptual neighbors, based on the vector representations of their known instances.

2. Use the pairs from Step 1 to train a classifier that can recognize sentences which indicate that two categories are conceptual neighbors.

3. Use the classifier from Step 2 to predict which pairs of BabelNet categories are conceptual neighbors, and use these predictions to learn category representations.

Note that in Step 1 we can only consider BabelNet categories with a large number of instances, while the end result of Step 3 is that we can predict conceptual neighborhood for categories with only few known instances. We now discuss the three aforementioned steps one by one.
Our aim here is to generate distant supervision labels for pairs of categories, indicating whether they are likely to be conceptual neighbors; these labels will then be used in Section SECREF12 to train a classifier for predicting conceptual neighborhood from text. Let A and B be siblings in the BabelNet taxonomy. If enough examples of individuals belonging to these categories are provided in BabelNet, we can use these instances to estimate high-quality representations of A and B, and thus estimate whether they are likely to be conceptual neighbors. In particular, we split the known instances of A into a training set $I_A^{\textit{train}}$ and a test set $I_A^{\textit{test}}$, and similarly for B. We then train two types of classifiers. The first classifier estimates a Gaussian distribution for each category, using the training instances in $I_A^{\textit{train}}$ and $I_B^{\textit{train}}$ respectively; this should provide a reasonable representation of A and B regardless of whether they are conceptual neighbors. In the second approach, we first learn a Gaussian distribution from the joint set of training examples $I_A^{\textit{train}} \cup I_B^{\textit{train}}$ and then train a logistic regression classifier to separate instances of A and B. Note that in this way we directly impose the requirement that the regions modelling A and B are adjacent in the embedding space, intuitively corresponding to two halves of a Gaussian distribution. We can thus expect the second approach to yield better predictions than the first if A and B are conceptual neighbors, and worse predictions if they are not. We therefore propose to use the relative performance of the two classifiers as the distant supervision signal for predicting conceptual neighborhood. We now describe the two classification models in more detail, after which we explain how these models are used to generate the distant supervision labels.

Gaussian classifier. The first classifier follows the basic approach from BIBREF17, where Gaussian distributions were similarly used to model WikiData categories. In particular, we estimate the probability that an individual e, with vector representation $\mathbf{e}$, is an instance of the category A as

$P(A \mid \mathbf{e}) = \frac{\lambda_A\, f(\mathbf{e} \mid A)}{\lambda_A\, f(\mathbf{e} \mid A) + (1 - \lambda_A)\, f(\mathbf{e})}$

where $\lambda_A$ is the prior probability of belonging to category A, the likelihood $f(\mathbf{e} \mid A)$ is modelled as a Gaussian distribution, and the background density $f(\mathbf{e})$ will also be modelled as a Gaussian distribution. Intuitively, we think of the Gaussian $f(\cdot \mid A)$ as defining a soft region modelling the category A. Given the high-dimensional nature of typical vector space embeddings, we use a mean-field approximation:

$f(\mathbf{e} \mid A) = \prod_{i=1}^{d} f_i(e_i \mid A)$

where d is the number of dimensions of the vector space embedding, $e_i$ is the i-th coordinate of $\mathbf{e}$, and $f_i(\cdot \mid A)$ is a univariate Gaussian. To estimate the parameters $\mu_i$ and $\sigma_i^2$ of this Gaussian, we use a Bayesian approach with a flat prior:

$f_i(e_i \mid A) = \iint G(e_i; \mu_i, \sigma_i^2)\, \text{NI}\chi^2(\mu_i, \sigma_i^2)\, d\mu_i\, d\sigma_i^2$

where $G(e_i; \mu_i, \sigma_i^2)$ denotes the Gaussian distribution with mean $\mu_i$ and variance $\sigma_i^2$, and $\text{NI}\chi^2$ is the normal inverse-chi-squared distribution. In other words, instead of using a single estimate of the mean $\mu_i$ and variance $\sigma_i^2$, we average over all plausible choices of these parameters. The use of the normal inverse-chi-squared prior on $(\mu_i, \sigma_i^2)$ is a common choice, with the advantage that the above integral simplifies to a Student-t distribution. In particular, we have

$f_i(e_i \mid A) = t_{n-1}\!\Big(e_i;\ \overline{x}_i,\ \big(1 + \tfrac{1}{n}\big) s_i^2\Big)$

where we assume $I_A^{\textit{train}} = \lbrace a^1, \ldots, a^n \rbrace$, $a_i^j$ denotes the i-th coordinate of the vector embedding of $a^j$, $\overline{x}_i = \frac{1}{n} \sum_{j=1}^{n} a_i^j$, $s_i^2 = \frac{1}{n-1} \sum_{j=1}^{n} (a_i^j - \overline{x}_i)^2$, and $t_{n-1}$ is the Student t-distribution with $n-1$ degrees of freedom. The probability $f(\mathbf{e})$ is estimated in a similar way, but using all BabelNet instances. The prior $\lambda_A$ is tuned on a validation set. Finally, we classify e as a positive example if $P(A \mid \mathbf{e}) > 0.5$.
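A compact sketch of this classifier is shown below. The mixture form of $P(A \mid \mathbf{e})$ and the Student-t predictive densities follow the description above; the variable names, the background-sample interface, and the default prior value are our own illustrative choices.

```python
import numpy as np
from scipy import stats

def predictive_log_density(X: np.ndarray, e: np.ndarray) -> float:
    """Mean-field log density of e: per-dimension Student-t posterior
    predictive (flat prior) with n-1 degrees of freedom, summed over dims."""
    n = X.shape[0]
    mean = X.mean(axis=0)
    var = X.var(axis=0, ddof=1)                      # sample variance s_i^2
    scale = np.sqrt((1.0 + 1.0 / n) * var)
    return stats.t.logpdf(e, df=n - 1, loc=mean, scale=scale).sum()

def p_category(e, X_cat, X_background, prior=0.1):
    """P(A | e): mixture weight of the category model vs. the background
    model; `prior` plays the role of lambda_A, tuned on validation data."""
    log_a = np.log(prior) + predictive_log_density(X_cat, e)
    log_b = np.log(1 - prior) + predictive_log_density(X_background, e)
    m = max(log_a, log_b)                            # log-sum-exp trick
    return np.exp(log_a - m) / (np.exp(log_a - m) + np.exp(log_b - m))

# e is classified as an instance of A whenever p_category(...) > 0.5.
```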
GLR classifier. We first train a Gaussian classifier as above, but now using the training instances of both A and B; let us denote the probability predicted by this classifier as $P(A \cup B \mid \mathbf{e})$. The intuition is that entities for which this probability is high should be instances of either A or B, provided that A and B are conceptual neighbors. If, on the other hand, A and B are not conceptual neighbors, relying on this assumption is likely to lead to errors (i.e., there may be individuals whose representation lies between A and B but which are instances of neither), which is exactly what we need for generating the distant supervision labels. If $P(A \cup B \mid \mathbf{e}) > 0.5$, we assume that e belongs to either A or B. To distinguish between these two cases, we train a logistic regression classifier using the instances from $I_A^{\textit{train}}$ as positive examples and those from $I_B^{\textit{train}}$ as negative examples. Putting everything together, we classify e as a positive example for A if $P(A \cup B \mid \mathbf{e}) > 0.5$ and e is classified as a positive example by the logistic regression classifier; similarly, we classify e as a positive example for B if $P(A \cup B \mid \mathbf{e}) > 0.5$ and e is classified as a negative example by the logistic regression classifier. We refer to this classification model as GLR (Gaussian Logistic Regression).

To generate the distant supervision labels, we consider a ternary classification problem for each pair of siblings A and B (only disjoint pairs are considered): the task is to decide, for a given individual e, whether it is an instance of A, an instance of B, or an instance of neither. For the Gaussian classifier, we predict A iff $P(A \mid \mathbf{e}) > 0.5$ and $P(A \mid \mathbf{e}) > P(B \mid \mathbf{e})$. For the GLR classifier, we predict A if $P(A \cup B \mid \mathbf{e}) > 0.5$ and the associated logistic regression classifier predicts A. The condition for predicting B is analogous. The test examples for this ternary classification problem consist of the elements of $I_A^{\textit{test}}$ and $I_B^{\textit{test}}$, together with negative examples, i.e., individuals that are instances of neither A nor B. To select these negative examples, we first sample instances from categories that have the same parent as A and B, choosing as many such negative examples as we have positive examples; second, we sample the same number of negative examples from randomly selected categories in the taxonomy. Let $F_1(A, B)$ be the F1 score achieved by the Gaussian classifier and $F_2(A, B)$ the F1 score of the GLR classifier. Our hypothesis is that $F_1(A, B) \ll F_2(A, B)$ suggests that A and B are conceptual neighbors, while $F_1(A, B) \gg F_2(A, B)$ suggests that they are not. This intuition is captured by the score

$s(A, B) = \frac{F_2(A, B)}{F_1(A, B) + F_2(A, B)}$

and we consider A and B to be conceptual neighbors if $s(A, B) \gg 0.5$.
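Building on the previous sketch, the GLR classifier and the distant-supervision score might be implemented as follows; `p_category` is the function defined above, and the interfaces are again illustrative rather than the authors' exact code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_glr(X_a: np.ndarray, X_b: np.ndarray):
    """Gaussian over the union of A and B plus a logistic boundary."""
    union = np.vstack([X_a, X_b])
    labels = np.array([1] * len(X_a) + [0] * len(X_b))  # 1 = A, 0 = B
    boundary = LogisticRegression(max_iter=1000).fit(union, labels)
    return union, boundary

def glr_predict(e, union, boundary, X_background, prior=0.1):
    """Ternary decision: 'A', 'B', or 'neither' for embedding e."""
    if p_category(e, union, X_background, prior) <= 0.5:
        return "neither"
    return "A" if boundary.predict(e.reshape(1, -1))[0] == 1 else "B"

def neighborhood_score(f1: float, f2: float) -> float:
    """s(A,B) from the F1 scores of the Gaussian (f1) and GLR (f2) models."""
    return f2 / (f1 + f2)
```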
We now consider the following problem: given two BabelNet categories A and B, predict whether they are likely to be conceptual neighbors, based on the sentences from a text corpus in which they are both mentioned. To train such a classifier, we use the distant supervision labels from Section SECREF8 as training data; once this classifier has been trained, we can use it to predict conceptual neighborhood for categories for which only few instances are known. To find sentences in which both A and B are mentioned, we rely on a disambiguated text corpus in which mentions of BabelNet categories are explicitly tagged; such a corpus can be constructed automatically using methods such as the one proposed by BIBREF30, for instance. For each pair of candidate categories we thus retrieve all sentences where they co-occur. Next, we represent each extracted sentence as a vector, for which we considered two possible strategies.

Word embedding averaging: we compute a sentence embedding by simply averaging the word embeddings of each word in the sentence. Despite its simplicity, this approach has been shown to provide competitive results BIBREF31, in line with more expensive and sophisticated methods, e.g., those based on LSTMs.

Contextualized word embeddings: recently proposed contextualized embeddings BIBREF32, BIBREF33 have already proven successful in a wide range of NLP tasks. Instead of providing a single vector representation for all words irrespective of context, contextualized embeddings predict a representation for each word occurrence which depends on its context. These representations are usually based on pre-trained language models. In our setting, we extract the contextualized embeddings of the two candidate categories within the sentence; to obtain these embeddings we use the last layer of the pre-trained language model, which has been shown to be most suitable for capturing semantic information BIBREF34, BIBREF35. We then use the concatenation of the two contextualized embeddings as the representation of the sentence.

For both strategies, we average the sentence-level representations across all sentences in which the same two candidate categories are mentioned. Finally, we train an SVM classifier on the resulting vectors to predict, for a pair of siblings (A, B), whether $s(A, B) > 0.5$ holds.
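A sketch of the first (averaging) strategy is given below; `glove` is assumed to be a word-to-vector lookup for the pre-trained embeddings, and out-of-vocabulary handling is simplified.

```python
import numpy as np

def sentence_vector(tokens, glove):
    """Average the word vectors of one sentence (skipping unknown words)."""
    vecs = [glove[t] for t in tokens if t in glove]
    return np.mean(vecs, axis=0)

def pair_representation(sentences, glove):
    """Average sentence vectors over all sentences mentioning both
    categories; an SVM is then trained on these vectors to predict
    whether s(A, B) > 0.5."""
    return np.mean([sentence_vector(s, glove) for s in sentences], axis=0)
```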
Let C be a category and assume that $N_1, \ldots, N_k$ are conceptual neighbors of this category. We can then model C by generalizing the idea underpinning the GLR classifier. In particular, we first learn a Gaussian distribution from all the instances of C and $N_1, \ldots, N_k$; this Gaussian model allows us to estimate the probability $P(C \cup N_1 \cup \ldots \cup N_k \mid \mathbf{e})$ that e belongs to one of C, $N_1, \ldots, N_k$. If this probability is sufficiently high (i.e., higher than 0.5), we use a multinomial logistic regression classifier to decide which of these categories e is most likely to belong to. Geometrically, we can think of the Gaussian model as capturing the relevant local domain, while the multinomial logistic regression model carves up this local domain, as in Figure FIGREF2. In practice, we do not know with certainty which categories are conceptual neighbors of C; instead, we select the k categories (for some fixed constant k), among all siblings of C, which are most likely to be conceptual neighbors according to the text classifier from Section SECREF12.

The central problem we consider is category induction: given some instances of a category, predict which other individuals are likely to be instances of that category. When enough instances are given, standard approaches such as the Gaussian classifier above, or even a simple SVM classifier, can perform well on this task. For many categories, however, we only have access to a few instances, either because the considered ontology is highly incomplete or because the considered category has only few actual instances. The main research question we want to analyze is whether predicted conceptual neighborhood can help to obtain better category induction models in such cases. Section SECREF16 provides details of the experimental setting; Section SECREF23 discusses our main quantitative results; finally, Section SECREF26 presents a qualitative analysis.

As explained in Section SECREF3, we use BabelNet BIBREF29 as our reference taxonomy. BabelNet is a large-scale, full-fledged taxonomy consisting of heterogeneous sources such as WordNet BIBREF36, Wikidata BIBREF37, and WiBi BIBREF38, making it suitable for testing our hypothesis in a general setting.

Vector space embeddings: both the distant labelling method from Section SECREF8 and the category induction model itself need access to vector representations of the considered instances. To this end, we use the NASARI vectors, which have been learned from Wikipedia and are already linked to BabelNet BIBREF1.

BabelNet category selection: to test the proposed category induction model, we consider all BabelNet categories with fewer than 50 known instances, motivated by the view that conceptual neighborhood is most useful when the number of known instances is small. For each of these categories, we split the set of known instances into 90% for training and 10% for testing; to tune the prior probability $\lambda_A$ for these categories, we hold out 10% of the training set as a validation set. The conceptual neighbors among the considered test categories are predicted using the classifier from Section SECREF12. To obtain the distant supervision labels needed to train that classifier, we consider all BabelNet categories with at least 50 instances; this ensures that the distant supervision labels are sufficiently accurate and that there is no overlap with the categories used for evaluating the model.

Text classifier training: as the text corpus from which to extract sentences for category pairs, we use the English Wikipedia, in particular the dump of November 2014, for which a disambiguated version is available online; this disambiguated version was constructed using the shallow disambiguation algorithm of BIBREF30. As explained in Section SECREF12, for each pair of categories we extract all sentences where they co-occur, allowing a window of at most 10 tokens between their occurrences and including 10 tokens to the left and right of the first and second category, respectively. For the averaging-based sentence representations we use the 300-dimensional pre-trained GloVe word embeddings BIBREF39; to obtain the contextualized representations we use the pre-trained 768-dimensional BERT-base model BIBREF33. The text classifier is trained on 3,552 categories which co-occur at least once in the same sentence in the Wikipedia corpus, using the corresponding scores s(A, B) as the supervision signal (see Section SECREF12). To inspect how well conceptual neighborhood can be predicted from text, we performed 10-fold cross-validation over the training data, removing the unclear cases for this experiment (i.e., category pairs with s(A, B) scores between 0.4 and 0.6). We also considered a simple baseline based on the number of co-occurring sentences for each pair, which we might expect to be a reasonably strong indicator of conceptual neighborhood, i.e., the more often two categories are mentioned in the same sentence, the more likely they are to be conceptual neighbors. The results of this cross-validation experiment are summarized in Table TABREF22. Perhaps surprisingly, the word-vector averaging method is more robust overall, while being considerably faster than the method using BERT. The results also confirm the intuition that the number of co-occurring sentences is positively correlated with conceptual neighborhood, although the results for this baseline are clearly weaker than those for the proposed classifiers.
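The k-neighbor category model just described can be sketched by generalizing the GLR code above; `p_category` is again the function from the earlier sketch, and class 0 denotes the target category.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_local_model(X_target, neighbor_sets):
    """Gaussian over the local domain (target + predicted neighbors) plus
    a multinomial logistic regression that carves it up."""
    local = np.vstack([X_target] + list(neighbor_sets))
    y = np.concatenate([np.zeros(len(X_target))] +
                       [np.full(len(X), i + 1) for i, X in enumerate(neighbor_sets)])
    clf = LogisticRegression(max_iter=1000).fit(local, y)
    return local, clf

def is_instance(e, local, clf, X_background, prior=0.1):
    """e is an instance of the target iff it lies in the local domain and
    the multinomial classifier assigns it to the target class."""
    return (p_category(e, local, X_background, prior) > 0.5
            and clf.predict(e.reshape(1, -1))[0] == 0)
```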
Baselines: to put the performance of our model in perspective, we consider three baseline methods for category induction. First, we consider the Gaussian classifier described earlier as a representative example of how well each category can be modelled when considering only its given instances; this model is referred to as Gauss. Second, we consider a variant of the proposed model in which all siblings of a category are assumed to be conceptual neighbors; this model is referred to as Multi. Third, we consider a variant of our model in which the neighbors are selected based on similarity: we represent each BabelNet category by its vector in the NASARI space, and from the siblings of the target category C we select the k categories whose vector representations are most similar to that of C in terms of cosine similarity. This baseline is referred to as Similarity-k, with k the number of selected neighbors. We refer to our model as SECOND-WEA-k or SECOND-BERT-k (SEmantic categories with COnceptual NeighborhooD), depending on whether the word-embedding-averaging strategy or the BERT-based method is used.

Our main results for the category induction task are summarized in Table TABREF24, which shows results for different choices of the number of selected conceptual neighbors k, ranging from 1 to 5. As can be seen, our approach substantially outperforms all baselines, with Multi being the most competitive baseline. Interestingly, for the Similarity baseline, the higher the number of neighbors, the more its performance approaches that of Multi. The relatively strong performance of Multi shows that using the siblings of a category in the BabelNet taxonomy is useful in general; however, as our results show, better results can be obtained by focusing only on the predicted conceptual neighbors. It is interesting to see that even selecting a single conceptual neighbor is already sufficient to substantially outperform the Gaussian model, although the best results are obtained for k = 4. Comparing the WEA and BERT variants, it is notable that BERT is more successful at selecting the single best conceptual neighbor (reflected in an F1 score of 47.0, compared to 41.9); for $k \ge 2$, however, the results of WEA and BERT are largely comparable.

To illustrate how conceptual neighborhood can improve classification results, Fig. FIGREF25 shows the first two principal components of the embeddings of the instances of three BabelNet categories: Songbook, Brochure, and Guidebook. All three categories can be considered conceptual neighbors. Brochure and Guidebook are closely related categories, and we may expect borderline cases to exist between them; this can be seen clearly in the figure, where some instances are located almost exactly on the boundary between the two categories. Songbook, on the other hand, is slightly more separated in the space. Consider now the leftmost data point from the Songbook test set, which is essentially an outlier, being more similar to instances of Guidebook than to typical Songbook instances. Under a Gaussian model, this data point would not be recognized as a plausible instance; when incorporating the fact that Brochure and Guidebook are conceptual neighbors of Songbook, however, it is more likely to be classified correctly.

To illustrate the notion of conceptual neighborhood itself, Table TABREF27 displays some selected category pairs from the training set (i.e., the category pairs used to train the text classifier) which intuitively correspond to conceptual neighbors.
The left column contains selected examples of category pairs with a high s(A, B) score (at least 0.9); as these examples illustrate, we found that a high s(A, B) score is indeed often predictive of conceptual neighborhood. As the right column of the table illustrates, several category pairs with a lower s(A, B) score (around 0.5) still intuitively correspond to conceptual neighbors; when looking at category pairs with even lower scores, however, conceptual neighborhood becomes rare. Moreover, while there are several pairs with high scores which are not actually conceptual neighbors (e.g., the pair Actor - Makeup Artist), they tend to be categories which are still closely related, which means that the impact of incorrectly treating them as conceptual neighbors on the performance of our method is likely to be limited. On the other hand, among category pairs with a very low confidence score we find many unrelated pairs, which we can expect to be more harmful when treated as conceptual neighbors, as the combined Gaussian then covers a much larger part of the space; examples of such pairs include (Primary school, Financial institution), (Movie theatre, Housing estate), (Corporate title, Pharaoh), and (Fraternity, Headquarters).

Finally, Tables TABREF28 and TABREF29 show examples of the top conceptual neighbors selected for some categories from the test set. Table TABREF28 shows examples of BabelNet categories for which the F1 score of our SECOND-WEA-1 classifier was rather low; as can be seen, the conceptual neighbors chosen in these cases are not suitable. For instance, Bachelor's degree is a near-synonym of Undergraduate degree, and treating them as conceptual neighbors would clearly be detrimental. In contrast, the examples in Table TABREF29, which shows categories with a higher F1 score, include conceptual neighbors that are intuitively much more meaningful.

We have studied the role of conceptual neighborhood in modelling categories, focusing especially on categories with a relatively small number of instances, for which standard modelling approaches are challenging. To this end, we first introduced a method for predicting conceptual neighborhood from text, taking advantage of BabelNet to implement a distant supervision strategy. We then used the resulting classifier to identify the most likely conceptual neighbors of a given target category, and empirically showed that incorporating these conceptual neighbors leads to better performance in a category induction task. In terms of future work, it would be interesting to look at other types of lexical relations that can be predicted from text. One possible strategy would be to predict conceptual betweenness, where a category B is said to be between A and C if B has all the properties that A and C have in common BIBREF40 (e.g., we can think of wine as being conceptually between beer and rum); in particular, if B is predicted to be conceptually between A and C, we would also expect the region modelling B to lie between the regions modelling A and C.

Acknowledgments: Jose Camacho-Collados, Luis Espinosa-Anke, and Steven Schockaert were funded by ERC Starting Grant 637277; Zied Bouraoui was supported by CNRS PEPS INS2I MODERN.
The Transference Architecture for Automatic Post-Editing

The performance of state-of-the-art MT systems is not perfect; thus, human interventions are still required to correct machine-translated texts into publishable-quality translations BIBREF0. Automatic post-editing (APE) is a method that aims to automatically correct errors made by MT systems before actual human post-editing (PE) is performed BIBREF1, thereby reducing the translators' workload and increasing productivity BIBREF2. APE systems trained on human PE data serve as MT post-processing modules that improve overall performance; APE can therefore be viewed as a second-stage MT system, translating predictable error patterns in MT output into their corresponding corrections. APE training data minimally involves the MT output (mt) and the human post-edited (pe) version of mt, but additionally using the source (src) has been shown to provide further benefits BIBREF3, BIBREF4, BIBREF5. To provide awareness of errors in mt that originate from src, attention mechanisms BIBREF6 allow modelling of non-local dependencies in the input or output sequences and, importantly, also global dependencies between them (in our case src, mt, and pe). The transformer architecture BIBREF7 is built solely upon such attention mechanisms, completely replacing recurrence and convolutions: the transformer uses positional encoding to encode the input and output sequences, and computes both self- and cross-attention through so-called multi-head attention, which is facilitated by parallelization. Such multi-head attention allows the model to jointly attend to information at different positions from different representation subspaces, e.g., utilizing and combining information from src, mt, and pe.

In this paper we present a multi-source neural APE architecture called transference. Our model contains a source encoder, which encodes src information, and a second encoder, $enc_{src \rightarrow mt}$, which takes the encoded representation from the source encoder ($enc_{src}$), combines it with the self-attention-based encoding of mt ($enc_{mt}$), and prepares a representation for the decoder ($dec_{pe}$) via cross-attention. Our second encoder, $enc_{src \rightarrow mt}$, can also be viewed as a standard transformer decoding block, however without masking, acting as an encoder. We thus recombine the different blocks of the transformer architecture and repurpose them for the APE task in a simple yet effective way. The suggested architecture is inspired by the two-step approach professional translators tend to use during post-editing: first, the source segment is compared to the corresponding translation suggestion (similar to what our $enc_{src \rightarrow mt}$ is doing); then, corrections are applied to the MT output based on the errors encountered, in the same way that our $dec_{pe}$ uses the encoded representation of $enc_{src \rightarrow mt}$ to produce the final translation.

This paper makes the following contributions: (i) we propose a new multi-encoder model for APE that consists only of standard transformer encoding and decoding blocks; (ii) by using a mix of self- and cross-attention we provide a representation of both src and mt for the decoder, allowing it to better capture errors in mt originating from src, advancing the state of the art in APE in terms of BLEU and TER; and (iii) we analyze the effect of varying the number of encoder and decoder layers BIBREF8, indicating that the encoders contribute more than the decoders in transformer-based neural APE.
Recent advances in APE research are directed towards neural APE, first proposed by Pal et al. (2016) and Junczys-Dowmunt and Grundkiewicz (2016) for the single-source APE scenario, which does not consider src (i.e., mt $\rightarrow$ pe). In their work, Junczys-Dowmunt and Grundkiewicz (2016) also generated a large synthetic training dataset through back-translation, which we also use as additional training data. Exploiting source information as an additional input can help neural APE to disambiguate the corrections applied at each time step; this naturally leads to multi-source APE ($\lbrace src, mt \rbrace \rightarrow pe$). A multi-source neural APE system can be configured either with a single encoder that encodes the concatenation of src and mt BIBREF9, or with two separate encoders for src and mt, passing the concatenation of both encoders' final states to the decoder BIBREF10.

A few approaches to multi-source neural APE were proposed for the WMT 2017 APE shared task. Junczys-Dowmunt and Grundkiewicz (2017) combine both mt and src in a single neural architecture, exploring different combinations of attention mechanisms, including soft attention and hard monotonic attention. Chatterjee et al. (2017) build upon the two-encoder architecture of multi-source models BIBREF10 by concatenating the weighted contexts of the encoded src and mt. Varis and Bojar (2017) compare two multi-source models: one using a single encoder with the concatenation of src and mt sentences, and a second using two character-level encoders for mt and src along with a character-level decoder.

Recently, in the WMT 2018 APE shared task, several adaptations of the transformer architecture were presented for multi-source APE. Pal et al. (2018) propose an APE model that uses three self-attention-based encoders, introducing an additional joint encoder that attends over a combination of the two encoded sequences from mt and src. Tebbifakhr et al. (2018), the NMT-subtask winner of WMT 2018 (wmt18-nmt-best), employ sequence-level loss functions in order to avoid exposure bias during training and to be consistent with the automatic evaluation metrics. Shin and Lee (2018) propose that each encoder has its own self-attention and feed-forward layer to process each input separately; on the decoder side, they add two additional multi-head attention layers, one for src $\rightarrow$ mt and another for src $\rightarrow$ pe, and a further multi-head attention between the outputs of those attention layers helps the decoder capture common words in mt which should remain in pe. The APE PBSMT-subtask winner of WMT 2018 (wmt18-smt-best) BIBREF11 also presented a transformer-based multi-source APE model, which uses two encoders and stacks an additional cross-attention component for src $\rightarrow$ pe above the existing cross-attention for mt $\rightarrow$ pe. Comparing Shin and Lee's (2018) approach with the winning system, there are only two differences in the architecture: (i) the cross-attention order of src $\rightarrow$ mt and src $\rightarrow$ pe in the decoder, and (ii) wmt18-smt-best additionally shares parameters between the two encoders.
We propose a multi-source transformer model called transference ($\lbrace src, mt \rbrace_{tr} \rightarrow pe$; Figure FIGREF1), which takes advantage of both the encodings of src and mt and attends over a combination of both sequences while generating the post-edited sentence. The second encoder, $enc_{src \rightarrow mt}$, makes use of the first encoder, $enc_{src}$, and of a sub-encoder, $enc_{mt}$, to consider both src and mt. Here, the $enc_{src}$ encoder and the $dec_{pe}$ decoder are equivalent to those of the original transformer for neural MT, while $enc_{src \rightarrow mt}$ follows an architecture similar to the transformer's decoder, the difference being that no masked multi-head self-attention is used to process mt. One self-attended encoder for src, $\mathbf{s} = (s_1, s_2, \ldots, s_k)$, returns a sequence of continuous representations, $enc_{src}$, and a second self-attended sub-encoder for mt, $\mathbf{m} = (m_1, m_2, \ldots, m_l)$, returns another sequence of continuous representations, $enc_{mt}$. Self-attention at this point has the advantage of aggregating information from all of the words, including src and mt, and successively generates a new representation per word that is informed by the entire src and mt context. The internal $enc_{mt}$ representation performs cross-attention over $enc_{src}$ and prepares a final representation, $enc_{src \rightarrow mt}$, for the decoder $dec_{pe}$. The decoder then generates the pe output sequence, $\mathbf{p} = (p_1, p_2, \ldots, p_n)$, one word at a time from left to right, attending to previously generated words as well as to the final representations $enc_{src \rightarrow mt}$ generated by the encoder.

To summarize, our multi-source APE implementation extends Vaswani et al. (2017) by introducing an additional encoding block through which src and mt communicate with the decoder. Our proposed approach differs from the WMT 2018 PBSMT winner system in several ways: (i) we use the original transformer decoder without modification; (ii) one of our encoder blocks, $enc_{src \rightarrow mt}$, is identical to the transformer's decoder block but uses no masking in the self-attention layer, thus having one self-attention layer and an additional cross-attention for src $\rightarrow$ mt; and (iii) in the decoder layer, cross-attention is performed between the encoded representation from $enc_{src \rightarrow mt}$ and pe. Our approach also differs from the WMT 2018 NMT winner system: (i) wmt18-nmt-best concatenates the encoded representations of two encoders and passes the result as the key to the decoder's attention layer, and (ii) that system additionally employs sequence-level loss functions based on maximum likelihood estimation and minimum risk training in order to avoid exposure bias during training. The main intuition is that our $enc_{src \rightarrow mt}$ attends over src and mt and informs the pe side so as to better capture, process, and share information between src, mt, and pe, efficiently modelling error patterns and the corresponding corrections. Our model performs better than past approaches, as the experiment section will show.
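The architecture can be approximated with standard PyTorch transformer blocks, as in the sketch below. Layer counts, model dimensions, and the embedding/projection details are illustrative, not the paper's exact hyper-parameters; the key point is only the wiring, where a decoder-style block without causal masking serves as $enc_{src \rightarrow mt}$.

```python
import torch.nn as nn

class Transference(nn.Module):
    """Minimal sketch of {src, mt}_tr -> pe built from stock blocks."""

    def __init__(self, d_model=512, nhead=8, layers=6):
        super().__init__()
        self.enc_src = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), layers)
        # enc_{src->mt}: decoder-style blocks (self-attention over mt plus
        # cross-attention to enc_src), used WITHOUT a causal mask on mt.
        self.enc_src_mt = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True), layers)
        # dec_pe: a standard, causally masked transformer decoder.
        self.dec_pe = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True), layers)

    def forward(self, src_emb, mt_emb, pe_emb, pe_mask):
        h_src = self.enc_src(src_emb)
        h_src_mt = self.enc_src_mt(tgt=mt_emb, memory=h_src)  # no tgt_mask
        return self.dec_pe(tgt=pe_emb, memory=h_src_mt, tgt_mask=pe_mask)
```

During training, a causal `pe_mask` (e.g., from `nn.Transformer.generate_square_subsequent_mask`) would be supplied so that each pe position attends only to earlier positions.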
use the EnglishGerman WMT 2016 BIBREF4 2017 BIBREF5 and 2018 BIBREF15 APE task data All these released APE datasets consist of EnglishGerman triplets containing source English text src from the IT domain the corresponding German translations mt from a 1ststage MT system and the corresponding humanpostedited version pe The sizes of the datasets train dev test in terms of number of sentences are 12000 1000 2000 11000 0 2000 and 13442 1000 1023 for the 2016 PBSMT the 2017 PBSMT and the 2018 NMT data respectively One should note that for WMT 2018 we carried out experiments only for the NMT subtask and ignored the data for the PBSMT task Since the WMT APE datasets are small in size we use artificial training data BIBREF16 containing 45M sentences as additional resources 4M of which are weakly similar to the WMT 2016 training data while 500K are very similar according to TER statistics For experimenting on the NMT data we additionally use the synthetic eScape APE corpus BIBREF17 consisting of sim 7M triples For cleaning this noisy eScape dataset containing many unrelated language words eg Chinese we perform the following two steps i we use the cleaning process described in tebbifakhrEtAl2018WMT and ii we use the Moses BIBREF18 corpus cleaning scripts with minimum and maximum number of tokens set to 1 and 100 respectively After cleaning we perform punctuation normalization and then use the Moses tokenizer BIBREF18 to tokenize the eScape corpus with noescape option Finally we apply truecasing The cleaned version of the eScape corpus contains sim 65M triplets To build models for the PBSMT tasks from 2016 and 2017 we first train a generic APE model using all the training data 4M 500K 12K 11K described in Section SECREF2 Afterwards we finetune the trained model using the 500K artificial and 23K 12K 11K real PE training data We use the WMT 2016 development data dev2016 containing 1000 triplets to validate the models during training To test our system performance we use the WMT 2016 and 2017 test data test2016 test2017 as two subexperiments each containing 2000 triplets src mt and pe We compare the performance of our system with the four different baseline systems described above raw MT wmt18smtbest single and ensemble as well as Transformer src rightarrow pe Additionally we check the performance of our model on the WMT 2018 NMT APE task where unlike in previous tasks the 1ststage MT system is provided by NMT for this we explore two experimental setups i we use the PBSMT tasks APE model as a generic model which is then finetuned to a subset 12k of the NMT data lbrace srcmtrbrace nmttr rightarrow pegeneric smt One should note that it has been argued that the inclusion of SMTspecific data could be harmful when training NMT APE models BIBREF11 ii we train a completely new generic model on the cleaned eScape data sim 65M along with a subset 12K of the original training data released for the NMT task lbrace srcmtrbrace nmttr rightarrow pegeneric nmt The aforementioned 12K NMT data are the first 12K of the overall 134K NMT data The remaining 14K are used as validation data The released development set dev2018 is used as test data for our experiment alongside the test2018 for which we could only obtain results for a few models by the WMT 2019 task organizers We also explore an additional finetuning step of lbrace srcmtrbrace nmttr rightarrow pegeneric nmt towards the 12K NMT data called lbrace srcmtrbrace nmttr rightarrow peft and a model averaging the 8 best checkpoints of lbrace srcmtrbrace nmttr rightarrow 
peft which we call lbrace srcmtrbrace nmttr rightarrow peftavg Last we analyze the importance of our second encoder encsrc rightarrow mt compared to the source encoder encsrc and the decoder decpe by reducing and expanding the amount of layers in the encoders and the decoder Our standard setup which we use for finetuning ensembling etc is fixed to 666 for NsrcNmtNpe cf Figure FIGREF1 where 6 is the value that was proposed by VaswaniNIPS2017 for the base model We investigate what happens in terms of APE performance if we change this setting to 664 and 646 To handle outofvocabulary words and reduce the vocabulary size instead of considering words we consider subword units BIBREF19 by using bytepair encoding BPE In the preprocessing step instead of learning an explicit mapping between BPEs in the src mt and pe we define BPE tokens by jointly processing all triplets Thus src mt and pe derive a single BPE vocabulary Since mt and pe belong to the same language German and src is a close language English they naturally share a good fraction of BPE tokens which reduces the vocabulary size to 28k We follow a similar hyperparameter setup for all reported systems All encoders for lbrace srcmtrbrace tr rightarrow pe and the decoder are composed of a stack of Nsrc Nmt Npe 6 identical layers followed by layer normalization The learning rate is varied throughout the training process and increasing for the first training steps warmupsteps 8000 and afterwards decreasing as described in BIBREF7 All remaining hyperparameters are set analogously to those of the transformers base model except that we do not perform checkpoint averaging At training time the batch size is set to 25K tokens with a maximum sentence length of 256 subwords After each epoch the training data is shuffled During decoding we perform beam search with a beam size of 4 We use shared embeddings between mt and pe in all our experiments The results of our four models singlesource mathbf mt rightarrow pe multisource single encoder mathbf lbrace src perbrace rightarrow pe transference mathbf lbrace srcmtrbrace smttr rightarrow pe and ensemble in comparison to the four baselines raw SMT mathbf wmt18smtbest BIBREF11 single and ensemble as well as Transformer mathbf src rightarrow pe are presented in Table TABREF5 for test2016 and test2017 Table TABREF9 reports the results obtained by our transference model mathbf lbrace srcmtrbrace nmttr rightarrow pe on the WMT 2018 NMT data for dev2018 which we use as a test set and test2018 compared to the baselines raw NMT and mathbf wmt18nmtbest The raw SMT output in Table TABREF5 is a strong blackbox PBSMT system ie 1ststage MT We report its performance observed with respect to the ground truth pe ie the postedited version of mt The original PBSMT system scores over 62 BLEU points and below 25 TER on test2016 and test2017 Using a Transformer src rightarrow pe we test if APE is really useful or if potential gains are only achieved due to the good performance of the transformer architecture While we cannot do a full training of the transformer on the data that the raw MT engine was trained on due to the unavailability of the data we use our PE datasets in an equivalent experimental setup as for all other models The results of this system Exp 12 in Table TABREF5 show that the performance is actually lower across both test sets 552943 absolute points in BLEU and 521772 absolute in TER compared to the raw SMT baseline We report four results from mathbf wmt18smtbest i wmt18smtbest single which is the core 
multiencoder implementation without ensembling but with checkpoint averaging ii wmt18smtbest x4 which is an ensemble of four identical single models trained with different random initializations The results of wmt18smtbest single and wmt18smtbest x4 Exp 13 and 14 reported in Table TABREF5 are from junczysdowmuntgrundkiewicz2018WMT Since their training procedure slightly differs from ours we also trained the wmt18smtbest system using exactly our experimental setup in order to make a fair comparison This yields the baselines iii wmt18smtgenericbest single Exp 15 which is similar to wmt18smtbest single however the training parameters and data are kept in line with our transference general model Exp 23 and iv wmt18smtftbest single Exp 16 which is also trained maintaining the equivalent experimental setup compared to the fine tuned version of the transference general model Exp 33 Compared to both raw SMT and Transformer src rightarrow pe we see strong improvements for this stateoftheart model with BLEU scores of at least 6814 and TER scores of at most 2098 across the PBSMT testsets wmt18smtbest however performs better in its original setup Exp 13 and 14 compared to our experimental setup Exp 15 and 16 The two transformer architectures mathbf mt rightarrow pe and mathbf lbrace srcmtrbrace rightarrow pe use only a single encoder Table TABREF5 shows that mathbf mt rightarrow pe Exp 21 provides better performance 442 absolute BLEU on test2017 compared to the original SMT while mathbf lbrace srcmtrbrace rightarrow pe Exp 22 provides further improvements by additionally using the src information mathbf lbrace srcmtrbrace rightarrow pe improves over mathbf mt rightarrow pe by 162135 absolute BLEU points on test2016test2017 After finetuning both single encoder transformers Exp 31 and 32 in Table TABREF5 show further improvements 087 and 031 absolute BLEU points respectively for test2017 and a similar improvement for test2016 In contrast to the two models above our transference architecture uses multiple encoders To fairly compare to wmt18smtbest we retrain the wmt18smtbest system with our experimental setup cf Exp 15 and 16 in Table TABREF5 wmt18smtgenericbest single is a generic model trained on all the training data which is afterwards finetuned with 500K artificial and 23K real PE data wmt18smtftbest single It is to be noted that in terms of performance the data processing method described in junczysdowmuntgrundkiewicz2018WMT reported in Exp 13 is better than ours Exp 16 The finetuned version of the lbrace srcmtrbrace smttr rightarrow pe model Exp 33 in Table TABREF5 outperforms wmt18smtbest single Exp 13 in BLEU on both test sets however the TER score for test2016 increases One should note that wmt18smtbest single follows the transformer base model which is an average of five checkpoints while our Exp 33 is not When ensembling the 4 best checkpoints of our lbrace srcmtrbrace smttr rightarrow pe model Exp 41 the result beats the wmt18smtbest x4 system which is an ensemble of four different randomly initialized wmt18smtbest single systems Our mathbf ensemblesmt x3 combines two lbrace srcmtrbrace smttr rightarrow pe Exp 23 models initialized with different random weights with the ensemble of the finetuned transference model Exp33smtens4ckptExp 41 This ensemble provides the best results for all datasets providing roughly 1 BLEU point and 05 TER when comparing against wmt18smtbest x4 The results on the WMT 2018 NMT datasets dev2018 and test2018 are presented in Table TABREF9 The raw NMT system serves as 
one baseline against which we compare the performance of the different models We evaluate the system hypotheses with respect to the ground truth pe ie the postedited version of mt The baseline original NMT system scores 7676 BLEU points and 1508 TER on dev2018 and 7473 BLEU points and 1684 TER on test2018 For the WMT 2018 NMT data we first test our lbrace srcmtrbrace nmttr rightarrow pegenericsmt model which is the model from Exp 33 finetuned towards NMT data as described in Section SECREF3 Table TABREF9 shows that our PBSMT APE model finetuned towards NMT Exp 7 can even slightly improve over the already very strong NMT system by about 03 BLEU and 01 TER although these improvements are not statistically significant The overall results improve when we train our model on eScape and NMT data instead of using the PBSMT model as a basis Our proposed generic transference model Exp 8 lbrace srcmtrbrace nmttr rightarrow pegenericnmt shows statistically significant improvements in terms of BLEU and TER compared to the baseline even before finetuning and further improvements after finetuning Exp 9 lbrace srcmtrbrace nmttr rightarrow peft Finally after averaging the 8 best checkpoints our lbrace srcmtrbrace nmttr rightarrow peftavg model Exp 10 also shows consistent improvements in comparison to the baseline and other experimental setups Overall our finetuned model averaging the 8 best checkpoints achieves 102 absolute BLEU points and 069 absolute TER improvements over the baseline on test2018 Table TABREF9 also shows the performance of our model compared to the winner system of WMT 2018 wmt18nmtbest for the NMT task BIBREF14 wmt18nmtbest scores 1478 in TER and 7774 in BLEU on the dev2018 and 1646 in TER and 7553 in BLEU on the test2018 In comparison to wmt18nmtbest our model Exp 10 achieves better scores in TER on both the dev2018 and test2018 however in terms of BLEU our model scores slightly lower for dev2018 while some improvements are achieved on test2018 The number of layers NsrcNmtNpe in all encoders and the decoder for these results is fixed to 666 In Exp 51 and 52 in Table TABREF5 we see the results of changing this setting to 664 and 646 This can be compared to the results of Exp 23 since no finetuning or ensembling was performed for these three experiments Exp 51 shows that decreasing the number of layers on the decoder side does not hurt the performance In fact in the case of test2016 we got some improvement while for test2017 the scores got slightly worse In contrast reducing the encsrc rightarrow mt encoder blocks depth Exp 52 does indeed reduce the performance for all four scores showing the importance of this second encoder In Table TABREF11 we analyze and compare the best performing SMT ensemblesmt x3 and NMT lbrace srcmtrbrace nmttr rightarrow peftavg model outputs with the original MT outputs on the WMT 2017 SMT APE test set and on the WMT 2018 NMT development set Improvements are measured in terms of number of words which need to be i inserted In ii deleted De iii substituted Su and iv shifted Sh as per TER BIBREF13 in order to turn the MT outputs into reference translations Our model provides promising results by significantly reducing the required number of edits 24 overall for PBSMT task and 36 for NMT task across all edit operations thereby leading to reduced postediting effort and hence improving human postediting productivity When comparing PBSMT to NMT we see that stronger improvements are achieved for PBSMT probably because the raw SMT is worse than the raw NMT For PBSMT 
similar results are achieved for In De and Sh while less gains are obtained in terms of Su For NMT In is improved most followed by Su De and last Sh For shifts in NMT the APE system even creates further errors instead of reducing them which is an issue we aim to prevent in the future The proposed transference architecture lbrace srcmtrbrace smttr rightarrow pe Exp 23 shows slightly worse results than wmt18smtbest single Exp 13 before finetuning and roughly similar results after finetuning Exp 33 After ensembling however our transference model Exp 42 shows consistent improvements when comparing against the best baseline ensemble wmt18smtbest x4 Exp 14 Due to the unavailability of the sentencelevel scores of wmt18smtbest x4 we could not test if the improvements roughly 1 BLEU 05 TER are statistically significant Interestingly our approach of taking the model optimized for PBSMT and finetuning it to the NMT task Exp 7 does not hurt the performance as was reported in the previous literature BIBREF11 In contrast some small albeit statistically insignificant improvements over the raw NMT baseline were achieved When we train the transference architecture directly for the NMT task Exp 8 we get slightly better and statistically significant improvements compared to raw NMT Finetuning this NMT model further towards the actual NMT data Exp 9 as well as performing checkpoint averaging using the 8 best checkpoints improves the results even further The reasons for the effectiveness of our approach can be summarized as follows 1 Our encsrc rightarrow mt contains two attention mechanisms one is selfattention and another is crossattention The selfattention layer is not masked here therefore the crossattention layer in encsrc rightarrow mt is informed by both previous and future timesteps from the selfattended representation of mt encmt and additionally from encsrc As a result each state representation of encsrc rightarrow mt is learned from the context of src and mt This might produce better representations for decpe which can access the combined context In contrast in wmt18smtbest the decpe accesses representations from src and mt independently first using the representation from mt and then using that of src 2 The positionwise feedforward layer in our encsrc rightarrow mt of the transference model requires processing information from two attention modules while in the case of wmt18smtbest the positionwise feedforward layer in decpe needs to process information from three attention modules which may increase the learning difficulty of the feedforward layer 3 Since pe is a postedited version of mt sharing the same language mt and pe are quite similar compared to src Therefore attending over a finetuned representation from mt along with src which is what we have done in this work might be a reason for the better results than those achieved by attending over src directly Evaluating the influence of the depth of our encoders and decoder show that while the decoder depth appears to have limited importance reducing the encoder depth indeed hurts performance which is in line with domhan2018much In this paper we presented a multiencoder transformerbased APE model that repurposes the standard transformer blocks in a simple and effective way for the APE task first our transference architecture uses a transformer encoder block for src followed by a decoder block without masking on mt that effectively acts as a second encoder combining src rightarrow mt and feeds this representation into a final decoder block 
generating pe The proposed model outperforms the bestperforming system of WMT 2018 on the test2016 test2017 dev2018 and test2018 data and provides a new stateoftheart in APE Taking a departure from traditional transformerbased encoders which perform selfattention only our second encoder also performs crossattention to produce representations for the decoder based on both src and mt We also show that the encoder plays a more pivotal role than the decoder in transformerbased APE which could also be the case for transformerbased generation tasks in general Our architecture is generic and can be used for any multisource task eg multisource translation or summarization etc </s> |
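To make the transference block concrete, here is a minimal PyTorch sketch of one enc_{src→mt} layer as described above: unmasked self-attention over mt, followed by cross-attention into the encoded src sequence. This is our own illustrative reconstruction, not the authors' released code; class and variable names are hypothetical, and details such as dropout, padding masks, and layer stacking are omitted.

```python
import torch
import torch.nn as nn

class SrcMtBlock(nn.Module):
    """Sketch of one enc_{src->mt} layer: a transformer decoder-style block
    whose self-attention over mt is *not* masked, plus cross-attention
    over the encoded src sequence."""

    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, mt_states, src_states):
        # Unmasked self-attention: every mt position attends over the
        # whole mt sequence (both previous and future time steps).
        h, _ = self.self_attn(mt_states, mt_states, mt_states)
        mt_states = self.norm1(mt_states + h)
        # Cross-attention src -> mt: queries from mt, keys/values from enc_src.
        h, _ = self.cross_attn(mt_states, src_states, src_states)
        mt_states = self.norm2(mt_states + h)
        # Position-wise feed-forward layer with a residual connection.
        return self.norm3(mt_states + self.ff(mt_states))

# Toy shapes: the decoder dec_pe would then cross-attend over `fused`.
src = torch.randn(2, 9, 512)    # (batch, src_len, d_model) from enc_src
mt = torch.randn(2, 11, 512)    # (batch, mt_len, d_model) from enc_mt
fused = SrcMtBlock()(mt, src)   # enc_{src->mt}: shape (2, 11, 512)
```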
<s>[INST] Emerging Language Spaces Learned From Massively Multilingual Corpora [/INST] Our primary goal is to learn meaning representations of sentences and sentence fragments by looking at the distributional information that is available in parallel corpora of human translations The basic idea is to use translations into other languages as semantic mirrors of the original text assuming that they represent the same meaning but with different symbols wordings and linguistic structures For this we discard any meaning diversions that may happen in translation due to target audience adaptation or other processes that may influence the semantics of the translated texts We also assume that the material can be divided into meaningful and selfcontained units Bible verses in our case and focus on the global datadriven model that hopefully can cope with instances that violate our assumptions Our model is based on the intuition that the huge amount of variation and the crosslingual differences in language ambiguity make it possible to learn semantic distinctions purely from data The translations are thus used as a naturally occurring signal or crosslingual grounding that can be applied as a form of implicit supervision for the learning procedure mapping sentences to semantic representations that resolve languageinternal ambiguities With this approach we hope to take a step forward in one of the main goals in artificial intelligence namely the task of natural language understanding In this paper however we emphasise the use of such models in the discovery of linguistic properties and relationships between languages in particular Having that in mind the study may open new directions for collaborations between language technology and general linguistics But before coming back to this let us first look at related work and the general principles of distributional semantics with crosslingual grounding The use of translations for disambiguation has been explored in various studies Dyvik BIBREF0 proposes to use word translations to discover lexical semantic fields Carpuat et al BIBREF1 discuss the use of parallel corpora for word sense disambiguation van der Plas and Tiedemann BIBREF2 present work on the extraction of synonyms and Villada and Tiedemann BIBREF3 explore multilingual word alignments to identify idiomatic expressions The idea of crosslingual disambiguation is simple The following example illustrates the effect of disambiguation of idiomatic uses of put off through translation into German Using the general idea of the distributional hypothesis that you shall know a word by the company it keeps BIBREF4 we can now explore how crosslingual context can serve as the source of information that defines the semantics of given sentences As common in the field of distributional semantics we will apply semantic vector space models that describe the meaning of a word or text by mapping it onto a position a realvalued vector in some highdimensional Euclidean space Various models and algorithms have been proposed in the literature see eg BIBREF5 BIBREF6 and applied to a number of practical tasks Predictive models based on neural network classifiers and neural language models BIBREF7 BIBREF8 have superseded models that are purely based on cooccurrence counts see BIBREF9 for a comparison of common approaches Semantic vector spaces show even interesting algebraic properties that reflect semantic compositionality support vectorbased reasoning and can be mapped across languages BIBREF10 BIBREF11 Multilingual models 
have been proposed as well BIBREF12 BIBREF13 Neural language models are capable of integrating multiple languages BIBREF14 which makes it possible to discover relations between them based on the language space learned purely from the data Our framework will be neural machine translation NMT that applies an encoderdecoder architecture which runs sequentially through a string of input symbols for example words in a sentence to map the information to dense vector representations which will then be used to decode that information in another language Figure 1 illustrates the general principle with respect to the classical Vauquois triangle of machine translation BIBREF15 Translation models are precisely the kind of machinery that tries to transfer the meaning expressed in one language into another by analysing understanding the input and generating the output NMT tries to learn that mapping from data and thus learns to understand some source language in order to produce proper translations in a target language from given examples Our primary hypothesis is that we can increase the level of abstraction by including a larger diversity in the training data that pushes the model to improve compression of the growing variation and complexity of the task We will test this hypothesis by training multilingual models over hundreds or even almost a thousand languages to force the MT model to abstract over a large proportion of the Worlds linguistic diversity As a biproduct of multilingual models with shared parameters we will obtain a mapping of languages to a continuous vector space depicting relations between individual languages by means of geometric distances In this paper we present our initial findings when training such a model with over 900 languages from a collection of Bible translations and focus on the ability of the model to pick up genetic relations between languages when being forced to cover many languages in one single model In the following we will first present the basic architecture of the neural translation model together with the setup for training multilingual models After that we will discuss our experimental results before concluding the paper with some final comments and prospects for future work Neural machine translation typically applies an endtoend network architecture that includes one or several layers for encoding an input sentence into an internal dense realvalued vector representation and another layer for decoding that representation into the output of the target language Various variants of that model have been proposed in the recent literature BIBREF16 BIBREF17 with the same general idea of compressing a sentence into a representation that captures all necessary aspects of the input to enable proper translation in the decoder An important requirement is that the model needs to support variable lengths of input and output This is achieved using recurrent neural networks RNNs that naturally support sequences of arbitrary lengths A common architecture is illustrated in Figure 1 Discrete input symbols are mapped via numeric word representations embeddings E onto a hidden layer C of context vectors h in this case by a bidirectional RNN that reads the sequence in a forward and a reverse mode The encoding function is often modeled by special memory units and all model parameters are learned during training on example translations In the simplest case the final representation returned after running through the encoding layer is sent to the decoder which unrolls the 
information captured by that internal representation Note that the illustration in Figure 1 includes an important addition to the model a socalled attention mechanism Attention makes it possible to focus on particular regions from the encoded sentence when decoding BIBREF17 and with this the representation becomes much more flexible and dynamic and greatly improves the translation of sentences with variable lengths All parameters of the network are trained on large collections of human translations parallel corpora typically by some form of gradient descent iterative function optimisation that is backpropagated through the network The attractive property of such a model is the ability to learn representations that reflect semantic properties of the input language through the task of translation However one problem is that translation models can be lazy and avoid abstractions if the mapping between source and target language does not require any deep understanding This is where the idea of multilinguality comes into the picture If the learning algorithm is confronted with a large linguistic variety then it has to generalize and to forget about languagepairspecific shortcuts Covering substantial amounts of the worlds linguistic diversity as we propose pushes the limits of the approach and strong abstractions in C can be expected Figure 2 illustrates the intuition behind that idea Various multilingual extensions of NMT have already been proposed in the literature The authors of BIBREF18 BIBREF19 apply multitask learning to train models for multiple languages Zoph and Knight BIBREF20 propose a multisource model and BIBREF21 introduces a characterlevel encoder that is shared across several source languages In our setup we will follow the main idea proposed by Johnson et al BIBREF22 The authors of that paper suggest a simple addition by means of a language flag on the source language side see Figure 2 to indicate the target language that needs to be produced by the decoder This flag will be mapped on a dense vector representation and can be used to trigger the generation of the selected language The authors of the paper argue that the model enables transfer learning and supports the translation between languages that are not explicitly available in training This ability gives a hint of some kind of vectorbased interlingua which is precisely what we are looking for However the original paper only looks at a small number of languages and we will scale it up to a larger variation using significantly more languages to train on More details will be given in the following section Our question is whether we can use a standard NMT model with a much larger coverage of the linguistic diversity of the World in order to maximise the variation signalling semantic distinctions that can be picked up by the learning procedures Figure 3 illustrates our setup based on a model trained on over 900 languages from the multilingual Bible corpus BIBREF23 We trained the model in various batches and observed the development of the model in terms of translation quality on some small heldout data The heldout data refers to an unseen language pair SwedishPortuguese in our case in both directions We selected those languages in order to see the capabilities of the system to translate between rather distant languages for which a reasonable number of closely related languages are in the data collection to improve knowledge transfer The results demonstrate so far that the network indeed picks up the information about the 
language to be produced The decoder successfully switches to the selected language and produces relatively fluent Biblestyle text The adequacy of the translation however is rather limited and this is most probably due to the restricted capacity of the network with such a load of information to be covered Nevertheless it is exciting to see that such a diverse material can be used in one single model and that it learns to share parameters across all languages One of the most interesting effects that we can observe is the emerging language space that relates to the language flags in the data In Figure 4 we plot the language space using tSNE BIBREF24 for projecting to two dimensions coloured by language family for the ten language families groups with most members in our data set We can see that languages roughly cluster according to the family they belong to Note that this is purely learned from the data based on the objective to translate between all of them with a single model The training procedure learns to map closely related languages near to each other in order to increase knowledge transfer between them This development is very encouraging and demonstrates the ability of the neural network model to optimise parameter sharing to make most out of the models capacity An interesting question coming out of this study is whether such multilingual translation models can be used to learn linguistic properties of the languages involved Making it possible to measure the distance between individual languages in the emerging structures could be useful in datadriven language typology and other crosslinguistic studies The results so far do not reveal a lot of linguistically interesting relations besides the projection of languages onto a global continuous space with realvalues distances between them Nevertheless quantifying the distance is potentially valuable and provides a more finegrained relation than discrete relations coming from traditional family trees It is however still an open question what kind of properties are represented by the language embeddings and further studies are necessary to see whether specific linguistic features can be identified and isolated from the distributed representations There is a growing interest in interpretability of emerging structures and related work already demonstrates the ability of predicting typological features with similar language representations BIBREF25 Massively parallel data sets make it now possible to study specific typological structures with computational models for example tense and aspect as in BIBREF26 and we intend to follow up our initial investigations of NMTbased representations in future research along those lines We also plan to consider other domains than the one of religious texts but it is difficult to obtain the same coverage of the linguistic space with different material Unbalanced mixtures will be an option but difficult to train Resources like the Universal Declarations of Human Rights are an option but unfortunately very sparse Another direction is to explore the interlingual variations and language developments using for example the alternative translations that exist for some languages in the Bible corpus However even here the data is rather sparse and it remains to be seen how reliable any emerging pattern will be Crucial for the success will be a strong collaboration with scholars from the humanities which shows the important role of digital humanities as a field In this paper we present our experiments with highly 
multilingual translation models We trained neural MT models on Bible translations of over 900 languages in order to see whether the system is capable of sharing parameters across a large diverse sample of the Worlds languages Our motivation is to learn languageindependent meaning representations using translations as implicit semantic supervision and crosslingual grounding Our pilot study demonstrates that such a model can pick up the relationship between languages purely from the data and the translation objective We hypothesise that such a datadriven setup can be interesting for crosslinguistic studies and language typology In the future we would like to investigate the emerging language space in more detail also in connection with alternative network architectures and training procedures We believe that empirical methods like this one based on automatic representation learning will have significant impact on studies in linguistics providing an objective way of investigating properties and structures of human languages emerging from data and distributional patterns We would like to thank the anonymous reviewers for their valuable comments and suggestions as well as the Academy of Finland for the support of the research presented in the paper with project 314062 from the ICT 2023 call on Computation Machine Learning and Artificial Intelligence </s> |
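As a rough illustration of the two mechanisms discussed above, the following Python sketch shows (i) the target-language flag of Johnson et al. prepended to a source sentence (the exact flag format used here is our assumption, not necessarily the one used in the experiments) and (ii) a t-SNE projection of the learned language-flag embeddings into two dimensions, with random data standing in for a trained model's embedding matrix.

```python
import numpy as np
from sklearn.manifold import TSNE

def add_language_flag(src_tokens, tgt_lang):
    """Prepend a target-language flag to the source sentence, in the
    spirit of Johnson et al.'s tagging scheme (flag format is ours)."""
    return [f"<2{tgt_lang}>"] + src_tokens

add_language_flag("in the beginning".split(), "por")
# -> ['<2por>', 'in', 'the', 'beginning']

# After training, each flag embedding is a point in the language space;
# t-SNE projects the (n_languages x dim) matrix to 2-D for plots like
# Figure 4. Random data stands in for the real embeddings here.
flag_embeddings = np.random.default_rng(0).normal(size=(900, 512))
coords = TSNE(n_components=2, perplexity=30, init="pca",
              random_state=0).fit_transform(flag_embeddings)
# coords[i] is the 2-D position of language i, to be coloured by family.
```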
<s>[INST] An Annotation Scheme of A Large-scale Multi-party Dialogues Dataset for Discourse Parsing and Machine Comprehension [/INST] There are more and more NLP scholars focusing on the research of multiparty dialogues such as multiparty dialogues discourse parsing and multiparty meeting summarization BIBREF0 BIBREF1 BIBREF2 BIBREF3 BIBREF4 BIBREF5 However the scale of the STAC dataset has limited the research of discourse parsing for multiparty dialogues On the other hand as we know there is no literature working on machine reading comprehension for multiparty dialogues Considering the connection between the relevance between machine reading comprehension and discourse parsing we annotate the dataset for two tasks for multiparty dialogues understanding Our dataset derives from the large scale multiparty dialogues dataset the Ubuntu Chat Corpus BIBREF6 For each dialogue in the corpus we annotate the discourse structure of the dialogue and propose three questions and find the answer span in the input dialogues To improve the difficulty of the task we annotate frac16 to frac13 unanswerable questions and their plausible answers from dialogues This is a real example from the Ubuntu dataset Example 1 1 mjg59 Someone should suggest to Mark that the best way to get people to love you is to hire people to work on reverseengineering closed drivers 2 jdub heh 3 daniels rightarrow mjg59 heh 4 daniels HELLO 5 daniels rightarrow mjg59 your job is to entertain me so I dont fall asleep at 2pm and totally destroy my migration to AEST 6 bdale rightarrow daniels see you next week 7 daniels rightarrow bdale oh auug right rock 8 daniels rightarrow bdale just drop me an email or call 61 403 505 896 9 bdale rightarrow daniels I arrive Tuesday morning your time depart Fri morning will be staying at the Duxton There are mainly two contributions to our corpus A first large scale multipart dialogues dataset for discourse parsing It is a challenging task to parse the discourse structure of multiparty dialogues Enough training data will be essential to develop more powerful models We firstly propose the task of machine reading comprehension for multiparty dialogues Different from existing machine comprehension tasks multiparty dialogues could be more difficult which needs a graphbased model for representing the dialogues and better understanding the discourse structure in the dialogue In this paper I will give a detailed description of our large scale dataset In section 2 I will introduce Ubuntu corpus In Section 3 and Section 4 I will introduce the annotation for discourse parsing and machine reading comprehension respectively In Section 5 I will briefly list some related literature Our dataset derives from the large scale multiparty dialogues dataset the Ubuntu Chat Corpus BIBREF6 The Ubuntu dataset is a large scale multiparty dialogues corpus There are several reasons to choose the Ubuntu dataset as our raw data for annotation First Ubuntu dataset is a large multiparty dataset Recently BIBREF1 used Ubuntu as their dataset for learning dialogues graph representation After some preprocessing there are 38K sessions and 175M utterances In each session there are 310 utterances and 27 interlocutors Second it is easy to annotate the Ubuntu dataset The Ubuntu dataset already contains Responseto relations that are discourse relations between different speakers utterances For annotating discourse dependencies in dialogues we only need to annotate relations between the same speakers utterances and the specific sense of 
discourse relation Third there are many papers doing experiments on the Ubuntu dataset and the dataset has been widely recognized The discourse dependency structure of each multiparty dialogue can be regarded as a graph To learn better graph representation of multiparty dialogues we adopt the dialogues with 815 utterances and 37 speakers To simplify the task we filter the dialogues with long sentences more than 20 words Finally we obtain 52053 dialogues and 460358 utterances This section will explain how to annotate discourse structure in multiparty dialogues The task of discourse parsing for multiparty dialogues aims to detect discourse relations among utterances The discourse structure of a multiparty dialogue is a directed acyclic graph DAG In the process of annotation of discourse parsing for multiparty dialogues there are two parts edges annotation between utterances and specific sense type of discourse relations The discourse structure of Example 1 is shown in Figure 1 There are four speakers and nine utterances in the sample dialogue The left part shows the speakers and their utterances and the right part shows the discourse dependency relation arcs The discourse structure can be seen as a discourse dependency graph We adopt the same sense hierarchy with the STAC dataset which contains sixteen discourse relations The edge between two utterances represents that there is the discourse dependency relations between these two utterances The direction of the edge represents the direction of discourse dependency In this subsection what we need to do is to confirm whether two utterances have discourse relation Like PDTB BIBREF7 we call two utterances as Arg1 and Arg2 respectively The front utterance is Arg1 and the back utterance is Arg2 For example there is a multiparty dialogue with 9 utterances in Example 1 utterances 19 respectively The utterance 3 depends on utterance 1 we can draw an edge from utterance 1 to utterance 3 Otherwise if utterance 1 depends on utterance 2 we can draw an edge from utterance 2 to utterance 1 In most cases the direction of discourse relations in multiparty dialogues is from the front to the back The biggest difference between discourse parsing for wellwritten document and dialogues is that discourse relations can exist on two nonadjacent utterances in dialogues When we annotate dialogues we should read dialogues from begin to the end For each utterance we should find its one parent node at least from all its previous utterances We assume that the discourse structure is a connected graph and no utterance is isolated When we find the discourse relation between two utterances we need continue to confirm the specific relation sense We adopt the same senses hierarchy with the STAC dataset There are sixteen discourse relations in the STAC All relations are listed as follows Comment Clarificationquestion Elaboration Acknowledgement Continuation Explanation Conditional Questionanswerpair Alternation QElab Result Background Narration Correction Parallel Contrast The task of reading comprehension for multiparty dialogues aims to be beneficial for understanding multiparty dialogues Different from existing machine reading comprehension tasks the input of this task is a multiparty dialogue and we should to answer some questions given the dialogue We propose three questions for eache dialogue and annotate the span of answers in the input dialogue As we know our dataset is the first corpus for multiparty dialogues reading comprehension We construct following questions and 
answers for the dialogue in Example 1 Q1 When does Bdale leave A1 Fri morning Q2 How to get people love Mark in Mjg59s opinion A2 Hire people to work on reverseengineering closed drivers On the other hand to improve the difficulty of the task we propose frac16 to frac13 unanswerable questions in our dataset We annotate unanswerable questions and their plausible answers PA Each plausible answer comes from the input dialogue but is not the answer for the plausible question Q1 Whis is the email of daniels PA 61 403 505 896 In this section I will introduce several existing multiparty dialogues datasets and explain why we need to annotated a new dataset There is an only corpus of discourse parsing on multiparty chat dialogues STAC BIBREF8 The corpus derives from online game The Settlers of Catan The game Settlers is a multiparty winlose game As mentioned above an example in STAC is shown in Figure 1 More details for STAC corpus are described in BIBREF8 The overview of the STAC is shown in Table 1 From Table 1 we can know that there are about more 10K EDUs and relations and most of EDUs are weakly connected Each EDU can be regarded as a message or sentence in the dialogues There are sixteen types of discourse dependency relations in STAC as shown in Section 32 Machine reading comprehension is a popular task which aims to help the machine better understand natural language There are several types of datasets for machine comprehension including extractive datasets BIBREF9 BIBREF10 answer sentence selection datasets BIBREF11 BIBREF12 and multiple choice datasets BIBREF13 BIBREF14 I will briefly introduce two datasets QuAC BIBREF15and CoQA BIBREF16 QuAC Question Answering in Context is a twoparty dialogues dataset for machine reading comprehension BIBREF15 The dataset for Question Answering in Context that contains 14K informationseeking QA dialogs 100K questions in total The dialogs involve two crowd workers 1 a student who poses a sequence of freeform questions to learn as much as possible about a hidden Wikipedia text and 2 a teacher who answers the questions by providing short excerpts from the text CoQA is a large dataset for building conversation question answering systems BIBREF16 We propose the scheme for annotating large scale multiparty chat dialogues for discourse parsing and machine comprehension The main goal of this project is to be beneficial for understanding multiparty dialogues Our corpus are based on the Ubuntu Chat Corpus For each multiparty dialogue we annotate discourse structure and questionanswer pairs for the dialogue As we know this would be the first largescale corpus for multiparty dialogues discourse parsing and we firstly propose the task for multiparty dialogues machine reading comprehension </s> |
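To make the annotation scheme concrete, the following sketch shows what one annotated record could look like, combining the discourse dependency DAG with answerable and unanswerable QA pairs. The field names and layout are our own illustration, not the released file format.

```python
# Hypothetical record structure for one annotated dialogue.
annotated_dialogue = {
    "edus": [
        {"speaker": "bdale", "text": "see you next week"},                # 0
        {"speaker": "daniels", "text": "oh auug right rock"},             # 1
        {"speaker": "daniels", "text": "just drop me an email or call"},  # 2
    ],
    # Directed discourse dependencies (parent, child, sense); together
    # the edges form a DAG over the utterances.
    "relations": [(0, 1, "Acknowledgement"), (1, 2, "Continuation")],
    "qas": [
        {"question": "When does Bdale leave?",
         "answer": "Fri morning", "answerable": True},
        # Unanswerable question paired with a plausible (wrong) span.
        {"question": "What is the email of daniels?",
         "answer": None, "plausible_answer": "61 403 505 896",
         "answerable": False},
    ],
}
```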
<s>[INST] Unsupervised Learning of Style-sensitive Word Vectors [/INST] Analyzing and generating natural language texts requires the capturing of two important aspects of language what is said and how it is said In the literature much more attention has been paid to studies on what is said However recently capturing how it is said such as stylistic variations has also proven to be useful for natural language processing tasks such as classification analysis and generation BIBREF1 BIBREF2 BIBREF3 This paper studies the stylistic variations of words in the context of the representation learning of words The lack of subjective or objective definitions is a major difficulty in studying style BIBREF4 Previous attempts have been made to define a selected aspect of the notion of style eg politeness BIBREF5 BIBREF6 BIBREF7 BIBREF8 BIBREF9 BIBREF10 however it is not straightforward to create strict guidelines for identifying the stylistic profile of a given text The systematic evaluations of stylesensitive word representations and the learning of stylesensitive word representations in a supervised manner are hampered by this In addition there is another trend of research forward controlling stylesensitive utterance generation without defining the style dimensions BIBREF11 BIBREF12 however this line of research considers style to be something associated with a given specific character ie a persona and does not aim to capture the stylistic variation space The contributions of this paper are threefold 1 We propose a novel architecture that acquires stylesensitive word vectors Figure 1 in an unsupervised manner 2 We construct a novel dataset for style which consists of pairs of stylesensitive words with each pair scored according to its stylistic similarity 3 We demonstrate that our word vectors capture the stylistic similarity between two words successfully In addition our training script and dataset are available on httpsjqk09agithubiostylesensitivewordvectors The key idea is to extend the continuous bag of words CBOW BIBREF0 by distinguishing nearby contexts and wider contexts under the assumption that a style persists throughout every single utterance in a dialog We elaborate on it in this section Let wt denote the target word token in the corpora and mathcal Ut lbrace w1 dots wt1 wt wt1dots wvert mathcal Ut vert rbrace denote the utterance word sequence including wt Here wtd or wtd in mathcal Ut is a context word of wt eg wt1 is the context word next to wt where din mathbb N0 is the distance between the context words and the target word wt For each word token w bold face mboxvw and tildemboxvw denote the vector of w and the vector predicting the word w Let mathcal V denote the vocabulary First we give an overview of CBOW which is our baseline model CBOW predicts the target word wt given nearby context words in a window with width delta leftlbrace wtpm d in mathcal Ut mid 1le d le delta rightrbrace Eq 4 The set contains in total at most 2delta words including delta words to the left and delta words to the right of a target word Specifically we train the word vectors tildemboxvwt and mboxvc cin by maximizing the following prediction probability Pwt propto exp biggl tildemboxvwt cdot frac1vert vert rsum cin mboxvcbiggr |
(Eq. 5) The CBOW captures both semantic and syntactic word similarity through training with nearby context words. We refer to this form of CBOW as CBOW-near-ctx. Note that in the implementation of BIBREF13, the window width $\delta$ is sampled from a uniform distribution; in this work, however, we fixed $\delta$ for simplicity. Hereafter, throughout our experiments, we turn off the random resizing of $\delta$. CBOW is designed to learn the semantic and syntactic aspects of words from their nearby context BIBREF13. However, an interesting problem is determining the location where the stylistic aspects of words can be captured. To address this problem, we start with the assumption that a style persists throughout each single utterance in a dialog, that is, the stylistic profile of a word in an utterance must be consistent with the other words in the same utterance. Based on this assumption, we propose extending CBOW to use all the words in an utterance as context,

$$\mathcal{C}^{\mathrm{all}}_t = \lbrace w_{t\pm d} \in \mathcal{U}_t \mid 1 \le d \rbrace, \qquad \text{(Eq. 7)}$$

instead of only the nearby words. Namely, we expand the context window from a fixed width to the entire utterance. This training strategy is expected to lead to learned word vectors that are more sensitive to style than to other aspects. We refer to this version as CBOW-all-ctx.

To learn the stylistic aspect more exclusively, we further extended the learning strategy. First, remember that using nearby context is effective for learning word vectors that capture semantic and syntactic similarities. However, this means that using the nearby context can lead the word vectors to capture some aspects other than style. Therefore, as the first extension, we propose excluding the nearby context from the full context; in other words, we use the distant context words only,

$$\mathcal{C}^{\mathrm{dist}}_t = \mathcal{C}^{\mathrm{all}}_t \setminus \mathcal{C}^{\mathrm{near}}_t = \lbrace w_{t\pm d} \in \mathcal{U}_t \mid \delta < d \rbrace. \qquad \text{(Eq. 9)}$$

We expect that training with this type of context will lead to word vectors containing the style-sensitive information only. We refer to this method as CBOW-dist-ctx.

As the second extension, to distill off aspects other than style, we use both the nearby and the full contexts, $\mathcal{C}^{\mathrm{near}}_t$ and $\mathcal{C}^{\mathrm{all}}_t$. As Figure 2 shows, both the vector $\mathbf{v}_w$ and the vector $\tilde{\mathbf{v}}_w$ of each word $w \in \mathcal{V}$ are divided into two parts,

$$\mathbf{v}_w = \mathbf{x}_w \oplus \mathbf{y}_w, \qquad \tilde{\mathbf{v}}_w = \tilde{\mathbf{x}}_w \oplus \tilde{\mathbf{y}}_w, \qquad \text{(Eq. 10)}$$

where $\oplus$ denotes vector concatenation. Vectors $\mathbf{x}_w$ and $\tilde{\mathbf{x}}_w$ are the style-sensitive parts of $\mathbf{v}_w$ and $\tilde{\mathbf{v}}_w$, respectively; vectors $\mathbf{y}_w$ and $\tilde{\mathbf{y}}_w$ are the syntactic/semantic-sensitive parts. For training, when the context words are near the target word, we update both the style-sensitive vectors ($\mathbf{x}$, $\tilde{\mathbf{x}}$) and the syntactic/semantic-sensitive vectors ($\mathbf{y}$, $\tilde{\mathbf{y}}$), i.e., the full vectors ($\mathbf{v}$, $\tilde{\mathbf{v}}$). Conversely, when the context words are far from the target word, we only update the style-sensitive vectors ($\mathbf{x}$, $\tilde{\mathbf{x}}$). Formally, the two prediction probabilities are calculated as

$$P_1(w_t) \propto \exp\biggl( \tilde{\mathbf{v}}_{w_t} \cdot \frac{1}{\vert \mathcal{C}^{\mathrm{near}}_t \vert} \sum_{c \in \mathcal{C}^{\mathrm{near}}_t} \mathbf{v}_c \biggr), \qquad P_2(w_t) \propto \exp\biggl( \tilde{\mathbf{x}}_{w_t} \cdot \frac{1}{\vert \mathcal{C}^{\mathrm{all}}_t \vert} \sum_{c \in \mathcal{C}^{\mathrm{all}}_t} \mathbf{x}_c \biggr).$$
(Eq. 11) At the time of learning, the two prediction probabilities (loss functions) are alternately computed, and the word vectors are updated. We refer to this method, which uses the twofold contexts separately, as CBOW-sep-ctx.

We investigated which word vectors capture the stylistic, syntactic, and semantic similarities. We collected Japanese fictional stories from the Web to construct the dataset. The dataset contains approximately 30M utterances of fictional characters. We separated the data into a 99:1 split for training and testing. In Japanese, the function words at the end of a sentence often exhibit style (e.g., desuwa, desuze); therefore, we used an existing lexicon of multiword functional expressions BIBREF14. Overall, the vocabulary size $\vert \mathcal{V} \vert$ was 100K. We chose the dimensions of both the style-sensitive and the syntactic/semantic-sensitive vectors to be 300, and the dimensions of the baseline CBOWs were also 300. The learning rate was adjusted individually for each part in $\lbrace \mathbf{x}_w, \mathbf{y}_w, \tilde{\mathbf{x}}_w, \tilde{\mathbf{y}}_w \rbrace$ such that the product of the learning rate and the expected number of updates was a fixed constant. We ran the optimizer with its default settings from the implementation of BIBREF0. The training stopped after 10 epochs. We fixed the nearby window width to $\delta = 5$.

To verify that our models capture stylistic similarity, we evaluated our style-sensitive vector $\mathbf{x}_{w_t}$ by comparing it to other word vectors on a novel artificial task: matching human stylistic similarity judgments. For this evaluation, we constructed a novel dataset with human judgments on the stylistic similarity between word pairs by performing the following two steps. First, we collected only style-sensitive words from the test corpus, because some words are strongly associated with stylistic aspects BIBREF15 BIBREF16, and therefore annotating random words for stylistic similarity is inefficient. We asked crowdsourced workers to select style-sensitive words in utterances. Specifically, for the crowdsourced task of picking style-sensitive words, we provided workers with a word-segmented utterance and asked them to pick words that they expected to be altered within different situational contexts (e.g., characters' moods, purposes, and the background cultures of the speaker and listener). Then, we randomly sampled 1,000 word pairs from the selected words and asked 15 workers to rate each of the pairs on a five-point scale from −2 ("the style of the pair is different") to +2 ("the style of the pair is similar"), inspired by the syntactic/semantic similarity dataset BIBREF17 BIBREF18. Finally, we kept only word pairs featuring clear worker agreement, in which more than 10 annotators rated the pair with the same sign; these consisted of random pairs of highly agreeing style-sensitive words. Consequently, we obtained 399 word pairs with similarity scores. To our knowledge, this is the first study that created an evaluation dataset to measure lexical stylistic similarity. In the task of selecting style-sensitive words, the pairwise inter-annotator agreement was moderate (Cohen's $\kappa = 0.51$). In the rating task, the pairwise inter-annotator agreement for the two classes $\lbrace -2, -1 \rbrace$ or $\lbrace +1, +2 \rbrace$ was fair (Cohen's $\kappa = 0.23$). These statistics suggest that, at least in Japanese, native speakers share a sense of the style-sensitivity of words and of the stylistic similarity between style-sensitive words. We used this evaluation dataset to compute the Spearman rank correlation $\rho_{\mathrm{style}}$ between the cosine similarity scores of the learned word vectors, $\cos(\mathbf{v}_w, \mathbf{v}_{w^{\prime}})$,
and the human judgements. Table 1 shows the results on its left side. First, our proposed model CBOW-all-ctx outperformed the baseline CBOW-near-ctx. Furthermore, the $\mathbf{x}$ of CBOW-dist-ctx and CBOW-sep-ctx demonstrated better correlations with the stylistic similarity judgments ($\rho_{\mathrm{style}} = 56.1$ and $51.3$, respectively). Even though the $\mathbf{x}$ of CBOW-sep-ctx was trained with the same context window as CBOW-all-ctx, its style-sensitivity was boosted by introducing joint training with the near context. CBOW-dist-ctx, which uses only the distant context, slightly outperforms CBOW-sep-ctx. These results indicate the effectiveness of training with a wider context window.

We further investigated the properties of each model using the following criteria: (1) the model's ability to capture the syntactic aspect was assessed through a task predicting part of speech (POS), and (2) the model's ability to capture the semantic aspect was assessed through a task calculating the correlation with human judgments of semantic similarity. First, we tested the ability of each model to capture syntactic similarity by checking whether the POS of each word was the same as the POS of its neighboring words in the vector space. Specifically, we calculated SyntaxAcc@$N$, defined as follows:

$$\mathrm{SyntaxAcc@}N = \frac{1}{\vert \mathcal{V} \vert N} \sum_{w \in \mathcal{V}} \sum_{w^{\prime} \in \mathcal{N}(w)} \mathbb{I}[\mathrm{POS}(w) = \mathrm{POS}(w^{\prime})], \qquad \text{(Eq. 24)}$$
where $\mathbb{I}[\text{condition}] = 1$ if the condition is true and $\mathbb{I}[\text{condition}] = 0$ otherwise, the function $\mathrm{POS}(w)$ returns the actual POS tag of the word $w$, and $\mathcal{N}(w)$ denotes the set of the $N$ most similar words $\lbrace w^{\prime} \rbrace$ to $w$ w.r.t. $\cos(\mathbf{v}_w, \mathbf{v}_{w^{\prime}})$ in each vector space. Table 1 shows SyntaxAcc@$N$ with $N$ = 5 and 10. For both $N$, the $\mathbf{y}$ (the syntactic/semantic part) of CBOW-near-ctx, CBOW-all-ctx, and CBOW-sep-ctx achieved similarly good accuracy. Interestingly, even though the $\mathbf{x}$ of CBOW-sep-ctx used the same context as that of CBOW-all-ctx, the syntactic sensitivity of $\mathbf{x}$ was suppressed. We speculate that the syntactic sensitivity was distilled off by the other part of the CBOW-sep-ctx vector, i.e., $\mathbf{y}$, which was learned using only the near context and thus captured more syntactic information. In the next section, we analyze CBOW-sep-ctx with respect to the different characteristics of $\mathbf{x}$ and $\mathbf{y}$.

To test the models' ability to capture semantic similarity, we also measured correlations with the Japanese Word Similarity Dataset (JWSD) BIBREF19, which consists of 4,000 Japanese word pairs annotated with semantic similarity scores by human workers. For each model, we calculate and show in Table 1 the Spearman rank correlation score $\rho_{\mathrm{sem}}$ between the cosine similarity score $\cos(\mathbf{v}_w, \mathbf{v}_{w^{\prime}})$ and the human judgements on JWSD. CBOW-dist-ctx has the lowest score ($\rho_{\mathrm{sem}} = 15.9$); however, surprisingly, the stylistic vector $\mathbf{x}_{w_t}$ has the highest score ($\rho_{\mathrm{sem}} = 28.9$), while both vectors have a high $\rho_{\mathrm{style}}$. This result indicates that the proposed stylistic vector $\mathbf{x}_{w_t}$ captures not only the stylistic similarity but also the semantic similarity, contrary to our expectations; ideally, we want the stylistic vector to capture only the stylistic similarity. We speculate that this is because not only the style but also the topic is often consistent within single utterances. For example, "Santa Claus" and "reindeer" are topically related words, and such words tend to appear in a single utterance. Therefore, stylistic vectors $\lbrace \mathbf{x}_w \rbrace$ that use all the context words in an utterance also capture topic relatedness. In addition, JWSD contains topic-related word pairs and synonym pairs; therefore, word vectors that capture topic similarity obtain a higher $\rho_{\mathrm{sem}}$. We will discuss this point in the next section.

Finally, to further understand what types of features our CBOW-sep-ctx model acquired, we show some words with their four most similar words in Table 2. Here, for English readers, we also report a result for English, which additionally gives an example of the performance of our model on another language. The left side of Table 2 (for the stylistic vector $\mathbf{x}$) shows the results. We found that the Japanese word meaning "I" (classical) is similar to the word meaning "be" (classical) or words containing it (the second row of Table 2). The result looks reasonable, because such words are typically used by Japanese samurai or ninja. We can see that the vectors captured the similarity of these words, which are stylistically consistent across syntactic and semantic varieties. Conversely, the right side of the table (for the syntactic/semantic vector $\mathbf{y}$) shows that the word meaning "I" (classical) is similar to personal pronouns (e.g., the word meaning "I" (male, childish)). We further confirmed that the top 15 similar words are also personal pronouns, even though they are not shown due to space limitations. These results indicate that the proposed CBOW-sep-ctx model jointly learns two different types of lexical similarities, i.e., the
However, our stylistic vector also captured topic similarity, such as between the words for "Santa Claus" and "reindeer" (the fourth row of Table 2). Therefore, there is still room for improvement in capturing stylistic similarity.

This paper presented an unsupervised method for learning style-sensitive word vectors, which extends CBOW by distinguishing nearby contexts from wider contexts. We created a novel dataset for style, in which the stylistic similarity between word pairs was scored by humans. Our experiments demonstrated that our method leads word vectors to distinguish the stylistic aspect from other semantic or syntactic aspects. We also found that our training cannot avoid confusing certain styles and topics. A future direction will be to address this issue by introducing another kind of context, such as document- or dialog-level context windows, where topics are often consistent but styles are not.

This work was supported by JSPS KAKENHI Grant Number 15H01702. We thank our anonymous reviewers for their helpful comments and suggestions.
Bayesian Sparsification of Recurrent Neural Networks

Recurrent neural networks (RNNs) are among the most powerful models for natural language processing, speech recognition, question-answering systems, and other problems with sequential data BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6. For complex tasks such as machine translation BIBREF5 or speech recognition BIBREF3, modern RNN architectures incorporate a huge number of parameters. To use these models on portable devices with limited memory, for instance smartphones, model compression is desired. A high compression level may also lead to an acceleration of RNNs. In addition, compression regularizes RNNs and helps to avoid overfitting.

There are many RNN compression methods based on specific weight matrix representations BIBREF7, BIBREF8 or on sparsification via pruning BIBREF9. In this paper we focus on RNN compression via sparsification. Most of the methods in this group are heuristic and require time-consuming hyperparameter tuning. Recently, Molchanov et al. (dmolch) proposed a principled method based on variational dropout for the sparsification of fully connected and convolutional networks. They described a probabilistic model in which the parameters controlling sparsity are tuned automatically during neural network training. This model, called Sparse Variational Dropout (Sparse VD), leads to extremely sparse solutions without a significant quality drop. However, this technique had not previously been investigated for RNNs.

In this paper we apply Sparse VD to recurrent neural networks. To take into account the specifics of RNNs, we rely on insights from the paper by Gal & Ghahramani (gal), who explain the proper way to use binary dropout in RNNs from the Bayesian point of view. In the experiments we show that LSTMs with Sparse VD yield a high sparsity level with just a slight drop in quality: we achieved a 99.5% sparsity level on a sentiment analysis task and up to 87.6% in a character-level language modeling experiment.

Consider a neural network with weights $\omega$ modeling the dependency of the target variables $y = \{y_1, \dots, y_\ell\}$ on the corresponding input objects $X = \{x_1, \dots, x_\ell\}$. In a Bayesian neural network, the weights $\omega$ are treated as random variables. With the prior distribution $p(\omega)$, we search for the posterior distribution $p(\omega \mid X, y)$ that will help to find the expected target value during inference. In the case of neural networks, the true posterior is usually intractable, but it can be approximated by some parametric distribution $q_\lambda(\omega)$. The quality of this approximation is measured by the KL divergence $KL(q_\lambda(\omega) \,\|\, p(\omega \mid X, y))$. The optimal parameter $\lambda$ can be found by maximizing the variational lower bound w.r.t. $\lambda$:

$$\mathcal{L} = \sum_{i=1}^{\ell} \mathbb{E}_{q_\lambda(\omega)} \log p(y_i \mid x_i, \omega) - KL(q_\lambda(\omega) \,\|\, p(\omega)) \qquad \text{(Eq. 2)}$$

The expected log-likelihood term in (2) is usually approximated by Monte Carlo sampling. To make the MC estimate unbiased, the weights are parametrized by a deterministic function $\omega = g(\lambda, \xi)$, where $\xi$ is sampled from some non-parametric distribution (the reparameterization trick BIBREF10). The KL-divergence term in (2) acts as a regularizer and is usually computed or approximated analytically.

Dropout BIBREF11 is a standard technique for the regularization of neural networks. It implies that the inputs of each layer are multiplied by a randomly generated noise vector. The elements of this vector are usually sampled from a Bernoulli or Gaussian distribution, with the parameters tuned using cross-validation. Kingma et al. (kingma) interpreted Gaussian dropout from a Bayesian perspective, which allowed the dropout rate to be tuned automatically during model training. Later this model was extended to sparsify fully connected and convolutional neural networks, resulting in a model called Sparse Variational Dropout (Sparse VD) BIBREF0.

Consider one dense layer of a feed-forward neural network with an input of size $n$, an output of size $m$, and a weight matrix $W$. Following Kingma et al. (kingma), in Sparse VD the prior on the weights is a fully factorized log-uniform distribution, $p(|w_{ij}|) \propto \frac{1}{|w_{ij}|}$, and the posterior is searched for in the form of a fully factorized normal distribution:

$$q(w_{ij} \mid m_{ij}, \alpha_{ij}) = \mathcal{N}(m_{ij}, \alpha_{ij} m^2_{ij}) \qquad \text{(Eq. 4)}$$

Employing this form of the posterior is equivalent to putting multiplicative BIBREF12 or additive BIBREF0 normal noise on the weights in the following manner:

$$w_{ij} = m_{ij} \xi_{ij}, \quad \xi_{ij} \sim \mathcal{N}(1, \alpha_{ij}) \qquad \text{(Eq. 5)}$$

$$w_{ij} = m_{ij} + \epsilon_{ij}, \quad \epsilon_{ij} \sim \mathcal{N}(0, \sigma^2_{ij}), \quad \alpha_{ij} = \frac{\sigma^2_{ij}}{m^2_{ij}} \qquad \text{(Eq. 6)}$$

The representation (6) is called additive reparameterization BIBREF0. It reduces the variance of the gradients of $\mathcal{L}$ w.r.t. $m_{ij}$. Moreover, since a sum of normal distributions is a normal distribution with computable parameters, the noise may be applied to the preactivation (the input vector times the weight matrix $W$) instead of to $W$. This trick is called the local reparameterization trick BIBREF13, BIBREF12; it reduces the variance of the gradients even further and makes training more efficient.

In Sparse VD, optimization of the variational lower bound (2) is performed w.r.t. $\{M, \log \sigma\}$. The KL divergence factorizes over the weights, and its terms depend only on $\alpha_{ij}$ because of the specific choice of the prior BIBREF12:

$$KL\bigl(q(w_{ij} \mid m_{ij}, \alpha_{ij}) \,\|\, p(w_{ij})\bigr) = k(\alpha_{ij}) \qquad \text{(Eq. 7)}$$

Each term can be approximated as follows BIBREF0:

$$k(\alpha) \approx 0.64\,\sigma(1.87 + 1.49 \log \alpha) - 0.5 \log(1 + \alpha^{-1}) + C \qquad \text{(Eq. 8)}$$

The KL-divergence term encourages large values of $\alpha_{ij}$. If $\alpha_{ij} \rightarrow \infty$ for a weight $w_{ij}$, the posterior over this weight is a high-variance normal distribution, and it is beneficial for the model to put $m_{ij} = 0$, as well as $\sigma^2_{ij} = \alpha_{ij} m^2_{ij} = 0$, to avoid inaccurate predictions. As a result, the posterior over $w_{ij}$ approaches a zero-centered delta function: the weight does not affect the network's output and can be ignored.
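For concreteness, the additive reparameterization (Eq. 6) and the KL approximation (Eq. 8) can be sketched in a few lines of NumPy. Variable names are ours, and the snippet is a toy illustration under the paper's parameterization, not the authors' Theano implementation:

```python
import numpy as np

def sample_weights(m, log_sigma2, rng):
    """Additive reparameterization (Eq. 6): w = m + sigma * eps, eps ~ N(0, 1)."""
    return m + np.exp(0.5 * log_sigma2) * rng.standard_normal(m.shape)

def k_alpha(m, log_sigma2):
    """Approximate regularizer k(alpha) of Eq. 8, up to the constant C,
    with alpha = sigma^2 / m^2 computed stably in log space."""
    log_alpha = log_sigma2 - np.log(m ** 2 + 1e-8)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    return 0.64 * sigmoid(1.87 + 1.49 * log_alpha) - 0.5 * np.log1p(np.exp(-log_alpha))

rng = np.random.default_rng(0)
m = rng.normal(size=(4, 3))
log_sigma2 = np.full((4, 3), -6.0)   # the initialization used later in the paper
w = sample_weights(m, log_sigma2, rng)
print(w.shape, k_alpha(m, log_sigma2).sum())
```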
Yet another Bayesian model was proposed by Gal & Ghahramani (bindrop) to explain binary dropout. On this basis, a recipe for properly applying binary dropout to RNNs was proposed by Gal & Ghahramani (gal).

A recurrent neural network takes a sequence $x = [x_0, \dots, x_T]$, $x_t \in \mathbb{R}^n$, as input and maps it into a sequence of hidden states:

$$h_t = f_h(x_t, h_{t-1}) = g_h(x_t W^x + h_{t-1} W^h + b_1), \quad h_t \in \mathbb{R}^m, \quad h_0 = \bar{0} \qquad \text{(Eq. 10)}$$

Throughout the paper, we assume that the output of the RNN depends only on the last hidden state:

$$y = f_y(h_T) = g_y(h_T W^y + b_2) \qquad \text{(Eq. 11)}$$

Here $g_h$ and $g_y$ are some nonlinear functions. However, all the techniques discussed below can easily be applied to more complex settings, e.g., a language model with several outputs for one input sequence (one output for each time step).

Gal & Ghahramani (gal) considered RNNs as Bayesian networks. The prior on the recurrent-layer weights $\omega = \{W^x, W^h\}$ is a fully factorized standard normal distribution. The posterior is factorized over the rows of the weights, and each factor is searched for as a mixture of two normal distributions:

$$q(w^x_k \mid m^x_k) = p_x\, \mathcal{N}(0, \sigma^2 I) + (1 - p_x)\, \mathcal{N}(m^x_k, \sigma^2 I),$$
$$q(w^h_j \mid m^h_j) = p_h\, \mathcal{N}(0, \sigma^2 I) + (1 - p_h)\, \mathcal{N}(m^h_j, \sigma^2 I) \qquad \text{(Eq. 12)}$$

Under the assumption $\sigma \approx 0$, sampling a row of weights from such a posterior means setting all the weights of this row either to 0 (dropping the corresponding input neuron) or to some learned values. Thus, this model is a probabilistic analog of binary dropout with dropout rates $p_x$ and $p_h$. After unfolding the recurrence in the network, the maximization of the variational lower bound for this model looks as follows:

$$\sum_{i=1}^{\ell} \int q(\omega \mid M) \log p\Bigl(y_i \,\big|\, f_y\bigl(f_h(x_{iT}, f_h(\dots f_h(x_{i1}, h_{i0})\dots))\bigr)\Bigr)\, d\omega - KL\Bigl(q(\omega \mid M) \,\big\|\, p(\omega)\Bigr) \rightarrow \max_M \qquad \text{(Eq. 13)}$$

Each integral in the first part of (13) is estimated by MC integration with a single sample $\hat{\omega}_i \sim q(\omega \mid M)$. To make this estimate unbiased, (a) the weight sample $\hat{\omega}_i$ should remain the same for all time steps $t = \overline{1, T}$ for a fixed object, and (b) the dropout rates $p_x$ and $p_h$ should be fixed, because the distribution we sample from depends on them. The KL-divergence term in (13) is approximately equivalent to L2 regularization of the variational parameters $M$.

Finally, this probabilistic model leads to the following dropout application in RNNs: we sample a binary mask for the input and hidden neurons (one mask per object, shared across all moments of time) and optimize the L2-regularized log-likelihood, with the dropout rates and the weight of the L2 regularization chosen using cross-validation. The same dropout technique may also be applied to forward connections in RNNs, for example in embedding and dense layers BIBREF1.

The same technique can be applied to more complex architectures like LSTM, in which the information flow between input and hidden units is controlled by gate elements:

$$i = \mathrm{sigm}(h_{t-1} W^h_i + x_t W^x_i), \quad o = \mathrm{sigm}(h_{t-1} W^h_o + x_t W^x_o), \qquad \text{(Eq. 14)}$$
$$f = \mathrm{sigm}(h_{t-1} W^h_f + x_t W^x_f), \quad g = \tanh(h_{t-1} W^h_g + x_t W^x_g) \qquad \text{(Eq. 15)}$$

Here, binary dropout masks for the input and hidden neurons are generated four times: individually for each of the gates $i, o, f$ and the input modulation $g$.
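A minimal NumPy sketch of this recipe for a vanilla RNN is given below. The notation and the inverted-dropout scaling are our choices, and the weights and rates are toy stand-ins; the key point is that one Bernoulli mask per sequence is sampled for the input and hidden units and reused at every time step:

```python
import numpy as np

def rnn_forward_vbd(x_seq, Wx, Wh, b, p_x=0.3, p_h=0.3, rng=None):
    """Run a vanilla RNN over one sequence with variational binary dropout:
    the same dropout masks are applied at every time step."""
    rng = rng or np.random.default_rng()
    n, m = Wx.shape[0], Wh.shape[0]
    zx = rng.binomial(1, 1 - p_x, size=n) / (1 - p_x)  # input mask, fixed for all t
    zh = rng.binomial(1, 1 - p_h, size=m) / (1 - p_h)  # hidden mask, fixed for all t
    h = np.zeros(m)
    for x_t in x_seq:                                  # x_seq has shape (T, n)
        h = np.tanh((x_t * zx) @ Wx + (h * zh) @ Wh + b)
    return h

rng = np.random.default_rng(0)
T, n, m = 5, 4, 3
h_T = rnn_forward_vbd(rng.normal(size=(T, n)),
                      rng.normal(size=(n, m)), rng.normal(size=(m, m)),
                      np.zeros(m), rng=rng)
print(h_T)
```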
Dropout for RNNs as proposed by Gal & Ghahramani (gal) helps to avoid overfitting, but it is very sensitive to the choice of the dropout rates. On the other hand, Sparse VD allows automatic tuning of the Gaussian dropout parameters individually for each weight, which results in model sparsification. We combine these two techniques to sparsify and regularize RNNs.

Following Molchanov et al. (dmolch), we use the fully factorized log-uniform prior and approximate the posterior with a fully factorized normal distribution over the weights $\omega = \{W^x, W^h\}$:

$$q(w^x_{ki} \mid m^x_{ki}, \sigma^x_{ki}) = \mathcal{N}\bigl(m^x_{ki}, (\sigma^x_{ki})^2\bigr), \quad q(w^h_{ji} \mid m^h_{ji}, \sigma^h_{ji}) = \mathcal{N}\bigl(m^h_{ji}, (\sigma^h_{ji})^2\bigr) \qquad \text{(Eq. 17)}$$

where $\sigma^x_{ki}$ and $\sigma^h_{ji}$ have the same meaning as in the additive reparameterization (6). To train the model, we maximize the following approximation of the variational lower bound w.r.t. $\{M, \log \sigma\}$ using stochastic mini-batch methods:

$$\sum_{i=1}^{\ell} \int q(\omega \mid M, \sigma) \log p\Bigl(y_i \,\big|\, f_y\bigl(f_h(x_{iT}, f_h(\dots f_h(x_{i1}, h_{i0})\dots))\bigr)\Bigr)\, d\omega + \sum_{k,i=1}^{n,m} k\biggl(\frac{(\sigma^x_{ki})^2}{(m^x_{ki})^2}\biggr) + \sum_{j,i=1}^{m,m} k\biggl(\frac{(\sigma^h_{ji})^2}{(m^h_{ji})^2}\biggr) \qquad \text{(Eq. 18)}$$

Here, the recurrence in the expected log-likelihood term is unfolded as in (13), and the KL is approximated using (8). The integral in (18) is estimated with a single sample $\hat{\omega}_i \sim q(\omega \mid M, \sigma)$ per input sequence. We use the reparameterization trick for unbiased integral estimation and additive reparameterization for gradient variance reduction when sampling both the input-to-hidden and hidden-to-hidden weight matrices $\widehat{W}^x, \widehat{W}^h$.

To reduce the variance of the gradients further, and for more computational efficiency, we also apply the local reparameterization trick to the input-to-hidden matrix $\widehat{W}^x$, moving the noise from the weights to the preactivations:

$$(x_t \widehat{W}^x)_j = \sum_{k=1}^{n} x_{tk}\, m^x_{kj} + \epsilon_j \sqrt{\sum_{k=1}^{n} x^2_{tk}\, (\sigma^x_{kj})^2}, \quad \epsilon_j \sim \mathcal{N}(0, 1) \qquad \text{(Eq. 19)}$$

As a result, only 2-dimensional noise on the input-to-hidden connections is required: for each mini-batch, we generate one noise vector of length $m$ for each object in the mini-batch.

The local reparameterization trick cannot be applied to the hidden-to-hidden matrix $W^h$: we use the same sample $\widehat{W}^h$ for all moments of time, therefore in the multiplication $h_{t-1} \widehat{W}^h$ the vector $h_{t-1}$ depends on $\widehat{W}^h$, and the rule about the sum of normally distributed random variables cannot be applied. Since 3-dimensional noise (the two dimensions of $\widehat{W}^h$ and the mini-batch size) would be too resource-consuming, we sample one noise matrix for all objects in a mini-batch for efficiency:

$$\hat{w}^h_{ji} = m^h_{ji} + \sigma^h_{ji}\, \epsilon^h_{ji}, \quad \epsilon^h_{ji} \sim \mathcal{N}(0, 1) \qquad \text{(Eq. 20)}$$

The final framework works as follows: we sample Gaussian additive noise on the input-to-hidden preactivations (one per input sequence) and on the hidden-to-hidden weight matrix (one per mini-batch), and optimize the variational lower bound (18) w.r.t. $\{M, \log \sigma\}$. For many weights we obtain a posterior in the form of a zero-centered delta function, because the KL divergence encourages sparsity; these weights can then be safely removed from the model.

In LSTM, the same prior-posterior pair is considered for all input-to-hidden and hidden-to-hidden matrices, and all computations stay the same. The noise matrices for the input-to-hidden and hidden-to-hidden connections are generated individually for each of the gates $i, o, f$ and the input modulation $g$.
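The two sampling schemes can be sketched as follows. Shapes and names are ours, and the snippet is an illustration of Eqs. 19 and 20 under these assumptions, not the authors' Theano code:

```python
import numpy as np

def input_to_hidden_lrt(X_t, M_x, log_sigma2_x, rng):
    """Local reparameterization (Eq. 19) for the input-to-hidden product:
    sample the preactivation X_t @ W_x directly instead of sampling W_x.
    X_t: (batch, n); M_x and log_sigma2_x: (n, m)."""
    mean = X_t @ M_x
    var = (X_t ** 2) @ np.exp(log_sigma2_x)
    eps = rng.standard_normal(mean.shape)   # one (batch, m) noise matrix
    return mean + eps * np.sqrt(var + 1e-10)

def hidden_to_hidden_sample(M_h, log_sigma2_h, rng):
    """Eq. 20: one weight sample shared by all objects in the mini-batch."""
    return M_h + np.exp(0.5 * log_sigma2_h) * rng.standard_normal(M_h.shape)

rng = np.random.default_rng(0)
batch, n, m = 2, 4, 3
X_t = rng.normal(size=(batch, n))
pre = input_to_hidden_lrt(X_t, rng.normal(size=(n, m)), np.full((n, m), -6.0), rng)
W_h = hidden_to_hidden_sample(rng.normal(size=(m, m)), np.full((m, m), -6.0), rng)
print(pre.shape, W_h.shape)
```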
We perform experiments with LSTM, as the most popular recurrent architecture nowadays. We use Theano BIBREF14 and Lasagne BIBREF15 for the implementation. The source code will be available soon at https://github.com/tipt0p/SparseBayesianRNN.

We demonstrate the effectiveness of our approach on two diverse problems: character-level language modeling and sentiment analysis. Our results show that Sparse Variational Dropout leads to a high level of sparsity in recurrent models without a significant quality drop. We use the dropout technique of Gal & Ghahramani (gal) as a baseline, because it is the dropout technique most similar to our approach, and denote it VBD (variational binary dropout).

According to Molchanov et al. (dmolch), training neural networks with Sparse Variational Dropout from a random initialization is troublesome, as many weights may be pruned away before they can learn anything useful from the data. We observe the same effect in our experiments with LSTMs, especially with more complex models: an LSTM trained from a random initialization may reach a high sparsity level but also show a noticeable quality drop. To overcome this issue, we start from pretrained models that we obtain by training networks without Sparse Variational Dropout for several epochs.

Weights in models with Sparse Variational Dropout cannot converge exactly to zero because of the stochastic nature of the training procedure. To obtain sparse networks, we explicitly set weights with high corresponding dropout rates to 0 during testing, as in Molchanov et al. (dmolch); we use the value $\log \alpha = 3$ as the threshold. For all weights that we sparsify using Sparse Variational Dropout, we initialize $\log \sigma^2$ with $-6$. We optimize our networks using Adam BIBREF16. Networks without any dropout overfit on both of our tasks; we therefore report results for them with early stopping. Throughout the experiments, we use the mean values of the weights to evaluate model quality: we do not sample weights from the posterior at the evaluation phase, which is common practice when working with dropout.
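The test-time pruning rule just described can be sketched in a few lines; the values below are toy stand-ins, while the threshold follows the paper:

```python
import numpy as np

def prune_by_log_alpha(m, log_sigma2, threshold=3.0):
    """Zero out weights with a high dropout rate, i.e. log(alpha) > threshold,
    where alpha = sigma^2 / m^2 (the rule used at test time)."""
    log_alpha = log_sigma2 - np.log(m ** 2 + 1e-8)
    keep = log_alpha <= threshold
    sparsity = 1.0 - keep.mean()
    return m * keep, sparsity

rng = np.random.default_rng(0)
m = rng.normal(scale=0.1, size=(1000,))
log_sigma2 = rng.uniform(-8, 2, size=(1000,))
m_sparse, sparsity = prune_by_log_alpha(m, log_sigma2)
print(f"sparsity: {sparsity:.1%}")
```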
Following Gal & Ghahramani (gal), we evaluated our approach on a sentiment analysis regression task. The dataset is constructed from the Cornell film-reviews corpus collected by Pang & Lee (regrdata). It consists of approximately 10 thousand non-overlapping segments of 200 words from the reviews. The task is to predict the corresponding film scores, from 0 to 1. We use the provided train and test partitions.

We use networks with one embedding layer of 128 units, one LSTM layer of 128 hidden units, and finally a fully connected layer applied to the last output of the LSTM, resulting in a scalar output. All weights are initialized in the same way as in Gal & Ghahramani (gal). We train our networks using batches of size 128 and a learning rate of 0.001 for 1000 epochs. We also clip the gradients with threshold 0.1. For all layers with VBD, we use a dropout rate of 0.3 and weight decay of $10^{-3}$; these parameters were chosen using cross-validation.

As baselines, we train the network without any dropout and with VBD on all layers. In this experiment our goal is to check the applicability of Sparse VD to recurrent networks, so we apply it only to the LSTM layer; for the embedding and dense layers we use VBD. We train the network with Sparse VD both from a random initialization and from two different pretrained models. The first pretrained model is obtained after 4 epochs of training the network without any dropout; the second is obtained after 200 epochs of training the network with VBD on all layers. We choose the number of pretraining epochs using model quality on cross-validation.

The results are shown in Table 1. On this task our approach achieves an extremely high sparsity level, both from a random initialization and from pretrained models. Sparse VD networks trained from pretrained models achieve even better quality than the baselines. Note that the models already reach this sparsity level after approximately 20 epochs.

Following Mikolov et al. (mikolov11), we use the Penn Treebank Corpus to train our language model (LM). The dataset contains approximately 6 million characters and a vocabulary of 50 characters. We use the provided train, validation, and test partitions.

We use networks with one LSTM layer of 1000 hidden units to solve the character-level LM task. All weight matrices of the networks are initialized orthogonally, and all biases are initialized with zeros. The initial values of the hidden and cell elements are trainable and also initialized with zeros. We train our networks on non-overlapping sequences of 100 characters in batches of 64, using a learning rate of 0.002 for 50 epochs, and clip the gradients with threshold 1.

For all layers with VBD, we use a dropout rate of 0.25 and do not use weight decay; these parameters were chosen using the quality of the VBD model on the validation set. As baselines, we train the network without any dropout and with VBD only on the recurrent (hidden-to-hidden) weights. Semeniuta et al. (semeniuta16) showed that, for this particular task, applying dropout to the feed-forward connections in addition to VBD on the recurrent ones does not improve network quality; we observe the same effect in our experiments. In this experiment we try to sparsify both the LSTM and dense layers, so we apply Sparse VD to all layers. We train the network with Sparse VD both from a random initialization and from two different pretrained models. The first pretrained model is obtained after 11 epochs of training the network without any dropout; the second is obtained after 50 epochs of training the network with VBD on the recurrent connections. We choose the number of pretraining epochs using model quality on the validation set.

The results are shown in Table 2. Here we do not achieve as extreme a sparsity level as in the previous experiment. This may be a consequence of the higher complexity of the task. Also, in the LM problem we have several outputs for one input sequence (one output for each time step) instead of the single output in sentiment regression. As a result, the log-likelihood part of the loss function is much stronger for the LM task, and the regularizer cannot sparsify the network as effectively. We see that the balance between the likelihood and the regularizer varies a lot between different RNN tasks and should be explored further.

Figures 1 and 2 show the progress of test quality and network sparsity through the training process. The Sparse VD network trained from a random initialization underfits and therefore shows a slight quality drop in comparison with the baseline network without regularization. Sparse VD networks trained from pretrained models achieve much higher quality, but have lower sparsity levels than the one trained from a random initialization: better pretrained models are harder to sparsify. The quality of the model pretrained with VBD drops during the first epochs while the sparsity grows, and the model does not fully recover later.

Deep neural networks often suffer from overfitting, and different regularization techniques are used to improve their generalization ability. Dropout BIBREF11 is a popular method for neural network regularization. The first successful implementations of this method for RNNs BIBREF17, BIBREF18 applied dropout only to the feed-forward connections and not the recurrent ones. Introducing dropout into the recurrent connections may lead to better regularization, but a straightforward implementation may result in underfitting and memory loss through time BIBREF19.

Several ways of applying dropout to the recurrent connections in LSTM have been proposed recently BIBREF20, BIBREF1, BIBREF19; these methods inject binary noise into different parts of the LSTM units. Semeniuta et al. (semeniuta16) show that a proper implementation of dropout for recurrent connections is important not only for effective regularization but also to avoid vanishing gradients. Bayer et al. (bayer13) successfully applied fast dropout BIBREF13, a deterministic approximation of dropout, to RNNs. Krueger et al. (zoneout) introduced zoneout, which forces some hidden units to maintain their previous values, as in feed-forward stochastic-depth networks BIBREF21.

Reducing RNN size is an important and rapidly developing area of research. One possible approach is to represent the large RNN weight matrix by some approximation of smaller size: for example, Tjandra et al. (tjandra) use a Tensor Train decomposition of the weight matrices, and Le et al. (kroneker) approximate the matrix with a Kronecker product. Hubara et al. (itay) limit the weights and activations to binary values, proposing a way to compute gradients w.r.t. them. Another approach is to start with a large network and reduce its size during or after training. The most popular technique here is pruning: the weights of the RNN are cut off at some threshold. Narang et al. (pruning) choose the threshold using several hyperparameters that control the frequency, the rate, and the duration of the weight elimination.

When applying Sparse VD to RNNs, we rely on the dropout for RNNs proposed by Gal & Ghahramani (gal), because this dropout technique is the closest one to the Sparse VD approach. However, there are several other dropout methods for recurrent networks that outperform this baseline BIBREF19, BIBREF22; comparison with them is future work. Combining Sparse VD with these latest dropout recipes is also an interesting research direction. The challenge here is that the noise should be put on the neurons or gates instead of on the weights, as in our model; several recent papers BIBREF23, BIBREF24 propose group sparsity methods for fully connected and convolutional networks that could be used to solve this problem. A comparison of our approach with other RNN sparsification techniques is still work in progress; it would be interesting to perform this comparison on larger networks, for example for a speech recognition task. One more curious research direction is to sparsify not only the recurrent layer but the embedding layer too, since it may contain a lot of parameters in tasks with a large dictionary, such as word-level language modeling.

We would like to thank Dmitry Molchanov and Arsenii Ashukha for valuable feedback. Nadezhda Chirkova has been supported by the Russian Academic Excellence Project 5-100, and Ekaterina Lobacheva has been supported by Russian Science Foundation grant 17-71-20072. We would also like to thank the Department of Algorithms and Theory of Programming, Faculty of Innovation and High Technology in Moscow Institute of Physics and Technology, for the provided computational resources.
Towards a Robust Deep Neural Network in Text Domain: A Survey

Nowadays, DNNs have solved masses of significant practical problems in various areas like computer vision BIBREF0, BIBREF1, audio BIBREF2, BIBREF3, natural language processing (NLP) BIBREF4, BIBREF5, etc. Due to this great success, systems based on DNNs are widely deployed in the physical world, including in some security-sensitive tasks. However, Szegedy et al. BIBREF6 found an interesting fact: a crafted input with small perturbations can easily fool DNN models. This kind of input is called an adversarial example. With the development of theory and practice, the definitions of adversarial examples BIBREF6, BIBREF7, BIBREF8, BIBREF9 have varied, but these definitions have two cores in common: the perturbations are small, and they have the ability to fool DNN models.

A question naturally arises: why do adversarial examples exist in DNNs? The reason DNNs are vulnerable to adversarial examples is probably their linear nature; Goodfellow et al. BIBREF7 gave this explanation after adversarial examples arose. Researchers therefore treat adversarial examples as a security problem and pay much attention to works on adversarial attacks and defenses BIBREF10, BIBREF11.

In recent years, the categories of adversarial examples have become diverse, varying from image to audio and others. This means that almost all deployed systems based on DNNs are under the potential threat of adversarial attacks. For example, sign recognition systems BIBREF12, object recognition systems BIBREF13, audio recognition or control systems BIBREF14, BIBREF15, BIBREF16, and malware detection systems BIBREF17, BIBREF18 are all hard to defend against this kind of attack. Of course, systems for NLP tasks are also under the threat of adversarial examples, such as text classification, sentiment analysis, question answering systems, recommendation systems, and so on.

In real life, people are increasingly inclined to search for related comments before shopping, eating, or watching a film, and the corresponding items are given recommendation scores at the same time. The higher the score is, the more likely the item is to be accepted by humans. These recommendation apps mainly take advantage of sentiment analysis over others' previous comments BIBREF19. Thus, attackers could generate adversarial examples based on natural comments to smear competitors (see Fig. 1 for an instance) or to make malicious recommendations for shoddy goods, with the purpose of profit or other malicious intents. Apart from the cases mentioned above, adversarial examples can also poison the network environment and hinder the detection of malicious information BIBREF20, BIBREF21, BIBREF22. Hence, it is significant to know how adversarial attacks are conducted and what measures can defend against them, in order to make DNNs more robust.

This paper presents a comprehensive survey on adversarial attacks and defenses in the text domain, to give interested readers a better understanding of this topic. It presents the following contributions.

The remainder of this paper is organized as follows. We first give some background on adversarial examples in Section "Background". In Section "Adversarial Attacks in Text", we review adversarial attacks for text classification and other real-world NLP tasks. Research with the central topic of defense is introduced in Sections "Defenses against Adversarial Attacks in Text" and "Testing and Verification", as important defenses against adversarial attacks: one covers existing defense methods in text, and the other concerns how to improve the robustness of DNNs from another point of view. The discussion and conclusion of the article are in Sections "Discussion of Challenges and Future Direction" and "Conclusion".
In this section, we describe some research background on textual adversarial examples, including the symbol representation and the attack types and scenarios.

The function of a pretrained text classification model $F$ is to map from the input set to the label set. A clean text example $x$ is correctly classified by $F$ to the ground-truth label $y \in Y$, where $Y = \{1, 2, \ldots, k\}$ is a label set of $k$ classes. An attacker aims at adding small perturbations to $x$ to generate an adversarial example $x'$, so that $F(x') = y'$ with $y' \ne y$. Generally speaking, a good $x'$ should not only be misclassified by $F$ but also be imperceptible to humans, robust to transformations, and resilient to existing defenses, depending on the adversarial goals BIBREF24. Hence, constraint conditions (e.g., semantic similarity, distance metrics, etc.) are appended in some works to make $x'$ indistinguishable from $x$ while still causing classification errors, as in Fig. 1.

One reason adversarial examples raise such great concern is that adversarial attacks can easily be conducted on DNNs even when attackers have no knowledge of the target model. Accordingly, attacks can be categorized by the level of access to the model.

Black-box. A more detailed division can be made for black-box attacks, resulting in black-box attacks with or without probing. In the former scenario, adversaries can probe the target model by observing its outputs, even if they do not know much about the model; this case can also be called a gray-box attack. In the latter scenario, adversaries have little or no knowledge of the target model and cannot probe it. Under this condition, adversaries generally train their own models and utilize the transferability BIBREF7, BIBREF25 of adversarial examples to carry out an attack.

White-box. In a white-box attack, adversaries have full access to the target model: they know the architecture, parameters, and weights of the model. Naturally, neither white-box nor black-box attacks may change the model or the training data.

According to the purpose of the adversary, adversarial attacks can be categorized as targeted and non-targeted attacks.

Targeted attack. The generated adversarial example $x'$ is purposefully classified as class $t$, which is the target of the adversary.

Non-targeted attack. The adversary only wants to fool the model, and the result $y'$ can be any class except the ground truth $y$.

An important issue is that generated adversarial texts should not only be able to fool target models but also keep the perturbations imperceptible. In other words, good adversarial examples should convey the same semantic meaning as the original ones, so metric measures are required to ensure this. Below, we describe different kinds of measures for evaluating the utility of adversarial examples in image and text, and then analyze the reasons why the metric measures used in image are not suitable for text.

In image, almost all recent studies on adversarial attacks adopt the $L_p$ distance as the metric to quantify the imperceptibility and similarity of adversarial examples. The generalized form of the $L_p$ distance is as follows:

$$\Vert \Delta x \Vert_p = \sqrt[p]{\sum_{i=1}^{n} |x'_i - x_i|^p} \qquad \text{(Eq. 9)}$$

where $\Delta x$ represents the perturbation. This equation defines a family of distances, where $p$ could be 0, 1, $\infty$, and so on. In particular, $L_0$ BIBREF26, BIBREF27, BIBREF28, $L_2$ BIBREF29, BIBREF28, BIBREF30, BIBREF31, and $L_\infty$ BIBREF6, BIBREF7, BIBREF31, BIBREF32, BIBREF33, BIBREF34 are the three most frequently used norms for adversarial images.

The $L_0$ distance evaluates the number of pixels changed by the modification. It resembles an edit distance, but it may not directly work in text, because the effects of altered words vary: some altered words are similar to the original ones and others may be contrary, even when the $L_0$ distance is the same.

The $L_2$ distance is the Euclidean distance. The original Euclidean distance is the length of the straight line from one point to another in Euclidean space. When images, texts, or other objects are mapped into it as vectors, Euclidean space becomes a metric space in which the similarity between two objects can be calculated.

The $L_\infty$ distance measures the maximum change:

$$\Vert \Delta x \Vert_\infty = \max(|x'_1 - x_1|, \ldots, |x'_n - x_n|) \qquad \text{(Eq. 13)}$$

Although the $L_\infty$ distance is thought to be the optimal distance metric in some work, it may fail in text: altered words may not exist in the pretrained dictionary, so they are treated as unknown words and their word vectors are also unknown. As a result, the $L_\infty$ distance is hard to calculate. There are also other metric measures (e.g., structural similarity BIBREF35 and perturbation sensitivity BIBREF36) that are typical methods for image; some of them are considered more effective than the $L_p$ distance, but they cannot be directly used in text either.

To overcome the metric problem in adversarial texts, several measures have been presented; we describe five of them that have been demonstrated in the pertinent literature.

Euclidean distance. In text, for two given word vectors $\vec{m} = (m_1, m_2, \ldots, m_k)$ and $\vec{n} = (n_1, n_2, \ldots, n_k)$, the Euclidean distance is

$$D(\vec{m}, \vec{n}) = \sqrt{(m_1 - n_1)^2 + \ldots + (m_k - n_k)^2} \qquad \text{(Eq. 15)}$$

Euclidean distance is used more often as a metric for adversarial images BIBREF29, BIBREF28, BIBREF30, BIBREF31 than for texts, under the generalized name $L_2$ norm or $L_2$ distance.

Cosine similarity. Cosine similarity is also a computational method for semantic similarity based on word vectors, using the cosine of the angle between two vectors. Compared with Euclidean distance, cosine distance pays more attention to the difference in direction between two vectors: the more consistent the directions of two vectors are, the greater the similarity is. For two given word vectors $\vec{m}$ and $\vec{n}$, the cosine similarity is

$$D(\vec{m}, \vec{n}) = \frac{\vec{m} \cdot \vec{n}}{\Vert \vec{m} \Vert \cdot \Vert \vec{n} \Vert} = \frac{\sum_{i=1}^{k} m_i \times n_i}{\sqrt{\sum_{i=1}^{k} m_i^2} \times \sqrt{\sum_{i=1}^{k} n_i^2}} \qquad \text{(Eq. 16)}$$

A limitation is that the dimensions of the two word vectors must be the same.

Jaccard similarity coefficient. For two given sets $A$ and $B$, their Jaccard similarity coefficient is

$$J(A, B) = \frac{|A \cap B|}{|A \cup B|} \qquad \text{(Eq. 17)}$$

where $0 \le J(A, B) \le 1$: the closer the value of $J(A, B)$ is to 1, the more similar the sets are. In text, the intersection $A \cap B$ refers to the words shared by the examples, and the union $A \cup B$ is all their words without duplication.

Word Mover's Distance (WMD). WMD BIBREF37 is a variation of the Earth Mover's Distance (EMD) BIBREF38. It can be used to measure the dissimilarity between two text documents, relying on the travelling distance from the embedded words of one document to those of another. In other words, WMD can quantify the semantic similarity between texts. Euclidean distance is also used in the calculation of WMD.
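Both of the first two text measures are a few lines of code. The vectors and sentences below are toy inputs for illustration:

```python
import numpy as np

def cosine_similarity(m, n):
    """Eq. 16: cosine of the angle between two word vectors."""
    return float(m @ n / (np.linalg.norm(m) * np.linalg.norm(n) + 1e-8))

def jaccard(a_text, b_text):
    """Eq. 17: word-set overlap between two texts."""
    a, b = set(a_text.split()), set(b_text.split())
    return len(a & b) / len(a | b)

m = np.array([0.2, 0.7, -0.1]); n = np.array([0.25, 0.6, 0.0])
print(cosine_similarity(m, n))
print(jaccard("the movie was great", "the movie was terrible"))
```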
Edit distance. Edit distance measures the minimum number of modifications needed to turn one string into another; the higher it is, the more dissimilar the two strings are. It can be applied in computational biology and natural language processing. The Levenshtein distance BIBREF39, an edit distance with insertion, deletion, and replacement operations, is used in the work of BIBREF23.
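For reference, a standard dynamic-programming implementation of the Levenshtein distance is shown below:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions
    needed to turn string a into string b (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

print(levenshtein("adversarial", "adversary"))  # -> 3
```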
To make the data more accessible to those who need it, we collect datasets that have been used for NLP tasks in recent literature and give brief introductions. These datasets can be downloaded via the corresponding links.

AG's News (http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html). This is a news set with more than one million articles, gathered from over 2,000 news sources by an academic news search engine named ComeToMyHead. The provided db and xml versions can be downloaded for any non-commercial use.

DBPedia Ontology (https://wiki.dbpedia.org/services-resources/ontology). This is a dataset with structured content from the information created in various Wikimedia projects. It has 685 classes with 2,795 different properties, and there are now more than 4 million instances included in this dataset.

Amazon Review (http://snap.stanford.edu/data/web-Amazon.html). The Amazon review dataset has nearly 35 million reviews spanning June 1995 to March 2013, including product and user information, ratings, and a plain-text review. It was collected from over 6 million users on more than 2 million products and is categorized into 33 classes, with sizes ranging from KB to GB.

Yahoo! Answers (https://sourceforge.net/projects/yahoodataset/). The corpus contains 4 million questions and their answers, which can easily be used in question answering systems. Besides that, a topic classification dataset can also be constructed from its main classes.

Yelp Reviews (https://www.yelp.com/dataset/download). The provided data is made available by Yelp to enable researchers or students to develop academic projects. It contains 4.7 million user reviews as json and sql files.

Movie Review (MR) (http://www.cs.cornell.edu/people/pabo/movie-review-data/). This is a labeled dataset with respect to sentiment polarity, subjective rating, and sentences with subjectivity status or polarity. Probably because it is labeled by humans, the size of this dataset is smaller than the others, with a maximum of dozens of MB.

MPQA Opinion Corpus (http://mpqa.cs.pitt.edu/). The Multi-Perspective Question Answering (MPQA) Opinion Corpus is collected from a wide variety of news sources and annotated for opinions and other private states. Three different versions are made available by the MITRE Corporation; the higher the version, the richer the contents.

Internet Movie Database (IMDB) (http://ai.stanford.edu/~amaas/data/sentiment/). IMDB is crawled from the Internet and includes 50,000 positive and negative reviews, with an average review length of nearly 200 words. It is usually used for binary sentiment classification and includes richer data than other similar datasets. IMDB also contains additional unlabeled data, both raw text and already-processed data.

SNLI Corpus (https://nlp.stanford.edu/projects/snli/). The Stanford Natural Language Inference (SNLI) Corpus is a collection of manually labeled data, mainly for the natural language inference (NLI) task. There are nearly five hundred thousand sentence pairs written by humans in a grounded context. More details about this corpus can be found in the research of Samuel et al. BIBREF40.

Because the purpose of adversarial attacks is to make DNNs misbehave, they can be seen as a classification problem in a broad sense, and the majority of recent representative adversarial attacks in text relate to classification, so we categorize them by this feature. In this section, we introduce the majority of existing adversarial attacks in text; technical details and corresponding comments on each attack method are given to make them clearer to readers. Adversarial attacks can be subdivided in many ways, as described in Section "Discussion of Challenges and Future Direction". For a more granular division of classification tasks, we introduce these attack methods group by group, based on the desires of the attackers.

The studies in this part are all non-targeted attacks, in which the attacker does not care about the category of the misclassified result.

Papernot et al. BIBREF41 might be the first to study the problem of adversarial examples in text, contributing a method to produce adversarial input sequences on recurrent neural networks (RNNs). They leveraged computational graph unfolding BIBREF42 to evaluate the forward derivative BIBREF26, i.e., the Jacobian, with respect to the embedding inputs of the word sequences. Then, for each word of the input, the fast gradient sign method (FGSM) BIBREF7 was used on the Jacobian tensor evaluated above to find the perturbations. Meanwhile, to solve the mapping problem of the modified word embeddings, they set up a special dictionary and chose replacement words under the constraint that the sign of the difference between the replaced and original words be closest to the result given by FGSM. Although adversarial input sequences can make a long short-term memory (LSTM) BIBREF43 model misbehave, the words of the input sequences were randomly chosen, and there might be grammatical errors.

Samanta et al. BIBREF44 also proposed an FGSM-based method, like the adversarial input sequences of BIBREF41; the difference is that three modification strategies - insertion, replacement, and deletion - were introduced to generate adversarial examples while preserving the semantic meaning of the input as much as possible. The premise of these modifications is to calculate the important or salient words that would highly affect the classification result if removed. The authors utilized the concept of FGSM to evaluate the contribution of each word in a text and then targeted the words in decreasing order of contribution. Except for deletion, both insertion and replacement of high-ranking words needed candidate pools, including synonyms, typos, and genre-specific keywords, to assist; the authors therefore built a candidate pool for each word in the experiment. However, this consumes a great deal of time, and the most important words in an actual input text might not have candidate pools.

Unlike the previous white-box methods BIBREF41, BIBREF44, little attention had been paid to generating adversarial examples for black-box attacks on text. Gao et al. BIBREF23 proposed a novel algorithm, DeepWordBug, to make DNNs misbehave in the black-box scenario. The two-stage process they presented consists of determining the important tokens to change and then creating imperceptible perturbations that can evade detection. The scoring for the first stage is computed as follows:

$$CS(x_i) = \bigl[F(x_1, \ldots, x_{i-1}, x_i) - F(x_1, x_2, \ldots, x_{i-1})\bigr] + \lambda \bigl[F(x_i, x_{i+1}, \ldots, x_n) - F(x_{i+1}, \ldots, x_n)\bigr] \qquad \text{(Eq. 23)}$$

where $x_i$ is the $i$-th word in the input and $F$ is a function that evaluates the confidence score. Later, similar modifications - swap, substitution, deletion, and insertion - were applied to the important tokens to craft better adversarial examples. Meanwhile, to preserve the readability of these examples, the edit distance was used by the authors.
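A sketch of this scoring scheme is shown below. The `confidence` function is a hypothetical stand-in for querying the target model, and `lam` corresponds to $\lambda$ in Eq. 23:

```python
def token_scores(tokens, confidence, lam=1.0):
    """Combined temporal scoring in the spirit of DeepWordBug (Eq. 23):
    estimate each token's contribution to the model's confidence from
    prefix and suffix differences."""
    scores = []
    for i in range(len(tokens)):
        head = confidence(tokens[: i + 1]) - confidence(tokens[:i])
        tail = confidence(tokens[i:]) - confidence(tokens[i + 1:])
        scores.append(head + lam * tail)
    return scores

# Toy "model": confidence grows with the number of opinion words present.
toy = lambda toks: sum(t in {"great", "terrible"} for t in toks) / 3.0
sent = "the movie was great".split()
print(sorted(zip(token_scores(sent, toy), sent), reverse=True))
```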
Different from other methods, Sato et al. BIBREF45 operated in the input embedding space for text and reconstructed adversarial examples to misclassify the target model. The core idea of this method is to search for the weights of direction vectors that maximize the loss function with overall parameters $W$:

$$\alpha^{iAdvT} = \mathop{\arg\max}_{\alpha,\, \Vert \alpha \Vert \le \epsilon} \Bigl\lbrace \ell\Bigl(\vec{w} + \sum_{k=1}^{|V|} a_k d_k,\, \hat{Y};\, W\Bigr) \Bigr\rbrace \qquad \text{(Eq. 25)}$$

where $\sum_{k=1}^{|V|} a_k d_k$ is the perturbation generated for each input on its word embedding vector $\vec{w}$, and $\vec{d}$ is the direction vector from one word to another in the embedding space. Because $\alpha^{iAdvT}$ in Eq. 25 is hard to calculate, the authors used Eq. 26 instead:

$$\alpha^{iAdvT} = \frac{\epsilon\, g}{\Vert g \Vert_2}, \qquad g = \nabla_\alpha\, \ell\Bigl(\vec{w} + \sum_{k=1}^{|V|} a_k d_k,\, \hat{Y};\, W\Bigr) \qquad \text{(Eq. 26)}$$

The loss function of iAdvT was then defined, based on $\alpha^{iAdvT}$, as an optimization problem that jointly minimizes the objective over the entire training dataset $D$:

$$\hat{W} = \mathop{\arg\min}_{W} \frac{1}{|D|} \Bigl\lbrace \sum_{(\hat{X}, \hat{Y}) \in D} \ell(\hat{X}, \hat{Y}; W) + \lambda \sum_{(\hat{X}, \hat{Y}) \in D} \ell(\hat{X} + \gamma\, \alpha^{iAdvT}, \hat{Y}; W) \Bigr\rbrace \qquad \text{(Eq. 27)}$$

Compared with Miyato et al. BIBREF46, iAdvT restricts the direction of the perturbation so that the substitute is a word in the predefined vocabulary rather than an unknown word. It thus improves the interpretability of the adversarial examples produced by adversarial training. The authors also took advantage of cosine similarity to select better perturbations. Similarly, Gong et al. BIBREF47 also searched for adversarial perturbations in the embedding space, but their method was gradient-based; even though they used WMD to measure the similarity between clean and adversarial examples, the readability of the generated results seems a little poor.

Li et al. BIBREF48 proposed an attack framework, TextBugger, for generating adversarial examples that trigger errors in deep learning-based text understanding systems, in both black-box and white-box settings. They followed the general steps: capture the words that are important to the classification, then craft perturbations on them. In the white-box setting, the Jacobian matrix is used to calculate the importance of each word:

$$C_{x_i} = J_{F_i}(y) = \frac{\partial F_y(x)}{\partial x_i} \qquad \text{(Eq. 29)}$$

where $F_y(\cdot)$ represents the confidence value of class $y$. The slight changes to words are made at the character level and word level, respectively, with operations like insertion, deletion, swap, and substitution. In the black-box setting, the authors segment documents into sequences and probe the target model to filter out the sentences whose predicted labels differ from the original; the remaining sequences are sorted in inverse order of their confidence scores. The important words are then found by a removal method:

$$C_{x_i} = F_y(x_1, \ldots, x_{i-1}, x_i, x_{i+1}, \ldots, x_n) - F_y(x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n) \qquad \text{(Eq. 30)}$$

The final modification process is the same as in the white-box setting.
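The character-level operations can be illustrated with a toy bug generator. The substitution map below is our simplification of visually similar character substitutions, and the whole snippet is a sketch rather than TextBugger's implementation:

```python
import random

def char_bugs(word, rng=random.Random(0)):
    """Generate simple character-level candidate bugs for one word, in the
    spirit of insert / delete / swap / substitute operations."""
    sub_map = {"a": "@", "o": "0", "l": "1", "e": "3"}  # our toy assumption
    bugs = set()
    if len(word) > 2:
        i = rng.randrange(1, len(word) - 1)
        bugs.add(word[:i] + word[i + 1:])                          # delete
        bugs.add(word[:i] + word[i + 1] + word[i] + word[i + 2:])  # swap
    i = rng.randrange(1, len(word))
    bugs.add(word[:i] + " " + word[i:])                            # insert space
    bugs.add("".join(sub_map.get(c, c) for c in word))             # substitute
    return sorted(bugs)

print(char_bugs("foolish"))
```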
In a targeted attack, the attacker purposefully controls the category of the output, and the generated examples keep semantic information similar to the clean ones. This kind of attack is described below, method by method.

Different from the works in BIBREF41, BIBREF44, Liang et al. BIBREF49 first demonstrated that FGSM cannot be directly applied to text, because the input space of text is discrete while image data is continuous: continuous images tolerate tiny perturbations, but text does not have this feature. Instead, the authors utilized FGSM to determine what, where, and how to insert, remove, and modify the text input. They conducted attacks in two scenarios and used the natural language watermarking technique BIBREF50 to keep the utility of the generated adversarial examples. In the white-box scenario, the authors defined the concepts of hot training phrases and hot sample phrases, both obtained by leveraging back-propagation to compute the cost gradients of the samples: the former sheds light on what to insert, and the latter implies where to insert, remove, and modify. In the black-box scenario, the authors borrowed the idea of fuzzing BIBREF51 to obtain hot training phrases and hot sample phrases, under the assumption that the target model can be probed. Samples are fed to the target model, and isometric whitespace is used to substitute the original words one at a time. The difference between the two classification results is the corresponding word's deviation: the larger it is, the more significant the word is for the classification. Hence, the hot training phrases are the most frequent words in the set consisting of the largest-deviation word of each training sample, and the hot sample phrases are the words with the largest deviation in each test sample.

Like the one-pixel attack BIBREF27, a similar method named HotFlip was proposed by Ebrahimi et al. BIBREF52. HotFlip is a white-box attack in text, relying on an atomic flip operation that swaps one token for another based on gradient computations. The authors represented samples as one-hot vectors in the input space, so a flip of the $j$-th character of the $i$-th word in a sample from $a$ to $b$ can be represented by the vector

$$\vec{v}_{ijb} = \bigl(\vec{0}, \ldots; \bigl(\vec{0}, \ldots, (0, \ldots, -1, 0, \ldots, 1, 0, \ldots)_j, \ldots, \vec{0}\bigr)_i; \vec{0}, \ldots\bigr) \qquad \text{(Eq. 34)}$$

where $-1$ and $1$ are at the positions of $a$ and $b$ in the alphabet, respectively. The change in loss is estimated from the directional derivative along this vector, and the best flip is found by maximizing a first-order approximation of the loss $J(x, y)$:

$$\max \nabla_x J(x, y)^T \cdot \vec{v}_{ijb} = \mathop{\max}_{i,j,b} \frac{\partial J}{\partial x_{ijb}} - \frac{\partial J}{\partial x_{ija}} \qquad \text{(Eq. 35)}$$

where $x_{ija} = 1$. HotFlip can also be used for character-level insertion and deletion and for word-level modification. Although HotFlip performed well on character-level models, only a few successful adversarial examples could be generated with one or two flips under the strict constraints.
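A sketch of the flip selection in Eq. 35 is given below, assuming a precomputed gradient of the loss w.r.t. the one-hot input; the gradient here is random noise standing in for a real backward pass:

```python
import numpy as np

def best_flip(grad, onehot):
    """Pick the single character flip that maximizes the first-order loss
    increase (Eq. 35). grad and onehot have shape (seq_len, alphabet);
    onehot[i, a] = 1 for the current character a at position i."""
    current = (grad * onehot).sum(axis=1, keepdims=True)  # dJ/dx at letter a
    gain = grad - current                                 # dJ/dx_b - dJ/dx_a
    gain[onehot.astype(bool)] = -np.inf                   # forbid flipping to itself
    i, b = np.unravel_index(np.argmax(gain), gain.shape)
    return i, b, gain[i, b]  # position, new letter index, estimated loss gain

rng = np.random.default_rng(0)
seq_len, alphabet = 5, 26
onehot = np.eye(alphabet)[rng.integers(0, alphabet, seq_len)]
grad = rng.normal(size=(seq_len, alphabet))  # stand-in for a real gradient
print(best_flip(grad, onehot))
```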
Considering the limitations of gradient-based methods BIBREF41, BIBREF44, BIBREF22, BIBREF52 in the black-box case, Alzantot et al. BIBREF53 proposed a population-based optimization via a genetic algorithm BIBREF54, BIBREF55 to generate semantically similar adversarial examples. They randomly selected words in the input and computed their nearest neighbors by Euclidean distance in the GloVe embedding space BIBREF56. Neighbors that did not fit the surrounding context were filtered out based on language-model scores BIBREF57, and only the high-ranking words with the highest scores were kept. The substitute that maximized the probability of the target label was picked from the remaining words. These operations were conducted several times to obtain a generation. If the predicted label of no modified sample in a generation was the target label, the next generation was produced by repeatedly choosing two samples as parents and combining them, and the same process was repeated on it. This optimization procedure searches for a successful attack via the genetic algorithm. However, randomly selecting the words to substitute introduces uncertainty: the changed words may be meaningless for the target label.
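The following toy loop sketches this population-based search. The fitness function, the neighbor map, and the success criterion are hypothetical stand-ins for the target model's class probability, the filtered GloVe neighbors, and the attack goal:

```python
import random

def genetic_attack(seed_tokens, fitness, neighbors, generations=10,
                   pop_size=8, rng=random.Random(0)):
    """Toy population-based attack: mutate words with embedding-space
    neighbors, keep high-fitness samples, and breed by crossover."""
    def mutate(tokens):
        t = list(tokens)
        i = rng.randrange(len(t))
        cands = neighbors(t[i])
        if cands:  # pick the substitute that maximizes target-class fitness
            t[i] = max(cands, key=lambda w: fitness(t[:i] + [w] + t[i + 1:]))
        return t

    pop = [mutate(seed_tokens) for _ in range(pop_size)]
    for _ in range(generations):
        best = max(pop, key=fitness)
        if fitness(best) > 0.5:  # hypothetical success criterion
            return best
        # crossover: each child takes every token from a random high-fitness parent
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        pop = [mutate([rng.choice(parents)[k] for k in range(len(seed_tokens))])
               for _ in range(pop_size)]
    return max(pop, key=fitness)

neigh = lambda w: {"good": ["fine", "great"], "bad": ["poor"]}.get(w, [])
fit = lambda toks: 0.8 if "great" in toks else 0.2  # hypothetical target model
print(genetic_attack("the food was good".split(), fit, neigh))
```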
The attacks on classification described above are either popular or representative ones in recent studies. Some of their main attributes are summarized in Table 1, and example instances from these works are given in Appendix A.

[10] https://iamtrask.github.io/2015/11/15/anyone-can-code-lstm
[11] https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py
[12] https://github.com/Smerity/keras_snli/blob/master/snli_rnn.py

We have reviewed adversarial attacks for the classification task in the previous subsections. But what other kinds of tasks or applications can be attacked by adversarial examples? How are they generated in these cases, and can the crafted examples be applied in other ways besides attack? These questions naturally arise, and the answers are described below.

To test whether reading comprehension systems can really understand language, Jia et al. BIBREF61 inserted adversarial perturbations into paragraphs without changing the true answers or misleading humans. They extracted nouns and adjectives in the question and replaced them with antonyms, while named entities and numbers were changed to the nearest word in the GloVe embedding space BIBREF56. The modified question was transformed into a declarative sentence, which was concatenated to the end of the original paragraph as the adversarial perturbation; the authors call this process ADDSENT. Another process, ADDANY, randomly chooses any sequence of words to craft; compared with ADDSENT, ADDANY does not consider grammaticality, and it needs to query the model several times. Both kinds of generated adversarial examples fooled reading comprehension systems well, making them give incorrect answers, mainly because they drew the models' attention to the generated sequences. Mudrakarta et al. BIBREF62 also studied adversarial examples on question answering systems, and part of their work can strengthen the attacks proposed by Jia et al. BIBREF61.

Besides reading comprehension systems BIBREF61, Minervini et al. BIBREF63 cast the generation of adversarial examples that violate given first-order logic (FOL) constraints in NLI as an optimization problem. They maximized the proposed inconsistency loss to search for substitution sets $S$, using a language model:

$$\mathop{\text{maximize}}_{S} \; J_I(S) = \bigl[ p(S; \text{body}) - p(S; \text{head}) \bigr]_{+}, \quad \text{s.t. } \log p_L(S) \le \tau \qquad \text{(Eq. 42)}$$

where $[x]_+ = \max(0, x)$ and $\tau$ is a threshold on the perplexity of the generated sequences. $S = \{X_1 \rightarrow s_1, \ldots, X_n \rightarrow s_n\}$ denotes a mapping from $X_1, \ldots, X_n$, the set of universally quantified variables in a rule, to the sequences in $S$; $p(S; \text{body})$ and $p(S; \text{head})$ denote the probabilities of the body and head of the given rule after replacing each $X_i$ with the corresponding sentence $s_i$. The generated sequences, which are the adversarial examples, helped the authors find weaknesses of NLI systems when faced with linguistic phenomena such as negation and antonymy.

NMT is another kind of system attacked by adversaries, and Belinkov et al. BIBREF64 made this attempt. They devised black-box methods relying on natural and synthetic language errors to generate adversarial examples: the naturally occurring errors include typos, misspelled words, and others, while the synthetic noise is produced by random or keyboard-typo modifications. The experiments were done on three different NMT systems BIBREF65, BIBREF66, and the results showed that these examples can effectively fool the target systems. Similar work was done by Ebrahimi et al. BIBREF67, who conducted adversarial attacks on character-level NMT by employing differentiable string-edit operations; their method of generating adversarial examples is the same as in their previous work BIBREF52. Compared with Belinkov et al. BIBREF64, the authors demonstrated that black-box adversarial examples are much weaker than white-box ones in most cases.

Iyyer et al. BIBREF68 crafted adversarial examples with the syntactically controlled paraphrase networks (SCPNs) they proposed. They designed this model to generate syntactically adversarial examples without decreasing the quality of the input semantics. The general process mainly relies on the encoder-decoder architecture of SCPN: given a sequence and a corresponding target syntax structure, the authors encode them with a bidirectional LSTM and decode with an LSTM augmented with soft attention over the encoded states BIBREF69 and the copy mechanism BIBREF70. They then modify the inputs to the decoder, aiming to incorporate the target syntax structure, to generate adversarial examples. The syntactically adversarial sentences not only fool pretrained models but also improve their robustness to syntactic variation; the authors also used a crowdsourced experiment to demonstrate the validity of the generated examples.

Apart from attacks, adversarial examples can be used as a way to measure the robustness of DNN models. Blohm et al. BIBREF71 generated adversarial examples to find the limitations of a machine reading comprehension model they designed. The categories of adversarial examples included word-level and sentence-level attacks in different scenarios BIBREF72. By comparing with human performance, the experimental results showed that some other capabilities, e.g., answering by elimination via ranking plausibility BIBREF73, should be added to this model to improve its performance.

The constant arms race between adversarial attacks and defenses invalidates conventional wisdom quickly BIBREF24. In fact, defense is more difficult than attack, and few works have been done on this aspect. There are two reasons for this situation. One is that a good theoretical model does not yet exist for complicated optimization problems like adversarial examples. The other is that a tremendous number of possible inputs may produce the target output with very high probability; hence, a truly adaptive defense is difficult. In this section, we describe some relatively effective methods of defense against adversarial attacks in text.
Adversarial examples are also a kind of data, albeit with a special purpose, so the first thing to consider is whether data processing or detection is useful against adversarial attacks. Researchers have made various attempts, such as adversarial training and spelling checks, in text.

Adversarial training BIBREF7 was a direct approach to defending against adversarial images in some studies BIBREF7, BIBREF74: adversarial examples are mixed with the corresponding original examples to form the training dataset for the model. Adversarial examples can be resisted to a certain degree in this way, but adversarial training does not always work. In text, there was some effect against the attacks after adversarial training in BIBREF52, BIBREF23, BIBREF48, but it failed in the work of BIBREF53, mainly because of the different ways of generating adversarial examples: the modifications of the former works are insertion, substitution, deletion, and replacement, while the latter uses a genetic algorithm to search for adversarial examples. Overfitting may be another reason why adversarial training is not always useful and may be effective only against the specific attack it was trained on; this has been confirmed by Tramer et al. BIBREF75 in the image domain, but it remains to be demonstrated in text.

Another defense strategy is to detect whether the input data has been modified. Researchers believe that there exist features distinguishing adversarial examples from their clean counterparts, and a series of works BIBREF76, BIBREF77, BIBREF78, BIBREF79, BIBREF80 has been conducted under this view to detect adversarial examples, performing relatively well in image. In text, the modification strategies of some methods produce misspelled words in the generated adversarial examples. This is a distinct feature that can be utilized: it naturally suggests detecting adversarial examples by checking for misspelled words. Gao et al. BIBREF23 used an autocorrector, the Python autocorrect 0.3.0 package, before the input, and Li et al. BIBREF48 took advantage of a context-aware spelling check service to do the same work. Experimental results showed that this approach is effective against character-level modifications and only partly useful against word-level operations; the effectiveness also differs across modification types, whether character-level or word-level.
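A self-contained stand-in for such a pre-filter, using the Python standard library instead of a specific spell-check package, might look as follows; the vocabulary and inputs are toy examples:

```python
import difflib

VOCAB = {"the", "movie", "was", "terrible", "great", "plot"}  # toy dictionary

def autocorrect_input(text, vocab=VOCAB):
    """Map each out-of-vocabulary token to its closest dictionary word before
    classification - a minimal stand-in for the spell-check defenses above."""
    fixed = []
    for tok in text.lower().split():
        if tok in vocab:
            fixed.append(tok)
        else:
            match = difflib.get_close_matches(tok, vocab, n=1, cutoff=0.6)
            fixed.append(match[0] if match else tok)
    return " ".join(fixed)

print(autocorrect_input("the movie was terrib1e"))  # -> "the movie was terrible"
```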
But this is hard work, and nobody can ensure that their methods or models are perfect. For now, what we can do is make the threat of adversarial attacks as small as possible. The technology of testing and verification helps us approach the problem from another point of view: with it, people can understand the safety and reliability of DNN-based systems and decide whether measures are needed to address security issues. In this section we introduce recent testing and verification methods for enhancing the robustness of DNNs against adversarial attacks. Even though the methods reviewed below have not been applied to text, we hope that readers interested in this aspect can be inspired to devise good defense methods for text or for other areas.

As DNNs are increasingly used in security-critical domains, it is very important to have a high degree of trust in a model's accuracy, especially in the presence of adversarial examples. Confidence in the correct behavior of a model derives from rigorous testing in a variety of possible scenarios. More importantly, testing can be helpful for understanding the internal behaviors of the network, contributing to the implementation of defense methods. This brings traditional software-testing methodology to DNNs. Pei et al. BIBREF87 designed a white-box framework, DeepXplore, to test real-world DNNs with the metric of neuron coverage, and leveraged differential testing to catch differences in the outputs of multiple DNNs. In this way DeepXplore could trigger the majority of the model's logic and find incorrect behaviors without manual effort. It performed well on advanced deep learning systems and found thousands of corner cases that would make the systems crash. The limitation of DeepXplore, however, was that if all the DNNs made the same incorrect judgment, it was hard to know what was wrong and how to fix it.

Different from single-neuron coverage BIBREF87, Ma et al. BIBREF88 proposed multi-granularity testing coverage criteria to measure accuracy and detect erroneous behaviors. They used four methods BIBREF7 BIBREF26 BIBREF28 BIBREF32 to generate adversarial test data that explores new internal states of the model. The increasing coverage showed that the larger the coverage, the more likely defects were to be found. Similar work was done by Budnik et al. BIBREF89, exploring the output space of the model under test via an adversarial case-generation approach. To address the limitations of neuron coverage, Kim et al. BIBREF90 proposed Surprise Adequacy for Deep Learning Systems (SADL) to test DNNs, and developed Surprise Coverage (SC) to measure coverage of the range of Surprise Adequacy (SA) values, which quantify how much an input's behavior differs from the training data. Experimental results showed that SA values could serve as a metric for judging whether an input is an adversarial example; on the other hand, retraining on inputs selected by SA could also improve the accuracy of DNNs against adversarial examples.

Other kinds of testing methods against adversarial examples also exist. Wicker et al. BIBREF91 presented a feature-guided approach to test the resilience of DNNs to adversarial examples in the black-box scenario. They treated the process of generating adversarial cases as a two-player, turn-based stochastic game, with an asymptotically optimal strategy based on the Monte Carlo tree search (MCTS) algorithm. The strategy accumulates a reward for adversarial examples found over the course of game play and uses it to evaluate robustness against adversarial examples.
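To make the neuron-coverage metric behind DeepXplore concrete, the sketch below computes the fraction of activation-layer neurons that fire above a threshold on a batch of test inputs for a generic PyTorch model. It is an illustration of the metric's definition only; the function names are ours, and DeepXplore itself additionally scales activations per layer before thresholding.

```python
# Illustrative neuron-coverage computation: a neuron counts as
# "covered" if its activation exceeds `threshold` on at least one
# input in the test batch.
import torch

def neuron_coverage(model, inputs, threshold=0.25):
    acts, hooks = [], []
    def hook(_, __, out):
        acts.append(out.detach().flatten(1))      # (batch, #neurons)
    for m in model.modules():
        if isinstance(m, (torch.nn.ReLU, torch.nn.Sigmoid, torch.nn.Tanh)):
            hooks.append(m.register_forward_hook(hook))
    with torch.no_grad():
        model(inputs)
    for h in hooks:
        h.remove()
    all_acts = torch.cat(acts, dim=1)             # every neuron, every input
    covered = (all_acts > threshold).any(dim=0)   # covered by any input
    return covered.float().mean().item()
```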
Besides the feature-guided testing BIBREF91, Sun et al. BIBREF92 presented DeepConcolic to evaluate the robustness of well-known DNNs, the first attempt to apply traditional concolic testing to these networks. DeepConcolic iteratively used concrete execution and symbolic analysis to generate a test suite reaching high coverage, and discovered adversarial examples with a robustness oracle. The authors also compared it with other testing methods BIBREF87 BIBREF88 BIBREF93 BIBREF94. In terms of input data, DeepConcolic could start with a single input to achieve better coverage, or use coverage requirements as inputs; in terms of performance, it could achieve higher coverage than DeepXplore but ran more slowly.

Researchers believe that testing is insufficient to guarantee the security of DNNs, especially with unusual inputs like adversarial examples. As Edsger W. Dijkstra once said, testing shows the presence, not the absence, of bugs. Hence verification techniques for DNNs are needed to develop more effective defense methods in adversarial settings. Pulina et al. BIBREF95 may have been the first to develop a small verification system for a neural network. Since then, related work has appeared one after another, but verification of machine learning models' robustness to adversarial examples is still in its infancy BIBREF96, with only a little research on related aspects. We introduce these works in the following part.

Several works check security properties against adversarial attacks using various kinds of Satisfiability Modulo Theories (SMT) BIBREF97 solvers. Katz et al. BIBREF98 presented a novel system named Reluplex to verify DNNs by splitting the verification problem into LP subproblems over Rectified Linear Unit (ReLU) BIBREF99 activation functions within an SMT solver. Reluplex could find adversarial inputs under the local adversarial robustness property on the ACAS Xu networks, but it failed on large networks and on the global variant of the property. Huang et al. BIBREF100 proposed another SMT-based verification framework for neural network structures. It relied on discretizing the search space and analyzing the output of each layer to search for adversarial perturbations, but the authors found that SMT theory was only suitable for small networks in practice. Moreover, this framework was limited by many assumptions, and some of its functions were unclear.

For ReLU networks, some researchers cast verification as a Mixed Integer Linear Programming (MILP) problem, such as Tjeng et al. BIBREF101. They evaluated robustness to adversarial examples on two measures: minimum adversarial distortion BIBREF102 and adversarial test accuracy BIBREF103. Their work was faster than Reluplex with high adversarial test accuracy, but it shared the limitation that scaling to large networks remains a problem. Taking a different route, Narodytska et al. BIBREF104 verified security properties of binarized neural networks (BNNs) BIBREF105. They were the first to use an exact Boolean encoding of a network to study its robustness and equivalence. Inputs were judged to be adversarial examples or not by two encoding structures, Gen and Ver. The approach could easily find adversarial examples for up to 95 percent of the considered images on the MNIST dataset, and it also worked on middle-sized BNNs, though not on large networks.

There is a different point of view that the difficulty in proving properties about DNNs is caused by the presence of activation functions BIBREF98.
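To make the MILP route taken by verifiers such as Tjeng et al. more tangible, the toy sketch below encodes a single ReLU constraint y = max(z, 0) with the standard big-M trick, using the open-source PuLP solver. It is a hedged illustration of the general technique, not the authors' implementation; the bound M and the variable ranges are invented for the example.

```python
# Big-M MILP encoding of y = max(z, 0), the building block of
# MILP-based ReLU-network verification; uses the PuLP library.
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary

M = 100.0                                    # a valid bound on |z|
prob = LpProblem("relu_encoding", LpMinimize)
z = LpVariable("z", lowBound=-1, upBound=1)  # pre-activation value
y = LpVariable("y", lowBound=0)              # post-activation value
a = LpVariable("a", cat=LpBinary)            # active-phase indicator

prob += y >= z                   # y = z when the unit is active (a = 1)
prob += y <= z + M * (1 - a)
prob += y <= M * a               # y = 0 when the unit is inactive (a = 0)
prob += y                        # objective: minimise the output bound
prob.solve()
print(y.value(), z.value(), a.value())
```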
Some researchers pay more attention to activation functions when exploring better verification methods. Gehr et al. BIBREF106 introduced abstract transformers that can bound the outputs of layers in convolutional neural networks with ReLU, including fully connected layers. The authors evaluated this approach by verifying the robustness of DNNs such as a pretrained defended network BIBREF107; the results showed that the FGSM attack could be effectively prevented. They also compared against Reluplex on both small and large networks: the state-of-the-art Reluplex performed worse in both property verification and time consumption. Unlike existing solver-based methods (e.g., SMT), Wang et al. BIBREF108 presented ReluVal, which leverages interval arithmetic BIBREF109 to guarantee correct operation of DNNs in the presence of adversarial examples. They repeatedly partitioned input intervals to determine whether the corresponding output intervals violated a security property. This method was more effective than Reluplex and performed well at finding adversarial inputs. Weng et al. BIBREF110 designed two kinds of algorithms to evaluate lower bounds of the minimum adversarial distortion, via linear approximations and via bounding the local Lipschitz constant. Their methods can be applied to defended networks, especially adversarially trained ones, to evaluate their effectiveness.

In the previous sections, a detailed description of adversarial attacks and defenses was given to help readers gain a faster and better understanding of this area. Next we present more general observations and discuss challenges in this direction based on the aforementioned contents.

Judging the performance of attack methods. Generally, authors evaluate their attacks on target models by accuracy rate or error rate: the lower the accuracy rate, the more effective the adversarial examples (and the use of error rate is the opposite). Some researchers prefer to report the difference in accuracy before and after attacks, because it shows the effect of attacks more intuitively. These criteria can also be used when defending against adversarial examples.

Reasons for using misspelled words in some methods. The motivation for using misspelled words is similar to that in images: to fool target models with imperceptible perturbations. Some methods conduct character-level modification operations, which frequently result in misspelled words, and humans are extremely robust to such errors in written language BIBREF111.

Transferability in the black-box scenario. When adversaries have no access (including probing) to the target models, they train a substitute model and exploit the transferability of adversarial examples. Szegedy et al. BIBREF6 first found that adversarial examples generated for one neural network could also make another model misbehave, even one trained on different datasets. This reflects the transferability of adversarial examples. As a result, adversarial examples generated on the substitute model are used to attack target models whose models and datasets are inaccessible. Apart from that, constructing adversarial examples with high transferability is a prerequisite for evaluating the effectiveness of black-box attacks and a key metric for evaluating generalized attacks BIBREF112.

The lack of a universal approach to generating adversarial examples. Because the study of adversarial examples in text has risen as a frontier only in recent years, the methods of adversarial attack are relatively few, let alone defenses.
Another reason why no universal method exists is the language problem: almost all recent methods use English datasets, and the generated adversarial examples may be useless against systems built on Chinese or other-language datasets. Thus there is no universal approach to generating adversarial examples. In our observation, however, many methods follow a two-step process: first find the important words that have a significant impact on the classification result, and then apply corresponding modifications to obtain adversarial examples.

Difficulties in adversarial attacks and defenses. There are many reasons for this problem, and one of the main ones is that there is no straightforward way to evaluate proposed works, whether attacks or defenses. In other words, convincing benchmarks do not exist in recent works. An attack method that performs well in one scenario may fail in another, and a new defense will soon be defeated in ways beyond the defenders' anticipation. Even though some works are provably sound, rigorous theoretical support is still needed to deal with the problem of adversarial examples.

Appropriate future directions for adversarial attacks and defenses. As an attacker, designing universal perturbations to craft better adversarial examples can be taken into consideration, as it works in images BIBREF29: a universal adversarial perturbation is able to make a model misbehave on any text with high probability, and still stronger universal perturbations could fool multiple models, or any model, on any text. In addition, work on enhancing the transferability of adversarial examples is meaningful for more practical black-box attacks. On the contrary, defenders would prefer to completely repair this vulnerability of DNNs, but doing so is no less difficult than redesigning a network, and it is a long and arduous task requiring the common efforts of many people. At the moment, defenders can draw on methods from the image domain for improving the robustness of DNNs in text, e.g., adversarial training BIBREF107, adding extra layers BIBREF113, optimizing the cross-entropy function BIBREF114 BIBREF115, or weakening the transferability of adversarial examples.

This article presents a survey of adversarial attacks and defenses on DNNs in text. Even though DNNs achieve high performance on a wide variety of NLP tasks, they are inherently vulnerable to adversarial examples, which has led to a high degree of concern. This article covers nearly all existing adversarial attacks and some defenses, focusing on recent works in the literature. From these works we can see that the threat of adversarial attacks is real and that defense methods are few. Most existing works have their own limitations, such as constrained application scenarios, restrictive conditions, and problems with the methods themselves. More attention should be paid to the problem of adversarial examples, which remains an open issue for designing considerably robust models against adversarial attacks. This work was partly supported by NSFC under No. 61876134, the National Key R&D Program of China under No. 2016YFB0801100, and NSFC under U1536204 and U183610015. </s> |
<s>[INST] Deep Contextualized Acoustic Representations For Semi-Supervised Speech Recognition [/INST] Current state-of-the-art models for speech recognition require vast amounts of transcribed audio data to attain good performance. In particular, end-to-end ASR models are more demanding in the amount of training data required than traditional hybrid models. While obtaining a large amount of labeled data requires substantial effort and resources, it is much less costly to obtain abundant unlabeled data. For this reason, semi-supervised learning (SSL) is often used when training ASR systems. The most commonly used SSL approach in ASR is self-training BIBREF0 BIBREF1 BIBREF2 BIBREF3 BIBREF4. In this approach, a smaller labeled set is used to train an initial seed model, which is applied to a larger amount of unlabeled data to generate hypotheses; the unlabeled data with the most reliable hypotheses are added to the training data for retraining, and this process is repeated iteratively. However, self-training is sensitive to the quality of the hypotheses and requires careful calibration of the confidence measures. Other SSL approaches include pretraining on a large amount of unlabeled data with restricted Boltzmann machines (RBMs) BIBREF5, entropy minimization BIBREF6 BIBREF7 BIBREF8, where the uncertainty of the unlabeled data is incorporated into the training objective, and graph-based approaches BIBREF9, where the manifold-smoothness assumption is exploited.

Recently, transfer learning from large-scale pretrained language models (LMs) BIBREF10 BIBREF11 BIBREF12 has shown great success and achieved state-of-the-art performance on many NLP tasks. The core idea of these approaches is to learn efficient word representations by pretraining on massive amounts of unlabeled text via word completion; these representations can then be used for downstream tasks with labeled data. Inspired by this, we propose an SSL framework that learns efficient context-aware acoustic representations from a large amount of unlabeled data and then applies these representations to ASR tasks using a limited amount of labeled data. In our implementation, we perform acoustic representation learning using forward and backward LSTMs and a training objective that minimizes the reconstruction error of a temporal slice of filterbank features given previous and future context frames. After pretraining, we fix these parameters and add output layers with the connectionist temporal classification (CTC) loss for the ASR task. The paper is organized as follows: in Section SECREF2 we give a brief overview of related work in acoustic representation learning and SSL; in Section SECREF3 we describe an implementation of our SSL framework with DeCoAR learning; we describe the experimental setup in Section SECREF4 and the results on WSJ and LibriSpeech in Section SECREF5, followed by our conclusions in Section SECREF6.

While semi-supervised learning has been exploited in a plethora of works on hybrid ASR systems, very little work has been done on the end-to-end counterparts BIBREF3 BIBREF13 BIBREF14. In BIBREF3, an intermediate representation of speech and text is learned via a shared encoder network. To train these representations, the encoder network was trained to optimize a combination of ASR loss, text-to-text autoencoder loss, and inter-domain loss; the latter two loss functions did not require paired speech and text data. Learning efficient acoustic representations can be traced back to the restricted Boltzmann machine BIBREF15 BIBREF16 BIBREF17, which allows pretraining on large amounts of unlabeled data before training deep neural network acoustic models.
More recently, acoustic representation learning has drawn increasing attention BIBREF18 BIBREF19 BIBREF20 BIBREF21 BIBREF22 BIBREF23 in speech processing. For example, an autoregressive predictive coding (APC) model was proposed in BIBREF20 for unsupervised speech representation learning and applied to phone classification and speaker verification. WaveNet autoencoders BIBREF21 proposed contrastive predictive coding (CPC) to learn speech representations and applied it to the unsupervised acoustic unit discovery task. Wav2vec BIBREF22 proposed a multi-layer convolutional neural network optimized via noise-contrastive binary classification and applied it to WSJ ASR tasks. Unlike the speech representations described in BIBREF22 BIBREF20, our representations are optimized to use bidirectional contexts to autoregressively reconstruct unseen frames; thus they are deep contextualized representations that are functions of the entire input sentence. More importantly, our work is a general semi-supervised training framework that can be applied to different systems and requires no architecture change.

Our approach is largely inspired by ELMo BIBREF10. In ELMo, given a sequence of T tokens $(w_1, w_2, \ldots, w_T)$, a forward language model implemented with an LSTM computes the sequence probability using the chain-rule decomposition

$$p(w_1, \ldots, w_T) = \prod_{t=1}^{T} p(w_t \mid w_1, \ldots, w_{t-1}).$$

Similarly, a backward language model computes the sequence probability by modeling the probability of token $w_t$ given its future context $w_{t+1}, \ldots, w_T$:

$$p(w_1, \ldots, w_T) = \prod_{t=1}^{T} p(w_t \mid w_{t+1}, \ldots, w_T).$$

ELMo is trained by maximizing the joint log-likelihood of both forward and backward language model probabilities,

$$\sum_{t=1}^{T} \Big( \log p(w_t \mid w_1, \ldots, w_{t-1}; \Theta_x, \overrightarrow{\Theta}_{\text{LSTM}}, \Theta_s) + \log p(w_t \mid w_{t+1}, \ldots, w_T; \Theta_x, \overleftarrow{\Theta}_{\text{LSTM}}, \Theta_s) \Big),$$

where $\Theta_x$ is the parameter of the token representation layer, $\Theta_s$ is the parameter of the softmax layer, and $\overrightarrow{\Theta}_{\text{LSTM}}, \overleftarrow{\Theta}_{\text{LSTM}}$ are the parameters of the forward and backward LSTM layers, respectively. As the word representations are learned with neural networks that use past and future information, they are referred to as deep contextualized word representations.

For speech processing, predicting a single frame $\mathbf{x}_t$ may be a trivial task, as it could be solved by exploiting the temporal smoothness of the signal. In the APC model BIBREF20, the authors propose predicting a frame K steps ahead of the current one; namely, the model minimizes the $\ell_1$ loss between an acoustic feature vector $\mathbf{x}$ at time $t+K$ and the reconstruction $\mathbf{y}$ predicted at time $t$: $\sum_{t=1}^{T-K} |\mathbf{x}_{t+K} - \mathbf{y}_t|$. They conjectured this would induce the model to learn more global structure rather than simply leveraging local information within the signal.

We propose combining the bidirectionality of ELMo and the reconstruction objective of APC to give deep contextualized acoustic representations (DeCoAR). We train the model to predict a slice of K acoustic feature vectors given past and future acoustic vectors. As depicted on the left side of Figure FIGREF1, a stack of forward and backward LSTMs is applied to the entire unlabeled input sequence $\mathbf{X} = (\mathbf{x}_1, \ldots, \mathbf{x}_T)$, and the network computes a hidden representation that encodes information from both previous and future frames, i.e., $(\overrightarrow{\mathbf{z}}_t, \overleftarrow{\mathbf{z}}_t)$, for each frame $\mathbf{x}_t$. Given a sequence of acoustic feature inputs $\mathbf{x}_1, \ldots, \mathbf{x}_T \in \mathbb{R}^d$, for each slice $(\mathbf{x}_t, \mathbf{x}_{t+1}, \ldots, \mathbf{x}_{t+K})$ starting at time step $t$, our objective is defined as

$$\mathcal{L}_t = \sum_{i=0}^{K} \left| \mathbf{x}_{t+i} - \mathrm{FFN}_i\big([\overrightarrow{\mathbf{z}}_t ; \overleftarrow{\mathbf{z}}_{t+K}]\big) \right|,$$

where $[\overrightarrow{\mathbf{z}}_t ; \overleftarrow{\mathbf{z}}_{t+K}] \in \mathbb{R}^{2h}$ is the concatenation of the forward and backward states from the last LSTM layer and $\mathrm{FFN}_i$ is a position-dependent feed-forward network with 512 hidden dimensions.
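A minimal sketch of this slice-reconstruction objective is given below, following the equation reconstructed above: one small feed-forward network per offset i, taking the concatenated forward state at t and backward state at t+K. Dimensions and module names are illustrative assumptions, not taken from the authors' code.

```python
# Hedged sketch of the DeCoAR slice objective: predict frames
# x_t .. x_{t+K} from [forward state at t ; backward state at t+K].
import torch
import torch.nn as nn

class DeCoARLoss(nn.Module):
    def __init__(self, hidden=1024, feat_dim=80, K=18):
        super().__init__()
        self.K = K
        self.ffn = nn.ModuleList(
            nn.Sequential(nn.Linear(2 * hidden, 512), nn.ReLU(),
                          nn.Linear(512, feat_dim))
            for _ in range(K + 1))  # one position-dependent FFN per offset

    def forward(self, z_fwd, z_bwd, x):
        # z_fwd, z_bwd: (T, hidden) last-layer LSTM states; x: (T, feat_dim)
        loss = x.new_zeros(())
        for t in range(x.size(0) - self.K):
            ctx = torch.cat([z_fwd[t], z_bwd[t + self.K]], dim=-1)
            for i, ffn in enumerate(self.ffn):
                loss = loss + (x[t + i] - ffn(ctx)).abs().sum()  # l1 term
        return loss
```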
The final loss $\mathcal{L}$ is summed over all possible slices in the entire sequence. Note that this can be implemented efficiently as a layer that predicts all K+1 frames at each position t at once. We compare against unidirectional LSTMs and various slice sizes in Section SECREF5.

After we have pretrained DeCoAR on unlabeled data, we freeze the parameters of the architecture. To train an end-to-end ASR system using labeled data, we remove the reconstruction layer and add two BLSTM layers with the CTC loss BIBREF24, as illustrated on the right side of Figure FIGREF1. The DeCoAR vectors induced by the labeled data in the forward and backward layers are concatenated, and we fine-tune the parameters of this ASR-specific new layer on the labeled data. While we use LSTMs and the CTC loss in our implementation, our SSL approach should work with other layer choices (e.g., TDNN, CNN, self-attention) and other downstream ASR models (e.g., hybrid, seq2seq, RNN transducers) as well.

We conducted our experiments on the WSJ and LibriSpeech datasets, pretraining on one of the two training sets as unlabeled data. To simulate the SSL setting on WSJ, we used 30%, 50%, and 100% of the labeled data for ASR training, consisting of 25, 40, and 81 hours respectively. We used dev93 for validation and eval92 for evaluation. For LibriSpeech, the amount of training data used varied from 100 hours to the entire 960 hours; we used dev-clean for validation and test-clean and test-other for evaluation.

Our experiments consisted of three different setups: (1) a fully supervised system using all labeled data; (2) an SSL system using wav2vec features; and (3) an SSL system using our proposed DeCoAR features. All models were deep BLSTMs trained with the CTC loss criterion. In the supervised ASR setup, we used conventional log-mel filterbank features, extracted with a 25 ms sliding window at a 10 ms frame rate and normalized via mean subtraction and variance normalization on a per-speaker basis. The model had 6 BLSTM layers with 512 cells in each direction; we found that increasing the number of cells further did not improve performance, so we used this as our best supervised ASR baseline. The output CTC labels were 71 phonemes plus one blank symbol.

In the SSL ASR setup, we pretrained a 4-layer BLSTM (1024 cells per sub-layer) to learn DeCoAR features according to the loss defined in Equation DISPLAYFORM4, using a slice size of 18. We optimized the network with SGD and a Noam learning-rate schedule: starting from a learning rate of 0.001, we gradually warm up for 500 updates and then perform inverse-square-root decay. We grouped the input sequences by length with a batch size of 64 and trained the models on 8 GPUs. After the representation network was trained, we froze its parameters and added a projection layer followed by a 2-layer BLSTM with CTC loss on top of it, feeding the labeled data to the network. For comparison, we obtained 512-dimensional wav2vec representations BIBREF22 from the wav2vec-large model; that model was pretrained on 960 hours of LibriSpeech data with a contrastive loss and had 12 convolutional layers with skip connections. For evaluation, we applied WFST-based decoding using EESEN BIBREF25: we composed the CTC labels, lexicons, and language models (an unpruned trigram LM for WSJ, a 4-gram for LibriSpeech) into a decoding graph. The acoustic model score was set to 0.8 for WSJ and 1.0 for LibriSpeech, and the blank-symbol prior scale was set to 0.3 for both tasks.
We report performance in word error rate (WER). Table TABREF14 shows our results on semi-supervised WSJ. We demonstrate that DeCoAR features outperform filterbank and wav2vec features, with relative improvements of 42% and 20%, respectively. The lower part of the table shows that with smaller amounts of labeled data, the DeCoAR features are significantly better than the filterbank features: compared to the system trained on 100% of the labeled data with filterbank features, we achieve comparable results on eval92 using 30% of the labeled data and better performance on eval92 using 50% of the labeled data.

Table TABREF7 shows the results on semi-supervised LibriSpeech. Both our representations and wav2vec BIBREF22 were trained on the 960-hour LibriSpeech data. We conduct our semi-supervised experiments using 100 hours (train-clean-100), 360 hours (train-clean-360), 460 hours, and 960 hours of training data. Our approach outperforms both the baseline and the wav2vec model in each SSL scenario. One notable observation is that using only 100 hours of transcribed data achieves performance very similar to the system trained on the full 960-hour data with filterbank features. On the more challenging test-other set, we also achieve performance on par with the filterbank baseline using the 360-hour subset. Furthermore, training with our DeCoAR features improves over the baseline even when using exactly the same training data (960 hours). Note that while BIBREF26 introduced SpecAugment to significantly improve LibriSpeech performance via data augmentation, and BIBREF27 achieved state-of-the-art results using both hybrid and end-to-end models, our approach focuses on the SSL case with less labeled training data via our DeCoAR features.

We study the effect of the context-window size during pretraining. Table TABREF20 shows that masking and predicting a larger slice of frames can actually degrade performance while increasing training time. A similar effect was found in SpanBERT BIBREF28, another deep contextual word representation, where masking spans with a mean length of 3.8 consecutive words was found ideal for their word-reconstruction objective. Next, we study the importance of bidirectional context by training a unidirectional LSTM, which corresponds to using only $\overrightarrow{\mathbf{z}}_t$ to predict $\mathbf{x}_t, \ldots, \mathbf{x}_{t+K}$. Table TABREF22 shows that this unidirectional model achieves performance comparable to the wav2vec model BIBREF22, suggesting that bidirectionality is the largest contributor to DeCoAR's improved performance.

Since our model is trained by predicting masked frames, DeCoAR has the side effect of learning decoder feed-forward networks $\mathrm{FFN}_i$, which reconstruct the (t+i)-th filterbank frame from the contexts $\overrightarrow{\mathbf{z}}_t$ and $\overleftarrow{\mathbf{z}}_{t+K}$. In this section we consider the spectrogram reconstructed by taking the output of $\mathrm{FFN}_i$ at all times t. The qualitative result is depicted in Figure FIGREF15, where the slice size is 18. We see that when i = 0, i.e., when reconstructing the t-th frame from $(\overrightarrow{\mathbf{z}}_t, \overleftarrow{\mathbf{z}}_{t+K})$, the reconstruction is almost perfect. However, as soon as one predicts unseen frames (i = 1, 4, 8, or 16), the reconstruction becomes more simplistic, though not by much: background energy in the silent frames (510 to 550) is zeroed out, and by i = 8 artifacts begin to occur, such as an erroneous sharp band of energy being predicted around frame 555. This behavior is compatible with recent NLP works that interpret contextual word representations as denoising autoencoders BIBREF12.
The surprising ability of DeCoAR to broadly reconstruct a frame $\mathbf{x}_{t+K/2}$ in the middle of a missing 16-frame slice suggests that its representations $(\overrightarrow{\mathbf{z}}_t, \overleftarrow{\mathbf{z}}_{t+K})$ capture longer-term phonetic structure during unsupervised pretraining, as with APC BIBREF20. This motivates its success in the semi-supervised ASR task with only two additional layers, as it suggests DeCoAR learns phonetic representations similar to those likely learned by the first four layers of a corresponding end-to-end ASR model.

In this paper we introduce a novel semi-supervised learning approach for automatic speech recognition. We first propose a novel objective for a deep bidirectional LSTM network, in which large amounts of unlabeled data are used to learn deep contextualized acoustic representations (DeCoAR). These DeCoAR features are then used as the representations of labeled data to train a CTC-based end-to-end ASR model. In our experiments, we show a 42% relative improvement on WSJ compared to a baseline trained on log-mel filterbank features. On LibriSpeech, we achieve performance similar to training on 960 hours of labeled data by pretraining and then using only 100 hours of labeled data. While we use BLSTM-CTC as our ASR model, our approach can be applied to other end-to-end ASR models. </s> |
<s>[INST] Deep Neural Machine Translation with Linear Associative Unit [/INST] Neural machine translation (NMT) is an end-to-end learning approach to machine translation which has recently shown promising results on multiple language pairs BIBREF0 BIBREF1 BIBREF2 BIBREF3 BIBREF4 BIBREF5 BIBREF6 BIBREF7. Unlike conventional statistical machine translation (SMT) systems BIBREF8 BIBREF9 BIBREF10 BIBREF11 BIBREF12, which consist of multiple separately tuned components, NMT builds a single large neural network that directly maps input text to the associated output text. Typical NMT models consist of two recurrent neural networks (RNNs): an encoder that reads and encodes the input text into a distributed representation, and a decoder that generates the translated text conditioned on the input representation BIBREF13 BIBREF14.

Driven by the breakthroughs achieved in computer vision BIBREF15 BIBREF16, research in NMT has recently turned towards studying deep neural networks (DNNs). Wu et al. (wu2016google) and Zhou et al. (zhou2016deep) found that deep architectures in both the encoder and decoder are essential for capturing subtle irregularities in the source and target languages. However, training a deep neural network is not as simple as stacking layers: optimization often becomes increasingly difficult with more layers. One reasonable explanation is the notorious problem of vanishing and exploding gradients, which was first studied in the context of vanilla RNNs BIBREF17. Most prevalent approaches to this problem rely on shortcut connections between adjacent layers, such as residual or fast-forward connections BIBREF15 BIBREF16 BIBREF18. Different from previous work, we choose to reduce the gradient path inside the recurrent units and propose a novel linear associative unit (LAU), which creates a fusion of both linear and non-linear transformations of the input. Through this design, information can flow across several steps both in time and in space with little attenuation. This mechanism makes it easy to train deep stacked RNNs, which can efficiently capture the complex inherent structure of sentences for NMT. Based on LAUs, we also propose an NMT model called DeepLAU, with deep architectures in both the encoder and decoder.

Although DeepLAU is fairly simple, it gives remarkable empirical results. On the NIST Chinese-English task, DeepLAU with proper settings yields the best reported result, a 4.9 BLEU improvement over a strong NMT baseline with most known techniques (e.g., dropout) incorporated. On the WMT English-German and English-French tasks, it also achieves performance superior or comparable to the state of the art.

A typical neural machine translation system is a single large neural network that directly models the conditional probability INLINEFORM0 of translating a source sentence INLINEFORM1 to a target sentence INLINEFORM2. Attention-based NMT, with RNNsearch as its most popular representative, generalizes the conventional notion of encoder-decoder by using an array of vectors to represent the source sentence and dynamically addressing the relevant segments of them during decoding. The process can be explicitly split into an encoding part, a decoding part, and an attention mechanism. The model first encodes the source sentence INLINEFORM0 into a sequence of vectors INLINEFORM1. In general, INLINEFORM2 is the annotation of INLINEFORM3 from a bidirectional RNN, which contains information about the whole sentence with a strong focus on the parts surrounding INLINEFORM4.
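As an illustration of this content-based addressing, the sketch below computes a single context vector in the additive (RNNsearch-style) attention form: score each source annotation against the current decoder state, normalize with a softmax, and take the weighted sum. Parameter names and shapes are ours, not the paper's.

```python
# Hedged sketch of additive attention over source annotations.
import torch
import torch.nn.functional as F

def context_vector(decoder_state, annotations, W_q, W_k, v):
    # decoder_state: (d,); annotations: (src_len, d)
    # W_q, W_k: (d, d_att) projection matrices; v: (d_att,) score vector
    scores = torch.tanh(decoder_state @ W_q + annotations @ W_k) @ v
    weights = F.softmax(scores, dim=0)    # soft alignment over source words
    return weights @ annotations          # context vector for this step
```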
Then the RNNsearch model decodes and generates the target translation INLINEFORM5 based on the context INLINEFORM6, modeling the translation probability as a product of per-step predictions $p(y_i \mid y_{<i}, c_i)$, where the context vector $c_i$ is dynamically obtained according to the contribution of the source annotations to the word prediction. This is called automatic alignment BIBREF14 or the attention mechanism BIBREF0, but it is essentially reading with the content-based addressing defined in BIBREF19. With this addressing strategy, the decoder can attend to the source representation that is most relevant to the current stage of decoding.

Deep neural models have recently achieved great success in a wide range of problems. In computer vision, models with more than 100 convolutional layers have outperformed shallow ones by a big margin on a series of image tasks BIBREF15 BIBREF16. Following similar ideas to building deep CNNs, some promising improvements have also been achieved in building deep NMT systems. Zhou et al. (zhou2016deep) proposed a new type of linear connection between adjacent layers to simplify the training of deeply stacked RNNs. Similarly, Wu et al. (wu2016google) introduced residual connections into their deep neural machine translation system and achieved great improvements. However, the optimization of deep RNNs is still an open problem due to the massive recurrent computation, which makes the gradient-propagation path extremely tortuous.

In this section we discuss the linear associative unit (LAU), which eases the training of deep stacks of RNNs. Based on this idea, we further propose DeepLAU, a neural machine translation model with a deep encoder and decoder.

A recurrent neural network BIBREF20 is a class of neural network that has recurrent connections and a state (or a more sophisticated memory-like extension). Past information is built up through the recurrent connections, which makes RNNs applicable to sequential prediction tasks of arbitrary length. Given a sequence of vectors INLINEFORM0 as input, a standard RNN computes the sequence of hidden states INLINEFORM1 by iterating the following equation from INLINEFORM2 to INLINEFORM3:

$$\mathbf{h}_t = \phi(\mathbf{h}_{t-1}, \mathbf{x}_t),$$

where $\phi$ is usually a non-linear function, such as the composition of a logistic sigmoid with an affine transformation. It is difficult to train RNNs to capture long-term dependencies because the gradients tend to either vanish (most of the time) or explode: the effect of long-term dependencies decays exponentially with the gradient-propagation length. The problem was explored in depth by BIBREF21 BIBREF17. A successful approach is to design an activation function more sophisticated than the usual one, consisting of gating functions that control the information flow and reduce the propagation path. There is a long thread of work in this direction, with long short-term memory units (LSTMs) being the most salient example and the gated recurrent unit (GRU) the most recent one BIBREF21 BIBREF22. RNNs employing either of these recurrent units have been shown to perform well on tasks that require capturing long-term dependencies.

A GRU can be viewed as a slightly more dramatic variation on the LSTM with fewer parameters. The activation function is armed with two specifically designed gates, called the update and reset gates, to control the flow of information inside each hidden unit. Each hidden state at time step INLINEFORM0 is computed with the standard GRU updates:

$$\mathbf{z}_t = \sigma(W_z \mathbf{x}_t + U_z \mathbf{h}_{t-1}), \qquad \mathbf{r}_t = \sigma(W_r \mathbf{x}_t + U_r \mathbf{h}_{t-1}),$$
$$\tilde{\mathbf{h}}_t = \tanh(W \mathbf{x}_t + U(\mathbf{r}_t \odot \mathbf{h}_{t-1})), \qquad \mathbf{h}_t = (1 - \mathbf{z}_t) \odot \mathbf{h}_{t-1} + \mathbf{z}_t \odot \tilde{\mathbf{h}}_t.$$

For Chinese-English, our training data consists of INLINEFORM0 M sentence pairs extracted from LDC corpora, with INLINEFORM1 M Chinese words and INLINEFORM2 M English words respectively. We choose the NIST 2002 (MT02) dataset as our development set, and the NIST 2003 (MT03), 2004 (MT04), 2005 (MT05), and 2006 (MT06) datasets as our test sets.
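For concreteness, the following is a minimal GRU cell matching the standard update/reset-gate equations above. It is written for illustration and is not the paper's implementation (PyTorch also ships an equivalent built-in, torch.nn.GRUCell).

```python
# Minimal GRU cell: update gate z, reset gate r, candidate state h~.
import torch
import torch.nn as nn

class GRUCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.Wz = nn.Linear(input_size + hidden_size, hidden_size)
        self.Wr = nn.Linear(input_size + hidden_size, hidden_size)
        self.Wh = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x, h):
        xh = torch.cat([x, h], dim=-1)
        z = torch.sigmoid(self.Wz(xh))            # update gate
        r = torch.sigmoid(self.Wr(xh))            # reset gate
        h_tilde = torch.tanh(self.Wh(torch.cat([x, r * h], dim=-1)))
        return (1 - z) * h + z * h_tilde          # interpolated new state
```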
For English-German, to compare with the results reported in previous work BIBREF0 BIBREF18 BIBREF6, we used the same subset of the WMT 2014 training corpus, containing 4.5M sentence pairs with 91M English words and 87M German words. The concatenation of newstest 2012 and newstest 2013 is used as the validation set and newstest 2014 as the test set. To evaluate at scale, we also report results on English-French. To compare with the results reported by previous work on end-to-end NMT BIBREF13 BIBREF14 BIBREF6 BIBREF23 BIBREF18, we used the same subset of the WMT 2014 training corpus, containing 12M sentence pairs with 304M English words and 348M French words; the concatenation of newstest 2012 and newstest 2013 serves as the validation set and newstest 2014 as the test set.

Our training procedure and hyperparameter choices are similar to those used by BIBREF14. In more detail, we limit the source and target vocabularies to the most frequent INLINEFORM0 words for both Chinese-English and English-French; for English-German we set the source and target vocabulary sizes to INLINEFORM1 and INLINEFORM2 respectively. For all experiments, the dimensions of the word embeddings and recurrent hidden states are both set to 512, and the dimension of INLINEFORM0 is also 512. Note that our network is narrower than most previous work, where hidden states of dimension 1024 are used. We initialize parameters by sampling each element from a Gaussian distribution with mean 0 and variance INLINEFORM1.

Parameter optimization is performed using stochastic gradient descent; Adadelta BIBREF24 is used to automatically adapt the learning rate of each parameter (INLINEFORM0 and INLINEFORM1). To avoid gradient explosion, the gradients of the cost function whose INLINEFORM2 norm was larger than a predefined threshold INLINEFORM3 were normalized to the threshold BIBREF25. We set INLINEFORM4 to INLINEFORM5 at the beginning and halve the threshold until the BLEU score no longer changes much on the development set. Each SGD update uses a mini-batch of 128 examples. We train our NMT model on sentences of up to 80 words in the training data, while for the Moses system we use the full training data. Translations are generated by beam search with log-likelihood scores normalized by sentence length; we use a beam width of 10 in all experiments. Dropout is applied on the output layer to avoid overfitting, with a dropout rate of INLINEFORM6. Except where otherwise mentioned, the NMT systems have 4-layer encoders and 4-layer decoders.

Table TABREF7 shows BLEU scores on the Chinese-English datasets. Clearly, DeepLAU leads to a remarkable improvement over its competitors. Compared to DeepGRU, DeepLAU is INLINEFORM0 BLEU higher on average over the four test sets, showing the modeling power gained from the linear associative connections. We suggest this is because LAUs apply an adaptive gate function conditioned on the input, which makes them able to decide automatically how much linear information should be transferred to the next step.

To show the power of DeepLAU, we also compare with previous work. Our best single model outperforms both a phrase-based MT system (Moses) and an open-source attention-based NMT system (Groundhog) by INLINEFORM0 and INLINEFORM1 BLEU points respectively on average. The result is also better than other state-of-the-art variants of attention-based NMT models by big margins. After PosUnk and ensembling, DeepLAU gains another notable INLINEFORM2 BLEU and outperforms Moses by INLINEFORM3 BLEU. The results on English-German translation are presented in Table TABREF10.
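A bare-bones version of the optimization recipe just described (Adadelta with l2 gradient-norm clipping over mini-batches of 128) might look as follows. The model and data loader are placeholders, and the clipping threshold is illustrative, since the paper's initial value was lost in extraction.

```python
# Sketch of SGD training with Adadelta and gradient-norm clipping.
import torch

def train_epoch(model, optimizer, train_loader, clip_threshold=1.0):
    # clip_threshold is a placeholder; the paper halves its initial
    # value until dev-set BLEU stops changing much.
    for batch in train_loader:            # mini-batches of 128 pairs
        optimizer.zero_grad()
        loss = model(batch)               # NLL of target given source
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), clip_threshold)
        optimizer.step()

# usage, assuming `model` and `train_loader` exist:
# optimizer = torch.optim.Adadelta(model.parameters())
# train_epoch(model, optimizer, train_loader)
```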
We compare our NMT systems with various other systems, including the winning system of WMT14 BIBREF26, a phrase-based system whose language models were trained on a huge monolingual text (the Common Crawl corpus). Among end-to-end NMT systems, to the best of our knowledge Wu et al. (wu2016google) is currently the SOTA system, about 4 BLEU points above previously reported best results, even though Zhou et al. (zhou2016deep) used a much deeper neural network. Following Wu et al. (wu2016google), the reported BLEU score is the average over the 8 models we trained. Our approach achieves results comparable to the SOTA system. As can be seen from Table TABREF10, DeepLAU performs better than the word-based model and is not much worse than the best word-piece models of Wu et al. (wu2016google). Note that DeepLAU is simple and easy to implement, as opposed to the models reported in Wu et al. (wu2016google), which depend on external techniques to achieve their best performance, such as length normalization, a coverage penalty, fine-tuning, and an RL-refined model.

To evaluate at scale, we also show results on the English-French task, with INLINEFORM0 sentence pairs and an INLINEFORM1 vocabulary, in Table TABREF13. Luong et al. (luong2014addressing) achieve a BLEU score of INLINEFORM2 with a six-layer deep encoder-decoder model. The two attention models, RNNSearch and RNNsearch-LV, achieve BLEU scores of INLINEFORM3 and INLINEFORM4 respectively. The previous best single NMT model, Deep-Att, with an 18-layer encoder and a 7-layer decoder, achieves a BLEU score of INLINEFORM5. For DeepLAU we obtain a BLEU score of INLINEFORM6 with a 4-layer encoder and a 4-layer decoder, which is on par with the SOTA system in terms of BLEU. Note that Zhou et al. (zhou2016deep) utilize a much greater depth as well as an external alignment model and extensive regularization to achieve their best results.

We now study the main factors that influence our results on the NIST Chinese-English translation task, and compare our approach with two SOTA topologies used in building deep NMT systems. Residual networks (ResNet) are among the pioneering works BIBREF27 BIBREF28 that utilize extra identity connections to enhance information flow, such that very deep neural networks can be effectively optimized. Sharing a similar idea, Wu et al. (wu2016google) leverage residual connections to train deep RNNs. Fast-forward (F-F) connections were proposed to reduce the propagation path length, the pioneering work on simplifying the training of deep NMT models BIBREF18; this approach can be viewed as a parametric ResNet with shortcut connections between adjacent layers, taking a linear sum of the input and the newly computed state.

Table TABREF18 shows the effect of the novel LAU. Comparing row 3 to row 7, we see that when INLINEFORM0 and INLINEFORM1 are set to 2, the average BLEU scores achieved by DeepGRU and DeepLAU are INLINEFORM2 and INLINEFORM3 respectively, so LAU brings an improvement of INLINEFORM4 BLEU. After increasing the model depth to 4 (rows 4 and 6), the improvement grows to INLINEFORM5. When DeepGRU is trained with a larger depth (say, 4), training becomes more difficult and performance falls behind its shallow partner, while for DeepLAU, as can be seen in row 9, increasing the depth even to INLINEFORM6 and INLINEFORM7 still yields a gain of INLINEFORM8 BLEU. Compared to previous shortcut-connection methods (rows 5 and 6),
LAU still achieves meaningful improvements over F-F connections and residual connections, by INLINEFORM9 and INLINEFORM10 BLEU points respectively.

DeepLAU introduces more parameters than DeepGRU. To compare models of the same parameter size, we increase the hidden size of the DeepGRU model. Row 3 shows that even with a twice-larger GRU layer, the BLEU score is INLINEFORM0, still worse than the corresponding DeepLAU model with fewer parameters.

Next we study the effect of model size. In Table TABREF18, starting from INLINEFORM0 and INLINEFORM1 and gradually increasing the model depth, we achieve substantial improvements in BLEU. With INLINEFORM2 and INLINEFORM3, our DeepLAU model yields the best BLEU score. We tried to increase the model depth with the same hidden size but failed to see further improvements. We then tried to increase the hidden size: comparing row 2 and row 3, we find the improvement is relatively small with a wider hidden size. It is also worth mentioning that a deep and thin network with fewer parameters can still achieve results comparable to its shallow partner. This suggests that depth plays a more important role than width in increasing the complexity of neural networks, and that our deliberately designed LAU benefits the optimization of such deep models.

A more detailed comparison between DeepLAU (4-layer encoder and 4-layer decoder), DeepLAU (2-layer encoder and 2-layer decoder), and DeepGRU (4-layer encoder and 4-layer decoder) suggests that deep architectures are essential to the superior performance of our system. In particular, we test BLEU scores on sentences longer than INLINEFORM0 on the merged test set. In all curves, performance degrades with increasing sentence length; however, DeepLAU models yield consistently higher BLEU scores than the DeepGRU model on longer sentences. These observations are consistent with our intuition that a very deep RNN model is especially good at modeling the nested latent structures of relatively complicated sentences, and that LAU plays an important role in optimizing such a complex deep model.

We propose the linear associative unit (LAU), which makes a fusion of both linear and non-linear transformations inside the recurrent unit. In this way, gradients decay much more slowly than in standard deep networks, enabling us to build deep neural networks for machine translation. Our empirical study shows that LAU can significantly improve the performance of NMT.

We sincerely thank the anonymous reviewers for their thorough reviewing and valuable suggestions. Wang's work is partially supported by the National Science Foundation for Deep Semantics Based Uighur to Chinese Machine Translation (ID 61662077). Qun Liu's work is partially supported by Science Foundation Ireland through the ADAPT Centre for Digital Content Technology (www.adaptcentre.ie) at Dublin City University, funded under the SFI Research Centres Programme (Grant 13/RC/2106), co-funded under the European Regional Development Fund. </s> |
<s>[INST] Graph Neural Networks with Generated Parameters for Relation Extraction [/INST] In recent years, graph neural networks (GNNs) have been applied to various fields of machine learning, including node classification BIBREF0, relation classification BIBREF1, molecular property prediction BIBREF2, and few-shot learning BIBREF3, and have achieved promising results on these tasks. These works demonstrate GNNs' strong power for relational reasoning on graphs. Relational reasoning aims to abstractly reason about entities or objects and their relations, which is an important part of human intelligence. Beyond graphs, relational reasoning is also of great importance in many natural language processing tasks, such as question answering, relation extraction, and summarization. Consider the example shown in Fig. 1: existing relation extraction models can easily extract the facts that Luc Besson directed a film, Leon: The Professional, and that the film is in English, but they fail to infer the relationship between Luc Besson and English without multi-hop relational reasoning. By considering the reasoning patterns, one can discover that Luc Besson could speak English, following the reasoning logic that "Luc Besson directed Leon: The Professional" and "this film is in English" together indicate "Luc Besson could speak English". However, most existing GNNs can only perform multi-hop relational reasoning on pre-defined graphs and cannot be directly applied to natural-language relational reasoning. Enabling multi-hop relational reasoning in natural language remains an open problem.

To address this issue, we propose graph neural networks with generated parameters (GP-GNNs), which adapt graph neural networks to the natural-language relational reasoning task. GP-GNNs first construct a fully connected graph from the entities in a sequence of text. After that, they employ three modules to perform relational reasoning: (1) an encoding module, which enables edges to encode rich information from natural language; (2) a propagation module, which propagates relational information among nodes; and (3) a classification module, which makes predictions from node representations. Compared to traditional GNNs, GP-GNNs can learn edge parameters from natural language, extending GNNs from inference over non-relational graphs, or graphs with a limited number of edge types, to unstructured inputs such as text.

In the experiments, we apply GP-GNNs to a classic natural-language relational reasoning task: relation extraction from text. We carry out experiments on a Wikipedia corpus aligned with the Wikidata knowledge base BIBREF4, and build a human-annotated test set as well as two distantly labeled test sets with different levels of denseness. Experimental results show that our model outperforms other models on the relation extraction task by considering multi-hop relational reasoning. We also perform a qualitative analysis showing that our model discovers more relations by reasoning more robustly than baseline models. Our main contributions are twofold: (1) we extend a novel graph neural network model with generated parameters to enable relational message passing with rich text information, which can be applied to relational reasoning on unstructured inputs such as natural language; (2) we verify GP-GNNs on the task of relation extraction from text, demonstrating their ability at multi-hop relational reasoning compared with models that extract relationships separately. Moreover, we present three datasets, which could help future researchers
compare their models in different settings.

GNNs were first proposed in BIBREF5 and were originally trained via the Almeida-Pineda algorithm BIBREF6. Later, the authors of BIBREF7 replaced the Almeida-Pineda algorithm with the more generic backpropagation and demonstrated its effectiveness empirically. BIBREF2 proposed applying GNNs to molecular property prediction tasks. BIBREF3 showed how to use GNNs to learn classifiers on image datasets in a few-shot manner. BIBREF2 studied the effectiveness of message passing in quantum chemistry. BIBREF8 applied message passing on a graph constructed from coreference links to answer relational questions. There are relatively few papers discussing how to adapt GNNs to natural language tasks. For example, BIBREF9 proposed applying GNNs to semantic role labeling, and BIBREF1 applied GNNs to knowledge base completion tasks. BIBREF10 applied GNNs to relation extraction by encoding dependency trees, and BIBREF11 applied GNNs to multi-hop question answering by encoding co-occurrence and coreference relationships. Although these works also apply GNNs to natural language processing tasks, they still perform message passing on pre-defined graphs. BIBREF12 introduced a novel neural architecture to generate a graph based on textual input and dynamically update the relationships during the learning process. In sharp contrast, this paper focuses on extracting relations from real-world relation datasets.

Relational reasoning has been explored in various fields. For example, BIBREF13 proposed a simple neural network to reason about the relationships of objects in a picture, BIBREF14 built a scene graph from an image, and BIBREF15 modeled the interactions of physical objects. In this paper, we focus on relational reasoning in the natural language domain.

Existing works BIBREF16 BIBREF17 BIBREF18 have demonstrated that neural networks are capable of capturing pairwise relationships between entities in certain situations. For example, BIBREF16 is one of the earliest works applying a simple CNN to this task, and BIBREF17 further extended it with piecewise max-pooling. BIBREF19 proposed a multi-window version of CNN for relation extraction. BIBREF18 studied an attention mechanism for relation extraction tasks. BIBREF20 predicted n-ary relations of entities in different sentences with graph LSTMs. BIBREF21 treated relations as latent variables, capable of inducing the relations without any supervision signal. BIBREF22 showed that relation paths play an important role in relation extraction, BIBREF23 showed the effectiveness of LSTMs BIBREF24 in relation extraction, and BIBREF25 proposed a walk-based model for relation extraction. The most related work is BIBREF26, whose model incorporates contextual relations with an attention mechanism when predicting the relation of a target entity pair. The drawback of existing approaches is that they cannot make full use of the multi-hop inference patterns among multiple entity pairs and their relations within a sentence.

We first define the task of natural-language relational reasoning: given a sequence of text with m entities, the task is to reason over both the text and the entities and make a prediction of the labels of the entities or entity pairs.

In this section, we introduce the general framework of GP-GNNs. GP-GNNs first build a fully connected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ is the set of entities and each edge $(v_i, v_j) \in \mathcal{E}$, $v_i, v_j \in \mathcal{V}$, corresponds to a sequence $s = x_0^{i,j}, x_1^{i,j}, \dots, x_{l-1}^{i,j}$ extracted from the text. After that, GP-GNNs employ three modules, (1) an encoding module, (2) a propagation module, and (3) a classification module, to perform relational reasoning, as shown in Fig. 2.
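As a small illustration of the first step, the sketch below enumerates the ordered entity pairs that form the fully connected graph's edge set; the function and variable names are ours, for illustration only.

```python
# Build the edge set of the fully connected entity graph: one ordered
# edge (i, j) per pair of distinct entities in the sentence.
from itertools import permutations

def build_edges(num_entities):
    return list(permutations(range(num_entities), 2))

print(build_edges(3))  # [(0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1)]
```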
The encoding module converts sequences into the transition matrices corresponding to edges, i.e., the parameters of the propagation module:

$$\mathcal{A}_{i,j}^{(n)} = f\big(E(x_0^{i,j}), E(x_1^{i,j}), \cdots, E(x_{l-1}^{i,j}); \theta_e^n\big), \qquad \text{(Eq. 6)}$$

where $f(\cdot)$ could be any model that can encode sequential data, such as LSTMs, GRUs, or CNNs, $E(\cdot)$ is an embedding function, and $\theta_e^n$ denotes the parameters of the encoding module at the n-th layer.

To encode the context of an entity pair (an edge in the graph), we first concatenate position embeddings with the word embeddings of the sentence:

$$E(x_t^{i,j}) = [\mathbf{x}_t; \mathbf{p}_t^{i,j}], \qquad \text{(Eq. 12)}$$

where $\mathbf{x}_t$ denotes the word embedding of word $x_t$ and $\mathbf{p}_t^{i,j}$ denotes the position embedding of word position t relative to the positions of the entity pair $(v_i, v_j)$; details of these two embeddings are introduced in the next two paragraphs. After that, we feed the representations of the entity pair into the encoder $f(\cdot)$, which contains a bidirectional LSTM and a multi-layer perceptron:

$$\mathcal{A}_{i,j}^{(n)} = \big[\mathtt{MLP}_n\big(\mathtt{BiLSTM}_n(E(x_0^{i,j}), E(x_1^{i,j}), \cdots, E(x_{l-1}^{i,j}))\big)\big], \qquad \text{(Eq. 13)}$$

where n denotes the index of the layer, $[\cdot]$ means reshaping a vector into a matrix, $\mathtt{BiLSTM}$ encodes a sequence by concatenating the tail hidden state of the forward LSTM and the head hidden state of the backward LSTM, and $\mathtt{MLP}$ denotes a multi-layer perceptron with non-linear activation $\sigma$.

We first map each token $x_t$ of the sentence $\lbrace x_0, x_1, \dots, x_{l-1}\rbrace$ to a $d_w$-dimensional embedding vector $\mathbf{x}_t$ using a word embedding matrix $W_e \in \mathbb{R}^{|V| \times d_w}$, where $|V|$ is the size of the vocabulary. Throughout this paper we stick to 50-dimensional GloVe embeddings pretrained on a 6-billion-token corpus BIBREF27.

In this work we consider a simple entity-marking scheme: we mark each token in the sentence as belonging to the first entity $v_i$, to the second entity $v_j$, or to neither of them. Each position marker is mapped to a $d_p$-dimensional vector by a position embedding matrix $P \in \mathbb{R}^{3 \times d_p}$; we use the notation $\mathbf{p}_t^{i,j}$ to represent the position embedding for $x_t$ corresponding to the entity pair $(v_i, v_j)$.

The propagation module learns representations for nodes layer by layer. The initial embeddings of the nodes, i.e., the representations at layer 0, are task-related: they could be embeddings that encode node features, or simply one-hot embeddings. Given the representations at layer n, the representations at layer n+1 are calculated by

$$\mathbf{h}_i^{(n+1)} = \sum_{v_j \in \mathcal{N}(v_i)} \sigma\big(\mathcal{A}_{i,j}^{(n)} \mathbf{h}_j^{(n)}\big), \qquad \text{(Eq. 8)}$$

where $\mathcal{N}(v_i)$ denotes the neighbours of node $v_i$ in graph $\mathcal{G}$ and $\sigma(\cdot)$ denotes a non-linear activation function.

Next, we use Eq. 8 to propagate information among nodes, where the initial embeddings of the nodes and the number of layers are further specified as follows. Suppose we are focusing on extracting the relationship between entity $v_i$ and entity $v_j$: their initial embeddings are annotated as $\mathbf{h}_{v_i}^{(0)} = \mathbf{a}_{\text{subject}}$ and $\mathbf{h}_{v_j}^{(0)} = \mathbf{a}_{\text{object}}$, while the initial embeddings of the other entities are set to all zeros. We set special values for the head and tail entities' initial embeddings as a kind of flag message that we expect to be passed through propagation; the annotators $\mathbf{a}_{\text{subject}}$ and $\mathbf{a}_{\text{object}}$ could also carry prior knowledge about the subject and object entities. In our experiments, we generalize the idea of gated graph neural networks BIBREF7 by setting $\mathbf{a}_{\text{subject}} = [\mathbf{1}; \mathbf{0}]^\top$ and $\mathbf{a}_{\text{object}} = [\mathbf{0}; \mathbf{1}]^\top$.
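The sketch below shows one possible shape of the encoding module described above (Eqs. 6-13): concatenate word and position embeddings, run a BiLSTM, and reshape an MLP output into the transition matrix A_{i,j}. All dimensions and names are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch of the GP-GNN edge encoder producing A_{i,j}.
import torch
import torch.nn as nn

class EdgeEncoder(nn.Module):
    def __init__(self, d_word=50, d_pos=8, d_hidden=64, d_node=2):
        super().__init__()
        self.d_node = d_node
        self.pos_emb = nn.Embedding(3, d_pos)   # first / second / neither
        self.bilstm = nn.LSTM(d_word + d_pos, d_hidden,
                              bidirectional=True, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(2 * d_hidden, 64), nn.ReLU(),
                                 nn.Linear(64, d_node * d_node))

    def forward(self, word_vecs, pos_ids):
        # word_vecs: (batch, len, d_word); pos_ids: (batch, len) in {0,1,2}
        x = torch.cat([word_vecs, self.pos_emb(pos_ids)], dim=-1)
        _, (h, _) = self.bilstm(x)
        # h[0]: forward tail state, h[1]: backward head state
        sent = torch.cat([h[0], h[1]], dim=-1)
        A = self.mlp(sent).view(-1, self.d_node, self.d_node)
        return A    # transition matrix used as edge parameters in Eq. 8
```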
context, however, since the graph is densely connected, the depth is interpreted simply as giving the model more expressive power. We treat $K$ as a hyper-parameter, the effectiveness of which will be discussed in detail in Sect. "The Effectiveness of the Number of Layers". Generally, the classification module takes node representations as inputs and outputs predictions. Therefore, the loss of GP-GNNs could be calculated as

$\mathcal{L} = g(\mathbf{h}_{0:|\mathcal{V}|-1}^{(0)}, \mathbf{h}_{0:|\mathcal{V}|-1}^{(1)}, \dots, \mathbf{h}_{0:|\mathcal{V}|-1}^{(K)}, Y; \theta_c)$ (Eq. 10)

where $\theta_c$ denotes the parameters of the classification module, $K$ is the number of layers in the propagation module, and $Y$ denotes the ground-truth labels. The parameters in GP-GNNs are trained by gradient descent methods. The output module takes the embeddings of the target entity pair $(v_i, v_j)$ as input, which are first converted by
$\mathbf{r}_{v_i,v_j} = [(\mathbf{h}_{v_i}^{(1)} \odot \mathbf{h}_{v_j}^{(1)})^\top; (\mathbf{h}_{v_i}^{(2)} \odot \mathbf{h}_{v_j}^{(2)})^\top; \dots; (\mathbf{h}_{v_i}^{(K)} \odot \mathbf{h}_{v_j}^{(K)})^\top]$ (Eq. 23)

where $\odot$ represents element-wise multiplication. This could be used for classification:

$\mathbb{P}(r_{v_i,v_j} \mid h, t, s) = \mathtt{softmax}(\mathtt{MLP}(\mathbf{r}_{v_i,v_j}))$ (Eq. 24)

where $r_{v_i,v_j} \in \mathcal{R}$ and $\mathtt{MLP}$ denotes a multi-layer perceptron module. We use cross-entropy here as the classification loss:

$\mathcal{L} = \sum_{s \in S} \sum_{i \neq j} \log \mathbb{P}(r_{v_i,v_j} \mid i, j, s)$ (Eq. 25)

where $r_{v_i,v_j}$ denotes the relation label for entity pair $(v_i, v_j)$ and $S$ denotes the whole corpus. In practice, we stack the embeddings for every target entity pair together to infer the underlying relationship between each pair of entities. We use PyTorch BIBREF28 to implement our models. To make them more efficient, we avoid loop-based, scalar-oriented code in favour of matrix and vector operations.

Relation extraction from text is a classic natural-language relational reasoning task. Given a sentence $s = x_0, x_1, \dots, x_{l-1}$, a set of relations $\mathcal{R}$, and a set of entities in this sentence $\mathcal{V}_s = \{v_1, v_2, \dots, v_{|\mathcal{V}_s|}\}$, where each $v_i$ consists of one token or a sequence of tokens, relation extraction from text is to identify the pairwise relationship $r_{v_i,v_j} \in \mathcal{R}$ between each entity pair $(v_i, v_j)$. In this section, we introduce how to apply GP-GNNs to relation extraction.

Our experiments mainly aim at: (1) showing that our best models could improve the performance of relation extraction under a variety of settings; (2) illustrating how the number of layers affects the performance of our model; and (3) performing a qualitative investigation to highlight the differences between our models and the baseline models. In both part (1) and part (2), we do three subparts of experiments: (i) we first show that our models could improve instance-level relation extraction on a human-annotated test set; (ii) we then show that our models could also help enhance the performance of bag-level relation extraction on a distantly labeled test set; and (iii) we also split out a subset of the distantly labeled test set where the number of entities and edges is large.

BIBREF26 have proposed a dataset built from Wikipedia corpora. There is a small difference between our task and theirs: our task is to extract the relationship between every pair of entities in the sentence, whereas their task is to extract the relationship between a given entity pair and the context entity pairs. Therefore, we need to modify their dataset: (1) we added reversed edges if they were missing from a given triple, e.g. if the triple (Earth, part of, Solar System) exists in the sentence, we add a reversed label (Solar System, has a member, Earth) to it; (2) for all of the entity pairs with no relations, we added "NA" labels to them. We use the same training set for all of the experiments.

Based on the test set provided by BIBREF26, 5 annotators were asked to label the dataset. They were asked to decide whether or not the distant supervision is right for every pair of entities. Only the instances accepted by all 5 annotators were incorporated into the human-annotated test set. There are 350 sentences and 1,230 triples in this test set.

We further split a dense test set from the distantly labeled test set. Our criteria are: (1) the number of entities should be strictly larger than 2; and (2) there must be at least one circle (with at least three entities) in the ground-truth label of the sentence. This test set can be used to test our method's performance on sentences with complex interactions between entities. There are 1,350 sentences, with more than 17,915 triples and 7,906 relational facts, in this test set. We select
the following models for comparison, the first four of which are our baseline models.

Context-Aware RE, proposed by BIBREF26. This model utilizes an attention mechanism to encode the context relations for predicting target relations. It was the state-of-the-art model on the Wikipedia dataset. This baseline is implemented by ourselves based on the authors' public repo.

Multi-Window CNN. BIBREF16 utilize convolutional neural networks to classify relations. Different from the original version of the CNN proposed in BIBREF16, our implementation follows BIBREF19 and concatenates features extracted by three different window sizes: 3, 5, 7.

PCNN, proposed by BIBREF17. This model divides the whole sentence into three pieces and applies max-pooling piecewise after the convolution layer. For the CNN and the following PCNN, the entity markers are the same as originally proposed in BIBREF16, BIBREF17.

LSTM, i.e. GP-GNN with $K = 1$ layer. A bidirectional LSTM BIBREF29 can be seen as a 1-layer variant of our model.

GP-GNN with $K = 2$ or $K = 3$ layers. These models are capable of performing 2-hop reasoning and 3-hop reasoning, respectively.

We select the best parameters on the validation set. We select the non-linear activation function between relu and tanh, and select $d_n$ among $\{2, 4, 8, 12, 16\}$. We have also tried two forms of adjacency matrices: tied-weights (setting $\mathcal{A}^{(n)} = \mathcal{A}^{(n+1)}$) and untied-weights. Table 1 shows our best hyper-parameter settings, which are used in all of our experiments.

So far we have only talked about the way to implement sentence-level relation extraction. To evaluate our models and the baseline models at bag level, we utilize a bag of sentences with a given entity pair to score the relations between them. BIBREF17 formalize bag-level relation extraction as multi-instance learning. Here we follow their idea and define the score function of an entity pair and its corresponding relation $r$ as a max-one setting:

$E(r \mid v_i, v_j, S) = \max_{s \in S} \mathbb{P}(r_{v_i,v_j} \mid i, j, s)$ (Eq. 41)

From Tables 2 and 3 we can see that our best models outperform all the baseline models significantly on all three test sets. These results indicate that our model can successfully conduct reasoning on the fully-connected graph with parameters generated from natural language. They also indicate that our model not only performs well on sentence-level relation extraction but also improves bag-level relation extraction. Note that Context-Aware RE also incorporates context information to predict the relation of the target entity pair; however, we argue that Context-Aware RE only models the co-occurrence of various relations, ignoring whether the context relation participates in the reasoning process for the relation extraction of the target entity pair. Context-Aware RE may introduce more noise, for it may mistakenly increase the probability of a relation whose topic is merely similar to those of the context relations. We give samples to illustrate this issue in Sect. "Qualitative Results: Case Study". Another interesting observation is that our 1-layer version outperforms CNN and PCNN on these three datasets. One probable reason is that sentences from the Wikipedia corpus are often complex, which may be hard for CNN and PCNN to model. Similar conclusions are also reached by BIBREF30.

The number of layers represents the reasoning ability of our models: a $K$-layer version has the ability to infer $K$-hop relations. To demonstrate the effects of the number of layers, we also compare our models with different numbers of layers. From Table 2 and Table 3, we can see that on all three datasets the 3-layer version achieves the best results. We can also see from Fig. 3 that as the
number of layers grows, the precision curves get higher and higher, indicating that considering more hops in reasoning leads to better performance. However, the improvement from the third layer is much smaller on the overall distantly supervised test set than on the dense subset. This observation reveals that the reasoning mechanism helps identify relations especially on sentences with more entities. We can also see that on the human-annotated test set the 3-layer version shows a greater improvement over the 2-layer version than the 2-layer version does over the 1-layer version. This is probably because bag-level relation extraction is much easier. In real applications, different variants could be selected for different kinds of sentences, or the predictions from different models could be ensembled. We leave these explorations for future work.

Tab. 4 shows qualitative results that compare our GP-GNN model and the baseline models. The results show that GP-GNN has the ability to infer the relationship between two entities through reasoning. In the first case, GP-GNN implicitly learns a logic rule $\exists y: (x \rightarrow y) \wedge (y \rightarrow z) \Rightarrow x \rightarrow z$ to derive (Oozham, language spoken, Malayalam), and in the second case our model implicitly learns another logic rule of the same transitive form to find the fact (BankUnited Center, located in, English). Note that (BankUnited Center, located in, English) is not even in Wikidata, but our model can identify this fact through reasoning. We also find that Context-Aware RE tends to predict relations with similar topics. For example, in the third case, "shares border with" and "located in" are both relations about territorial issues. Consequently, Context-Aware RE makes a mistake by predicting (Kentucky, shares border with, Ohio). As we have discussed before, this is due to its mechanism of modeling the co-occurrence of multiple relations. However, in our model, since Ohio and Johnson County have no relationship, this wrong relation is not predicted.

We addressed the problem of utilizing GNNs to perform relational reasoning with natural language. Our proposed model, GP-GNN, solves the relational message-passing task by encoding natural language as parameters and performing propagation from layer to layer. Our model can also be considered as a more generic framework for the graph generation problem with unstructured input other than text, e.g. images, videos, or audio. In this work we demonstrate its effectiveness in predicting the relationship between entities in natural language at sentence level and bag level, and show that by considering more hops in reasoning, the performance of relation extraction can be significantly improved. </s> |
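To make the pipeline above concrete, the following is a minimal PyTorch sketch of a single GP-GNN layer: the pair-conditioned encoder of Eqs. 6-13 (position and word embeddings, BiLSTM, MLP, reshape to a transition matrix) and the propagation step of Eq. 8. It is a reading aid under stated assumptions, not the authors' released code: the hidden sizes, the default dimensions, and the einsum-based message passing are illustrative (the paper tunes the activation between relu and tanh and the node dimension $d_n$ on a validation set).

import torch
import torch.nn as nn

class GPGNNLayer(nn.Module):
    # One propagation layer n: generates a transition matrix A_ij per entity
    # pair from the pair-conditioned token sequence (Eqs. 6-13), then
    # propagates node states over the fully-connected entity graph (Eq. 8).
    def __init__(self, d_word=50, d_pos=8, d_node=8, d_hid=64):
        super().__init__()
        self.d_node = d_node
        self.pos_emb = nn.Embedding(3, d_pos)  # first entity / second entity / neither
        self.bilstm = nn.LSTM(d_word + d_pos, d_hid,
                              bidirectional=True, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(2 * d_hid, d_hid), nn.ReLU(),
                                 nn.Linear(d_hid, d_node * d_node))

    def edge_matrix(self, word_vecs, pos_ids):
        # word_vecs: (l, d_word) GloVe vectors; pos_ids: (l,) markers in {0, 1, 2}
        x = torch.cat([word_vecs, self.pos_emb(pos_ids)], dim=-1)  # Eq. 12
        _, (h, _) = self.bilstm(x.unsqueeze(0))
        h = torch.cat([h[0, 0], h[1, 0]], dim=-1)  # fwd tail state + bwd head state
        return self.mlp(h).view(self.d_node, self.d_node)  # reshape vector -> matrix

    def propagate(self, A, H):
        # A: (|V|, |V|, d_node, d_node) generated matrices; H: (|V|, d_node)
        messages = torch.einsum('ijab,jb->ia', A, H)  # sum over neighbours v_j
        return torch.tanh(messages)                   # non-linearity sigma (Eq. 8)

The initial node states would be set as described above: $\mathbf{a}_{\text{subject}}$ for $v_i$, $\mathbf{a}_{\text{object}}$ for $v_j$, and zeros elsewhere; in the untied-weights variant, each of the $K$ layers owns its own encoder.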
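In the same spirit, here is a hedged sketch of the output and classification module (Eqs. 23-25) and of the bag-level max-one scoring (Eq. 41); the MLP shape and dimensions are again assumptions rather than the reported configuration.

def pair_representation(H_layers, i, j):
    # Eq. 23: concatenate the element-wise products of the pair's node
    # states taken from every propagation layer 1..K.
    return torch.cat([H[i] * H[j] for H in H_layers], dim=-1)

class RelationClassifier(nn.Module):
    # Eq. 24: P(r | h, t, s) = softmax(MLP(r_ij)).
    def __init__(self, d_in, n_relations, d_hid=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU(),
                                 nn.Linear(d_hid, n_relations))

    def forward(self, r_ij):
        # Returns logits; nn.CrossEntropyLoss applies the log-softmax,
        # yielding the classification loss of Eq. 25.
        return self.mlp(r_ij)

def bag_level_score(logits_per_sentence):
    # Eq. 41 (max-one): score a relation for an entity pair by the maximum
    # probability it receives in any sentence of the bag.
    probs = torch.softmax(torch.stack(logits_per_sentence), dim=-1)
    return probs.max(dim=0).values

Stacking the representations of all ordered entity pairs of a sentence into one batch, as the text suggests, keeps the implementation free of scalar loops.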
<s>[INST] On the emergence of syntactic structures: quantifying and modelling duality of patterning [/INST] In a seminal paper, Charles Hockett BIBREF0 identified duality of patterning as one of the core design features of human language. A language exhibits duality of patterning when it is organized at two distinct levels. At a first level, meaningless forms (typically referred to as phonemes) are combined into meaningful units; henceforth, this property will be referred to as combinatoriality. For example, the English forms /k/, /æ/ and /t/ are combined in different ways to obtain the three words /kæt/, /ækt/ and /tæk/, respectively written cat, act and tack. Because the individual forms in them are meaningless, these words have no relation in meaning in spite of being made of the same forms. This is a very important property, thanks to which all of the many words of the English lexicon can be obtained by relatively simple combinations of about forty phonemes. If phonemes had individual meaning, this degree of compactness would not be possible. At a second level, meaningful units (typically referred to as morphemes) are composed into larger units, the meaning of which is related to the individual meanings of the composing units; henceforth, this property will be referred to as compositionality. For example, the meaning of the word boyfriend is related to the meaning of the words boy and friend, which compose it. The compositional level includes syntax as well: for example, the meaning of the sentence "cats eat fishes" is related to the meaning of the words cats, eat and fishes. In this paper, for the sake of simplicity, we focus exclusively on the lexicon level. This has to be considered a first step towards the comprehension of the emergence of complex structures in languages.

In this section we quantify the notion of duality of patterning as observed in real languages, in order to provide suitable measures for combinatoriality and compositionality.

We now focus on the mechanisms that could lead to the establishment of duality of patterning in a lexicon. There have been a number of previous works devoted to explaining the emergence of combinatoriality and compositionality. A thorough review of the attempts presented in the literature is beyond the scope of the present paper; here we shall only focus on a few aspects which are relevant for our purposes. It should be remarked that the two facets of duality of patterning have often been studied independently from each other BIBREF3 BIBREF4 BIBREF5 BIBREF6. It should also be remarked that studies in this area have often focused on evolutionary timescales, e.g. BIBREF7 BIBREF8 BIBREF9, disregarding in this way the peer-to-peer negotiation taking place on cultural timescales in large populations. In contrast, there is evidence suggesting that humans are capable of evolving languages with duality of patterning in the course of only one or two generations: consider for instance Nicaraguan Sign Language BIBREF10 or the emergence of Pidgin and Creole languages BIBREF11. Here we aim at explaining in a unitary framework the co-emergence of combinatoriality and compositionality. In addition, unlike previous approaches that looked for the emergence of meaning-symbol compositional mappings out of a small bounded set of predefined symbols available to the population, our approach adopts an open-ended set of forms, and it does not rely on any predefined relations between objects/meanings and symbols. For instance, we require combinatoriality to emerge out of a virtually infinite set of forms which are freely provided to
a blank slate of individuals. Such a set can only be limited by means of self-organization through repeated language games, the only purpose being that of communication. In addition, with our simple representation of the conceptual space, modeled as a graph, we do not hypothesize any predefined linguistic category or predefined meaning. This choice also allows us to model the effect of differently shaped conceptual spaces and of conceptual spaces that may differ from individual to individual.

In this paper we have investigated duality of patterning at the lexicon level. We have quantified in particular the notions of combinatoriality and compositionality as observed in real languages, as well as in a large-scale dataset produced in the framework of a web-based word association experiment BIBREF1. We have paralleled this empirical analysis with a modeling scheme, the Blending Game, whose aim is that of identifying the main determinants for the emergence of duality of patterning in language. We analyzed the main properties of the lexicon that emerged from the Blending Game as a function of the two parameters of the model: the graph connectivity $p_{link}$ and the memory scale $\tau$. We found that the properties of the emerging lexicon related to combinatoriality, namely the word-length distribution, the frequency of use of the different forms, and a measure for the combinatoriality itself, reflect both qualitatively and quantitatively the corresponding properties as measured in human languages, provided that the memory parameter $\tau$ is sufficiently high, that is, that a sufficiently high effort is required in order to understand and learn brand-new forms. Conversely, the compositional properties of the lexicon are related to the parameter $p_{link}$, which is a measure of the level of structure of the conceptual graph. For intermediate and low values of $p_{link}$, semantic relations between objects are more differentiated than in the situation of a denser graph, in which every object is related to every other one, and compositionality is enhanced. In summary, while the graph connectivity strongly affects the compositionality of the lexicon, noise in communication strongly affects its combinatoriality. These results are important because they demonstrate for the first time that the two sides of duality of patterning can emerge simultaneously as a consequence of a purely cultural dynamics in a simulated environment which contains meaningful relations.

Many directions are open for future investigations: first of all, elucidating the emergence of duality of patterning at the syntax level, beyond that of the lexicon. In addition, many different manipulations of our modelling scheme are possible. One very interesting one consists in relaxing the assumption that the conceptual spaces of all the individuals are identical and modelled as a static graph, imagining instead that the conceptual space of each individual gets continuously reshaped by the interactions among the users. In this way one would realize a true co-evolution of the conceptual spaces of the individuals and of their inventories of associations between objects and words. Finally, it is worth mentioning how recent advances in information and communication technologies nowadays allow the realization of focused experiments also in the framework of the emergence of linguistic structures, and a general trend is emerging for the adoption of web games (see for instance the recently introduced Experimental Tribe platform, www.xtribe.eu) as a very interesting laboratory to run experiments in the
social sciences and whenever the contribution of human beings is crucially required for research purposes. This is opening tremendous opportunities to monitor the emergence of specific linguistic features and their co-evolution with the structure of our conceptual spaces. The authors acknowledge support from the KREYON project, funded by the Templeton Foundation under contract no. 51663. It is a pleasure to warmly thank Bruno Galantucci, with whom part of this work has been carried out. </s> |
<s>[INST] TArC: Incrementally and Semi-Automatically Collecting a Tunisian Arabish Corpus [/INST] Arabish is the romanization of Arabic Dialects ADs used for informal messaging especially in social networks This writing system provides an interesting ground for linguistic research computational as well as sociolinguistic mainly due to the fact that it is a spontaneous representation of the ADs and because it is a linguistic phenomenon in constant expansion on the web Despite such potential little research has been dedicated to Tunisian Arabish TA In this paper we describe the work we carried to develop a flexible and multipurpose TA resource This will include a TA corpus together with some tools that could be useful for analyzing the corpus and for its extension with new data First of all the resource will be useful to give an overview of the TA At the same time it will be a reliable representation of the Tunisian dialect TUN evolution over the last ten years the collected texts date from 2009 to present This selection was done with the purpose to observe to what extent the TA orthographic system has evolved toward a writing convention Therefore the TArC will be suitable for phonological morphological syntactic and semantic studies both in the linguistic and the Natural Language Processing NLP domains For these reasons we decided to build a corpus which could highlight the structural characteristics of TA through different annotation levels including Part of Speech POS tags and lemmatization In particular to facilitate the match with the already existing tools and studies for the Arabic language processing we provide a transcription in Arabic characters at token level following the Conventional Orthography for Dialectal Arabic guidelines CODA CODA star BIBREF0 and taking into account the specific guidelines for TUN CODA TUN BIBREF1 Furthermore even if the translation is not the main goal of this research we have decided to provide an Italian translation of the TArCs texts Even though in the last few years ADs have received an increasing attention by the NLP community many aspects have not been studied yet and one of these is the Arabish codesystem The first reason for this lack of research is the relatively recent widespread of its use before the advent of the social media Arabish usage was basically confined to text messaging However the landscape has changed considerably and particularly thanks to the massive registration of users on Facebook since 2008 At that time in Tunisia there were still no Arabic keyboards neither for Personal Computers nor for phones so Arabicspeaking users designed TA for writing in social media Table TABREF14 A second issue that has held back the study of Arabish is its lack of a standard orthography and the informal context of use It is important to note that also the ADs lack a standard codesystem mainly because of their oral nature In recent years the scientific community has been active in producing various sets of guidelines for dialectal Arabic writing in Arabic characters CODA Conventional Orthography for Dialectal Arabic BIBREF2 The remainder of the paper is organized as follows section SECREF2 is an overview of NLP studies on TUN and TA section SECREF3 describes TUN and TA section SECREF4 presents the TArC corpus building process section SECREF5 explains preliminary experiments with a semiautomatic transcription and annotation procedure adopted for a faster and simpler construction of the TArC corpus conclusions are drawn in section SECREF6 In this 
section we provide an overview of work done on automatic processing of TUN and TA As briefly outlined above many studies on TUN and TA aim at solving the lack of standard orthography The first Conventional Orthography for Dialectal Arabic CODA was for Egyptian Arabic BIBREF2 and it was used by bies2014transliteration for Egyptian Arabish transliteration into Arabic script The CODA version for TUN CODA TUN was developed by DBLPconflrecZribiBMEBH14 and was used in many studies like boujelbane2015traitements Such work presents a research on automatic word recognition in TUN Narrowing down to the specific field of TA CODA TUN was used in masmoudi2015arabic to realize a TAArabic script conversion tool implemented with a rulebased approach The most extensive CODA is CODA a unified set of guidelines for 28 Arab city dialects BIBREF0 For the present research CODA is considered the most convenient guideline to follow due to its extensive applicability which will support comparative studies of corpora in different ADs As we already mentioned there are few NLP tools available for Arabish processing in comparison to the amount of NLP tools realized for Arabic Considering the lack of spelling conventions for Arabish previous effort has focused on automatic transliteration from Arabish to Arabic script eg chalabi2012romanized darwish2013arabizi and al2014automatic These three work are based on a charactertocharacter mapping model that aims at generating a range of alternative words that must then be selected through a linguistic model A different method is presented in younes2018sequence in which the authors present a sequencetosequencebased approach for TAArabic characters transliteration in both directions BIBREF3 BIBREF4 Regardless of the great number of work done on TUN automatic processing there are not a lot of TUN corpora available for free BIBREF5 To the best of our knowledge there are only five TUN corpora freely downloadable one of these is the PADIC PADIC composed of 6400 sentences in six Arabic dialects translated in Modern Standard Arabic MSA and annotated at sentence level Two other corpora are the Tunisian Dialect Corpus Interlocutor TuDiCoI Tudicoi and the Spoken Tunisian Arabic Corpus STAC stac which are both morphosyntactically annotated The first one is a spoken taskoriented dialogue corpus which gathers a set of conversations between staff and clients recorded in a railway station TuDiCoI consists of 21682 words in client turns BIBREF7 The STAC is composed of 42388 words collected from audio files downloaded from the web as TV channels and radio stations files BIBREF8 A different corpus is the TARIC Taric which contains 20 hours of TUN speech transcribed in Arabic characters BIBREF9 The last one is the TSAC Tsac containing 17k comments from Facebook manually annotated to positive and negative polarities BIBREF10 This corpus is the only one that contains TA texts as well as texts in Arabic characters As far as we know there are no available corpora of TA transcribed in Arabic characters which are also morphosyntactically annotated In order to provide an answer to the lack of resources for TA we decided to create TArC a corpus entirely dedicated to the TA writing system transcribed in CODA TUN and provided with a lemmatization level and POS tag annotation The Tunisian dialect TUN is the spoken language of Tunisian everyday life commonly referred to as addrija almmiyya or According to the traditional diatopic classification TUN belongs to the area of Maghrebi Arabic of which the other 
main varieties are Libyan Algerian Moroccan and the Hassnya variety of Mauritania BIBREF11 Arabish is the transposition of ADs which are mainly spoken systems into written form thus turning into a quasioral system this topic will be discussed in section SECREF12 In addition Arabish is not realized through Arabic script and consequently it is not subject to the Standard Arabic orthographic rules As a result it is possible to consider TA as a faithful written representation of the spoken TUN BIBREF12 The following list provides an excerpt of the principal features of TUN which through the TArC would be researched in depth among many others At the phonetic level some of the main characteristics of TUN and Maghrebi Arabic in general are the following 1em0pt Strong influence of the Berber substratum to which it is possible to attribute the conservative phonology of TUN consonants 1em0pt Presence of new emphatic phonemes above all r l b Realization of the voiced postalveolar affricate as fricative Overlapping of the pharyngealized voiced alveolar stop with the fricative Preservation of a full glottal stop mainly in cases of loans from Classical Arabic CA or exclamations and interjections of frequent use Loss of short vowels in open syllables Monophthongization In TUN house becomes meaning room Palatalization of Imla literally inclination In TUN the phenomenon is of medium intensity Thereby the word door becomes Metathesis Transposition of the first vowel of the word It occurs when nonconjugated verbs or names without suffix begin with the sequence CCvC where C stands for ungeminated consonant and v for short vowel When a suffix is added to this type of name or a verb of this type is conjugated the first vowel changes position giving rise to the CvCC sequence In TUN it results in he has understood she has understood or leg my leg Regarding the morphosyntactic level TUN presents 1em0pt Addition of the prefix n to first person verbal morphology in mudri imperfective Realization of passivereflexive verbs through the morpheme t prefixed to the verb as in the example the shirts of Hafsiya are not bad lit they dress Loss of gender distinction at the 2nd and 3rd persons at verbal and pronominal level Disappearance of the dual form from verbal and pronominal inflexion There is a residual of pseudodual in some words fixed in time in their dual form Loss of relative pronouns flexion and replacement with the invariable form Use of presentatives r and h with the meaning of here look as in the example in TUN r here I am asphyxiated by problems or in here you are finding it the solution hence you were lucky Presence of circumfix negation marks such as verb The last element of this structure must be omitted if there is another negation such as the Tunisian adverb never as in the structure personal pronoun suffix perfect verb This construction is used to express the concept of never having done the action in question as in the example I never imagined that Instead to deny an action pointing out that it will never repeat itself again a structure widely used is ma imperfective verb where the element within the circumfix marks is a grammaticalized element of verbal origin from CA meaning to go back to reoccur which gives the structure a sense of denied repetitiveness as in the sentence he will not come back Finally to deny the nominal phrase in TUN both the and the circumfix marks are frequently used For the negative form of the verb to be in the present circumfix marks can be combined with the personal suffix 
pronoun placed between the marks as in I am not Within the negation marks we can also find other types of nominal structures such as mind personal pronoun suffix which has a value equivalent to the verb be aware of as in the example I did not know As previously mentioned we consider Arabish a quasioral system With quasiorality it is intended the form of communication typical of ComputerMediated Communication CMC characterized by informal tones dependence on context lack of attention to spelling and especially the ability to create a sense of collectivity BIBREF15 TA and TUN have not a standard orthography with the exception of the CODA TUN Nevertheless TA is a spontaneous codesystem used since more than ten years and is being conventionalized by its daily usage From the table TABREF14 where the coding scheme of TA is illustrated it is possible to observe that there is no onetoone correspondence between TA and TUN characters and that often Arabish presents overlaps in the encoding possibilities The main issue is represented by the not proper representation by TA of the emphatic phones and On the other hand being TA not codified through the Arabic alphabet it can well represent the phonetic realization of TUN as shown by the following examples The Arabic alphabet is generally used for formal conversations in Modern Standard Arabic MSA the Arabic of formal situations or in that of Classical Arabic CA the Arabic of the Holy Qurn also known as The Beautiful Language Like MSA and CA also Arabic Dialects ADs can be written in the Arabic alphabet but in this case it is possible to observe a kind of hypercorrection operated by the speakers in order to respect the writing rules of MSA For example in TUN texts written in Arabic script it is possible to find a silent vowel namely an epenthetic alif written at the beginning of those words starting with the sequence CCv which is not allowed in MSA Writing TUN in Arabic script the CodeMixing or Switching in foreign language will be unnaturally reduced As described in table TABREF14 the Arabic alphabet is provided with three short vowels which correspond to the three long ones but TUN presents a wider range of vowels Indeed regarding the early presented characteristics of TUN the TA range of vowels offers better possibility to represent most of the TUN characteristics outlined in the previous subsection in particular nosep Palatalization Vowel metathesis Monophthongization In order to analyze the TA system we have built a TA Corpus based on social media data considering this as the best choice to observe the quasioral nature of the TA system The corpus collection procedure is composed of the following steps Thematic categories detection Match of categories with sets of semantically related TA keywords Texts and metadata extraction Step UNKREF20 In order to build a Corpus that was as representative as possible of the linguistic system it was considered useful to identify wide thematic categories that could represent the most common topics of daily conversations on CMC In this regard two instruments with a similar thematic organization have been employed nosep A Frequency Dictionary of Arabic BIBREF16 In particular its Thematic Vocabulary List TVL Loanword Typology Meaning List A list of 1460 meanings LTML BIBREF17 The TVL consists of 30 groups of frequent words each one represented by a thematic word The second consists of 23 groups of basic meanings sorted by representative word heading Considering that the boundaries between some categories are very 
blurred some categories have been merged such as Body and Health see table TABREF26 Some others have been eliminated being not relevant for the purposes of our research eg Colors Opposites Male names In the end we obtained 15 macrocategories listed in table TABREF26 Step UNKREF21 Aiming at easily detect texts and the respective seed URLs without introducing relevant query biases we decided to avoid using the category names as query keywords BIBREF18 Therefore we associated to each category a set of TA keywords belonging to the basic Tunisian vocabulary We found that a semantic category with three meanings was enough to obtain a sufficient number of keywords and URLs for each category For example to the category Family the meanings son wedding divorce have been associated in all their TA variants obtaining a set of 11 keywords table TABREF26 Step UNKREF22 We collected about 25000 words and the related metadata as first part of our corpus which are being semiautomatically transcribed into Arabic characters see next sections We planned to increase the size of the corpus at a later time Regarding the metadata we have extracted the information published by users focusing on the three types of information generally used in ethnographic studies Gender Male M and Female F Age range 1025 2535 3550 5090 City of origin In order to create our corpus we applied a wordlevel annotation This phase was preceded by some data preprocessing steps in particular tokenization Each token has been associated with its annotations and metadata table TABREF32 In order to obtain the correspondence between Arabish and Arabic morpheme transcriptions tokens were segmented into morphemes This segmentation was carried out completely manually for a first group of tokens In its final version each token is associated with a total of 11 different annotations corresponding to the number of the annotation levels we chose An excerpt of the corpus after tokens annotation is depicted in table TABREF32 For the sake of clarity in table TABREF32 we show The A column Cor indicates the tokens source code For example the code 3fE which stands for 3rab fi Europe is the forum from which the text was extracted The B column Textco is the publication date of the text The C column Par is the row index of the token in the paragraph The D column W is the index of the token in the sentence When W corresponds to a range of numbers it means that the token has been segmented in to its components specified in the rows below The E column Arabi corresponds to the token transcription in Arabish The F column Tra is the transcription into Arabic characters The G column Ita is the translation to Italian The H column Lem corresponds to the lemma The I column POS is the PartOfSpeech tag of the token The tags that have been used for the POS tagging are conform to the annotation system of Universal Dependencies The last three columns J K L contain the metadata Var Age Gen Since TA is a spontaneous orthography of TUN we considered important to adopt the CODA guidelines as a model to produce a unified lemmatization for each token column Lem in table TABREF32 In order to guarantee accurate transcription and lemmatization we annotated manually the first 6000 tokens with all the annotation levels Some annotation decisions were taken before this step with regard to specific TUN features Foreign words We transcribed the Arabish words into Arabic characters except for CodeSwitching terms In order to not interrupt the sentences continuity we decide to transcribe 
CodeMixing terms into Arabic script However at the end of the corpus creation process these words will be analyzed making the distinction between acclimatized loans and CodeMixing The first ones will be transcribed into Arabic characters also in Lem as shown in table TABREF33 The second ones will be lemmatized in the foreign language mostly French as shown in table TABREF34 Typographical errors Concerning typos and typical problems related to the informal writing habits in the web such as repeated characters to simulate prosodic features of the language we have not maintained all these characteristics in the transcription column Tra Logically these were neither included in Lem according to the CODA conventions as shown in table TABREF34 PhonoLexical exceptions We used the grapheme only in loanword transcription and lemmatization As can be seen in table TABREF35 the Hilalian phoneme g of the Turkish loanword gawriyya has been transcribed and lemmatized with the grapheme Glottal stop As explained in CODA TUN real initial and final glottal stops have almost disappeared in TUN They remain in some words that are treated as exceptions eg question BIBREF1 Indeed we transcribe the glottal stops only when it is usually pronounced and if it does not we do not write the glottal stops at the beginning of the word or at the end neither in the transcription nor in the lemmas Negation Marks CODA TUN proposes to keep the MSA rule of maintaining a space between the first negation mark and the verb in order to uniform CODA TUN to the first CODA BIBREF2 However as DBLPconflrecZribiBMEBH14 explains in TUN this rule does not make really sense but it should be done to preserve the consistency among the various CODA guidelines Indeed in our transcriptions we report what has been produced in Arabish following CODA TUN rules while in lemmatization we report the verb lemma At the same time we segment the negative verb in its minor parts the circumfix negation marks and the conjugated verb For the first one we describe the negative morphological structure in the Tra and Lem columns as in table TABREF36 For the second one as well as the other verbs we provide transcription and lemmatization In order to make the corpus collection easier and faster we adopted a semiautomatic procedure based on sequential neural models BIBREF19 BIBREF20 Since transcribing Arabish into Arabic is by far the most important information to study the Arabish codesystem the semiautomatic procedure concerns only transcription from Arabish to Arabic script In order to proceed we used the first group of roughly 6000 manually transcribed tokens as training and test data sets in a 10fold cross validation setting with 91 proportions for training and test respectively As we explained in the previous section French tokens were removed from the data More precisely whole sentences containing nontranscribable French tokens codeswitching were removed from the data Since at this level there is no way for predicting when a French word can be transcribed into Arabic and when it has to be left unchanged French tokens create some noise for an automatic probabilistic model After removing sentences with French tokens the data reduced to roughly 5000 tokens We chose this amount of tokens for annotation blocks in our incremental annotation procedure We note that by combining sentence paragraph and token index in the corpus whole sentences can be reconstructed However from 5000 tokens roughly 300 sentences could be reconstructed which are far too few to be used for 
training a neural model. Instead, since tokens are transcribed at morpheme level, we split Arabish tokens into characters and Arabic tokens into morphemes, and we treated each token itself as a sequence. Our model thus learns to map Arabish characters into Arabic morphemes. The 10-fold cross-validation with this setting gave a token-level accuracy of roughly 71%. This result is not satisfactory on an absolute scale; however, it is more than encouraging taking into account the small size of our data. It means that fewer than 3 tokens on average out of 10 must be corrected to increase the size of our corpus. With this model we automatically transcribed into Arabic morphemes roughly 5,000 additional tokens, corresponding to the second annotation block. This would take at least 7.5 days to annotate manually, but thanks to the accuracy of the automatic annotation it was manually corrected in 3 days. The accuracy of the model on the annotation of the second block was roughly 70%, which corresponds to the accuracy on the test set. The manually corrected additional tokens were added to the training data of our neural model, and a new block was automatically annotated and manually corrected. Both the accuracy on the test set and on the annotation block remained at around 70%. This is because the block added to the training data was significantly different from the previous one and from the third. Adding the third block to the training data and annotating a fourth block with the newly trained model gave, in contrast, an accuracy of roughly 80%. This incremental semi-automatic transcription procedure is in progress for the remaining blocks, but it is clear that it will make the corpus annotation increasingly easier and faster as the amount of training data grows. Our goal concerning transcription is to have the 25,000 tokens mentioned in section SECREF19 annotated automatically and manually corrected. These data will constitute our gold annotated data, and they will be used to automatically transcribe further data.

In this paper we presented TArC, the first Tunisian Arabish Corpus annotated with morpho-syntactic information. We discussed the decisions taken in order to highlight the phonological and morphological features of TUN through the TA corpus structure. Concerning the building process, we have shown the steps undertaken and our effort to make the corpus as representative as possible of TA. We therefore described the text collection stage as well as the corpus building and the semi-automatic procedure adopted for transcribing TA into Arabic script, taking into account the CODA* and CODA TUN guidelines. At the present stage of research, TArC consists of 25,000 tokens; however, our work is in progress, and for future research we plan to reinforce the semi-automatic transcription, which has already shown encouraging results (accuracy 70%). We also intend to realize a semi-automatic TA Part-Of-Speech tagger. Thus, we aim to develop tools for TA processing, and in so doing we strive to complete the annotation levels (transcription, POS tag, lemmatization) semi-automatically, in order to increase the size of the corpus, making it available for linguistic analyses on TA and TUN. </s> |
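The paper describes the transcription model only as a sequential neural model mapping Arabish characters to Arabic morphemes (BIBREF19, BIBREF20). As a rough illustration of that token-as-sequence setup, here is a minimal character-to-morpheme encoder-decoder sketch in PyTorch; the architecture, dimensions, and vocabulary handling are our assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class ArabishTranscriber(nn.Module):
    # Hypothetical encoder-decoder: a token's Arabish characters in,
    # its Arabic-script morphemes out.
    def __init__(self, n_chars, n_morphemes, d=128):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, d)
        self.encoder = nn.LSTM(d, d, bidirectional=True, batch_first=True)
        self.morph_emb = nn.Embedding(n_morphemes, d)
        self.decoder = nn.LSTMCell(d, 2 * d)
        self.out = nn.Linear(2 * d, n_morphemes)

    def forward(self, char_ids, morph_ids):
        # char_ids: (batch, token_length); morph_ids: (batch, n_output_morphemes)
        _, (h, _) = self.encoder(self.char_emb(char_ids))
        hx = torch.cat([h[0], h[1]], dim=-1)  # summary of the character sequence
        cx = torch.zeros_like(hx)
        logits = []
        for t in range(morph_ids.size(1)):    # teacher forcing during training
            hx, cx = self.decoder(self.morph_emb(morph_ids[:, t]), (hx, cx))
            logits.append(self.out(hx))
        return torch.stack(logits, dim=1)

In the incremental procedure described above, each manually corrected block of roughly 5,000 tokens would simply be appended to the training set before the next block is transcribed.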
<s>[INST] Speakers account for asymmetries in visual perspective so listeners don't have to [/INST] Our success as a social species depends on our ability to understand and be understood by different communicative partners across different contexts Theory of mindthe ability to represent and reason about others mental statesis considered to be the key mechanism that supports such contextsensitivity in our everyday social interactions Being able to reason about what others see want and think allows us to make more accurate predictions about their future behavior in different contexts and adjust our own behaviors accordingly BIBREF0 Over the past two decades however there has been sustained debate over the extent to which adults actually make of use theory of mind in communication On one hand accounts of language use in the tradition of BIBREF1 and BIBREF2 BIBREF3 implicitly assume a fundamental and pervasive role for theory of mind mechanisms The meaning of an utterance is established against a backdrop of inference intention and common ground knowledge that is taken to be shared by both parties BIBREF4 BIBREF5 This view of adults as natural mindreaders is consistent with extensive evidence from the psycholinguistics literature for instance we spontaneously calibrate our referential expressions to our intended audiences BIBREF6 and make use of partnerspecific history BIBREF7 BIBREF8 Yet in other cases the evidence appears to be more consistent with a more egocentric or reflexively mindblind view of language processing BIBREF9 BIBREF10 BIBREF11 BIBREF12 Under this view although adults have the ability to deploy theory of mind it is effortful and costly to do so Thus people may initially anchor on their own perspective and only adjust to account for other perspectives when a problem arises and when sufficient cognitive resources are available Much of this debate has centered around the influential directormatcher paradigm a variant of classic reference games BIBREF13 where a confederate speaker gives participants instructions about how to move objects around a grid By introducing an asymmetry in visual accesscertain cells of the grid are covered such that participants can see objects that the speaker cannot eg Fig 1 BIBREF14 designed a task to expose cases where participants listeners either succeed or fail to take into account what the speaker sees In particular BIBREF14 argued that if listeners were reliably using theory of mind they would only consider mutually visible objects as possible referents For instance on one trial a roll of Scotch tape was mutually visible and a cassette tape was hidden from the speakers view When the confederate speaker produced an ambiguous utterance tape participants should still interpret it as a reference to the mutually visible object even if it fits the hidden object better the idea is that a speaker who cannot see an object wouldnt possibly be referring to it While the visual asymmetries constructed by BIBREF14 may provide the starkest test of this hypothesis variations on this basic paradigm have manipulated other dimensions of nonvisual knowledge asymmetry including those based on spoken information BIBREF15 BIBREF16 spatial cues BIBREF17 BIBREF18 private pretraining on object labels BIBREF19 cultural background BIBREF20 and other taskrelevant information BIBREF21 BIBREF22 Questions about speaker perspectivetaking during production have similarly been explored by reversing the direction of the asymmetry so the speaker has private knowledge that the 
listener does not and examining whether this private information leaks into their utterances BIBREF23 BIBREF24 BIBREF25 BIBREF26 BIBREF27 BIBREF28 BIBREF29 Numerous rounds of reinterpretation and methodological criticism have puzzled over seemingly contradictory findings in this sprawling body of work some studies find strong evidence consistent with an egocentric viewlisteners initially consider and even attempt to move such objectswhile others find that information from the speakers perspective is integrated from the very earliest stages of processing BIBREF30 BIBREF31 Recent computational models have begun to unify this literature under a probabilistic framework For instance some models assume that listeners BIBREF32 and speakers BIBREF33 simultaneously integrate their own perspective with that of their partner leading to behavior that lies between purely egocentric and purely guided by common ground These constraintbased models BIBREF34 BIBREF35 introduce a probabilistic weighting parameter between the two domains of reference and show that an intermediate weighting explains the gradient of communicative behavior better than a purely egocentric or purely perspectiveadopting model Yet these constraintbased models leave open a key puzzle for rational models of language use why do people use the proportion they do in a given context In other words while different factors influencing the weighting have been proposed no formal mechanism yet explains why incorporating egocentric knowledge would be adaptive when full common ground is available We argue in this paper for a resource rational account of perspectivetaking in communication BIBREF36 BIBREF37 In a communicative interaction with another agent the participants share the goal of successfully being understood while minimizing joint effort BIBREF38 BIBREF4 If theory of mind use is indeed effortful and cognitively demanding to some degree BIBREF39 BIBREF40 BIBREF41 then the question for a rational agent is when and how to best allocate its cognitive resources to achieve its goals This sets up a natural division of labor between the speaker and listener in how the effort should be shared which in principle admits many solutions Rather than being guided by rigid heuristics individuals may rationally and adaptively calibrate their perspectivetaking based on expectations about their partners likely behavior Critically these expectations may themselves be derived from a targeted use of theory of mind Here we explore one particular source of expectations derived from Gricean expectations of informativity which have been largely neglected by prior work in the perspectivetaking literature BIBREF42 Just as making sense of an agents physical behaviors requires a broad accurate mental model of how the agents visual access beliefs and intentions translate into motor plans BIBREF43 BIBREF44 making sense of an agents linguistic behaviors depends on an accurate model of what a speaker would say or what a listener would understand in different situations BIBREF45 BIBREF46 BIBREF47 BIBREF48 BIBREF49 From this perspective theory of mind use not only incorporates peoples mental models of a partners knowledge or visual access but also their inferences about how their partner would behave in a communicative context To instantiate this account we elaborate the family of probabilistic weighting models by proposing that theory of mind use under knowledge asymmetries not only involves integrating a partners knowledge but also recursive reasoning about how they 
will likely produce or interpret utterances in particular communicative contexts BIBREF50 The Gricean notion of cooperativity BIBREF3 BIBREF4 refers to the idea that speakers try to avoid saying things that are confusing or unnecessarily complicated given the current context and that listeners expect this For instance imagine trying to help someone spot your dog at a busy dog park It may be literally correct to call it a dog but as a cooperative speaker you would understand that the listener would have trouble disambiguating the referent from many other dogs Likewise the listener would reasonably expect you to say something more informative than dog in this context You may therefore prefer to use a more specific or informative expressions like the little terrier with the blue collar BIBREF7 BIBREF51 Critically you might do so even when you happen to see only one dog at the moment but know there are likely to be other dogs from the listeners point of view In the presence of uncertainty about their partners visual context a cooperative speaker may tend toward additional specificity Now what level of specificity is pragmatically appropriate in the particular directormatcher task used by BIBREF52 This task requires the speaker to generate a description such that a listener can identify the correct object among distractors even though several cells are hidden from the speakers view eg Fig 2 bottom It is thus highly salient to the speaker that there are hidden objects she cannot see but her partner can Gricean reasoning as realized by recent formal models BIBREF46 BIBREF47 BIBREF49 predicts that a speaker in this context will compensate for her uncertainty about the listeners visual context by increasing the informativity of her utterance beyond what she would produce in a completely shared context See Appendix A for a formal model of pragmatic reasoning in this situation and a mathematical derivation of the informativity prediction The directormatcher task used by BIBREF52 is therefore not only challenging for the listener it also requires a sophisticated use of theory of mind vis a vis pragmatic reasoning on the part of the speaker to understand that the listener may expect her to increase the informativity of her utterance While extensive prior work has examined how speakers adjust their utterances or not depending on their own private information it remains untested how they pragmatically compensate for their lack of access to the listeners private information by flexibly modifying their informativity In the following experiments we ask whether people as speakers show such sensitivity to their own uncertainty about their partners visual access Furthermore we suggest that such sensitivity and the listeners expectations about this sensitivity can help us understand why listeners in prior work eg in the DirectorMatcher task made frequent errors A listeners rational reliance on the speakers informativity which allows them to efficiently neglect the speakers visual access under cognitive load may backfire and lead to errors when paired with a confederate speaker who violates Gricean expectations First we directly test our models prediction by manipulating the presence and absence of occlusions in a simple interactive naturallanguage reference game Second we conduct a replication of BIBREF52 with an additional unscripted condition to evaluate whether the scripted referring expressions used by confederate speakers in prior work accord with what a real speaker would say in the same interactive 
context BIBREF54 BIBREF55 BIBREF56 If confederate speakers were using scripts that were uncooperative and underinformative compared to what speakers naturally say this previously unrecognized violation of Gricean expectations may have implications for the rational basis of listener errors Our main goal here is to directly establish the adaptive pragmatic behavior of speakers It is important to note that our broader claim about the source of listener errors emerges from establishing the plausibility of a resourcerational basis for perspectiveneglect showing that speakers are adaptive Exp1 and listeners indeed make more errors when speakers violate their expectations Exp2 causally manipulating listener expectations is beyond the scope of the current work We return to the broader implications and predictions of this account in the discussion How does an unscripted speaker change her communicative behavior when there is uncertainty about exactly what her partner can see To address this question empirically we randomly assigned participants to the roles of speaker and listener and paired them over the web to play an interactive communication task BIBREF57 We recruited 102 pairs of participants from Amazon Mechanical Turk and randomly assigned speaker and listener roles After we removed 7 games that disconnected partway through and 12 additional games according to our preregistered exclusion criteria due to being nonnative English speakers reporting confusion about the instructions or clearly violating the instructions we were left with a sample of 83 full games On each trial both players were presented with a 3times 3 grid containing objects One target object was privately highlighted for the speaker who freely typed a message into a chat box in order to get the listener to click the intended referent The objects varied along three discrete features shape texture and color each of which took four discrete values 64 possible objects See Appendix Fig 7 for a screenshot of the interface There were four types of trials forming a withinpair 2 times 2 factorial design We manipulated the presence or absence of occlusions and the closeness of shared distractors to the target see Fig 2 On shared trials all objects were seen by both participants but on hidden trials two cells of the grid were covered with occluders curtains such that only the listener could see the contents of the cell On far trials the target is the only object with a particular shape on close trials there is also a shared distractor with the targets shape differing only in color or texture In order to make it clear to the speaker that there could really be objects behind the occluders without providing a statistical cue to their identity or quantity on any particular trial we randomized the total number of distractors in the grid on each trial between 2 and 4 as well as the number of those distractors covered by curtains 1 or 2 If there were only two distractors we did not allow both of them to be covered there was always at least one visible distractor Each trial type appeared 6 times for a total of 24 trials and the sequence of trials was pseudorandomized such that no trial type appeared more than twice in each block of eight trials Participants were instructed to use visual properties of the objects rather than spatial locations in the grid Finally we collected mousetracking data analogous to the eyetracking common in referential paradigms We asked the matcher to wait until the director sent a message when the message was received 
Finally, we collected mouse-tracking data, analogous to the eye-tracking common in referential paradigms. We asked the matcher to wait until the director sent a message; when the message was received, the matcher clicked a small circle in the center of the grid to reveal the objects and proceed with the trial. We recorded from the matcher's mouse at 100 Hz during the decision window, from this click until the point where they clicked and started to drag one of the objects. While we did not intend to analyze these data for Exp. 1, we anticipated using them in our second experiment below and wanted to use the same procedure across experiments for consistency.

We recruited 200 pairs of participants from Amazon Mechanical Turk; 58 pairs were unable to complete the game due to a server outage. Following our preregistered exclusion criteria, we removed 24 games whose participants reported confusion, violated our instructions, or made multiple errors on filler items, as well as 2 additional games containing non-native English speakers. This left 116 pairs in our final sample.

The materials and procedure were chosen to be as faithful as possible to those reported in BIBREF52 while allowing for interaction over the web. Directors used a chat box to communicate where to move a privately cued target object in a 4x4 grid (see Fig. 1); the listener then attempted to click and drag the intended object. In each of 8 object sets, mostly containing filler objects, one target belonged to a critical pair of objects, such as a visible cassette tape and a hidden roll of tape, either of which could plausibly be called "the tape." We displayed instructions to the director as a series of arrows pointing from some object to a neighboring unoccupied cell. Trials were blocked into eight sets of objects with four instructions each. As in BIBREF52, we collected baseline performance by replacing the hidden alternative (e.g., a roll of tape) with a filler object that did not fit the critical instruction (e.g., a battery) in half of the critical pairs. The assignment of items to conditions was randomized across participants, and the order of conditions was randomized under the constraint that the same condition would not be used on more than two consecutive items. All object sets, object placements, and corresponding instruction sets were fixed across participants. In case of a listener error, the object was placed back in its original position, both participants were given feedback, and they were asked to try again.

We used a between-subjects design to compare the scripted labels used by confederate directors in prior work against what participants naturally say in the same role. For participants assigned to the director role in the scripted condition, a pre-scripted message using the precise wording from BIBREF52 automatically appeared in their chat box on half of trials (the 8 critical trials as well as nearly half of the fillers); hence the scripted condition served as a direct replication. To maintain an interactive environment, the director could freely produce referring expressions on the remainder of filler trials. In the unscripted condition, directors were unrestricted and free to send whatever messages they deemed appropriate on all trials. In addition to analyzing messages sent through the chat box and errors made by matchers (listeners), we collected mouse-tracking data in analogy to the eye-tracking common in these paradigms.

Our primary measure of speaker behavior is the length, in words, of naturally produced referring expressions sent through the chat box. We tested differences in speaker behavior across conditions using a mixed-effects regression of context and occlusion on the number of words produced, with maximal random-effect structure containing intercepts, slopes, and their interaction.
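This analysis can be sketched as follows. The published analysis was presumably fit in R with lme4; the snippet below is a rough statsmodels equivalent, assuming hypothetical column names (`n_words`, `close`, `occluded`, `pair_id`) rather than the released analysis scripts.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per trial, with hypothetical columns:
#   n_words  - length of the referring expression in words
#   close    - 1 if a shared distractor matches the target's shape, else 0
#   occluded - 1 if some cells are hidden from the speaker, else 0
#   pair_id  - identifier for the speaker-listener dyad
df = pd.read_csv("exp1_trials.csv")  # hypothetical file

# Mixed-effects regression with by-dyad random intercepts, slopes, and their
# interaction (the "maximal" random-effect structure described above).
model = smf.mixedlm(
    "n_words ~ close * occluded",
    data=df,
    groups=df["pair_id"],
    re_formula="~close * occluded",
)
result = model.fit()
print(result.summary())  # fixed-effect estimates (b) and tests as reported below
```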
First, as a baseline, we examined the simple effect of close vs. far contexts in trials with no occlusions. We found that speakers used significantly more words on average when there was a distractor in context that shared the same shape as the target (b = 0.56, t = 5.1, p < 0.001; see Fig. 3A), replicating the findings of prior studies in experimental pragmatics BIBREF7, BIBREF58. Next, we turn to the simple effect of occlusion in far contexts, which are most similar to the displays used in the director-matcher task we adopt in Exp. 2 BIBREF52. Speakers used 1.25 additional words on average when they knew their partner could potentially see additional objects (t = 7.5, p < 0.001). Finally, we found a significant interaction (b = 0.49, t = 3.8, p < 0.001), such that the effect of occlusion was larger in far contexts, likely indicating a ceiling on the level of informativity required to individuate objects in our simple stimulus space.

What are these additional words used for? As a secondary analysis, we annotated each utterance according to which of the three object features it mentioned (shape, texture, color). Because speakers nearly always mentioned shape (e.g., "star," "triangle") as the head noun of their referring expression regardless of context (~99% of trials), differences in utterance length across conditions must be due to differential mention of the other two features, color and texture. To test this observation, we ran separate mixed-effects logistic regressions for color and texture, predicting mention from context (due to convergence issues, the maximal random-effect structure supported by our data contained only speaker-level intercepts and slopes for the occlusion effect). We found simple effects of occlusion in far contexts for both features (b = 1.33, z = 2.9, p = 0.004 for color; b = 4.8, z = 6.4, p < 0.001 for texture; see Fig. 3B). In other words, in displays like the left column of Fig. 2, where the target was the only star, speakers were somewhat more likely to produce the star's color, and much more likely to produce its texture, when occlusions were present, even though shape alone was sufficient to disambiguate the target from visible distractors in both cases. Finally, we note that listener errors were rare: 88% of listeners made one or fewer errors out of 24 trials, and there was no significant difference in error rates across the four conditions (χ²(3) = 1.23, p = 0.74). We test the connections between context-sensitive speaker behavior and listener error rates more explicitly in Exp. 2.

While our behavioral results provide qualitative support for a Gricean account over an egocentric account, formalizing these two accounts in computational models allows a stronger test of our hypothesis by generating graded, quantitative predictions. We formalized both accounts in the probabilistic Rational Speech Act (RSA) framework BIBREF47, BIBREF46, BIBREF49, BIBREF59, BIBREF48, which has successfully captured a variety of other pragmatic phenomena. In this framework, speakers are decision-theoretic agents attempting to soft-maximize a utility function balancing parsimony (i.e., a preference for shorter, simpler utterances) with informativeness (i.e., the likelihood of an imagined listener agent arriving at the intended interpretation). The only difference between the two accounts in the RSA framework is how the asymmetry in visual access is handled: the occlusion-blind speaker simply assumes that the listener sees the same objects as she herself sees, while the occlusion-sensitive speaker represents uncertainty over her partner's visual context; in particular, she assumes a probability distribution over the possible objects that might be hidden behind the occlusions and attempts to be informative on average.
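The contrast between the two speaker models can be sketched in a few lines. This is a schematic implementation under simplifying assumptions (boolean semantics, uniform priors, no utterance costs); it is not the authors' fitted model.

```python
import math

def literal_listener(u, context, sem):
    """P(object | u, context): uniform prior renormalized over objects u is true of."""
    truths = [sem(o, u) for o in context]
    total = sum(truths)
    return [t / total for t in truths]

def blind_speaker(target, context, utterances, sem, alpha=5.0):
    """Occlusion-blind speaker: utility is informativeness in the visible context only."""
    utils = []
    for u in utterances:
        p = literal_listener(u, context, sem)[context.index(target)]
        utils.append(math.exp(alpha * math.log(p)) if p > 0 else 0.0)
    z = sum(utils)
    return [x / z for x in utils]

def sensitive_speaker(target, visible, hidden_space, utterances, sem, alpha=5.0):
    """Occlusion-sensitive speaker: expected informativeness, marginalizing over
    which object (sampled uniformly here) sits behind the occluder."""
    utils = []
    for u in utterances:
        exp_util = 0.0
        for o_h in hidden_space:
            ctx = visible + [o_h]
            p = literal_listener(u, ctx, sem)[ctx.index(target)]
            exp_util += (math.log(p) if p > 0 else float("-inf")) / len(hidden_space)
        utils.append(math.exp(alpha * exp_util))
    z = sum(utils)
    return [x / z for x in utils]

# Toy example: objects are (shape, color) tuples; an utterance is a set of features.
sem = lambda o, u: 1.0 if all(f in o for f in u) else 0.0
target = ("star", "blue")
visible = [target, ("circle", "red")]
hidden_space = [("star", "red"), ("circle", "blue"), ("square", "green")]
utts = [("star",), ("star", "blue")]

print(blind_speaker(target, visible, utts, sem))        # ties: both pick out the target
print(sensitive_speaker(target, visible, hidden_space, utts, sem))  # prefers "blue star"
```

On this toy display, both utterances uniquely identify the target among the visible objects, so the occlusion-blind speaker is indifferent between them; the occlusion-sensitive speaker shifts probability toward the more specific "blue star" because a hidden red star would make the bare "star" ambiguous.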
The two models have the same four free parameters: a speaker optimality parameter controlling the soft-max temperature, and three parameters controlling the costs of producing the features of shape, color, and texture (see Appendix B for details). We conducted a Bayesian data analysis to infer these parameters, conditioning on our empirical data, and computed a Bayes factor to compare the models. We found extremely strong support for the occlusion-sensitive model relative to the occlusion-blind model (BF = 2.2 × 10^209; see Appendix Fig. 8 for the likelihoods).
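One way to sketch this model comparison is a grid approximation to each model's marginal likelihood. The snippet below is a schematic under assumed uniform grid weights, with placeholder likelihood functions; the authors' actual inference procedure may differ.

```python
import math
import numpy as np

def log_marginal_likelihood(log_lik, param_grid, log_prior):
    """Grid approximation to log p(data | model) = log \int p(data|theta) p(theta) dtheta.

    log_lik(theta)   -> log-likelihood of all observed utterance choices (placeholder)
    param_grid       -> list of candidate parameter vectors theta
    log_prior(theta) -> log prior density; grid points are weighted uniformly here
    """
    terms = np.array([log_lik(th) + log_prior(th) for th in param_grid])
    return np.logaddexp.reduce(terms) - math.log(len(terms))

# Hypothetical usage with grids over (alpha, cost_shape, cost_color, cost_texture):
#   grid = [(a, cs, cc, ct) for a in alphas for cs in costs
#                           for cc in costs for ct in costs]
#   log_bf = (log_marginal_likelihood(loglik_sensitive, grid, log_prior)
#             - log_marginal_likelihood(loglik_blind, grid, log_prior))
# A log Bayes factor of about 482 corresponds to the reported BF of 2.2 x 10^209.
```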
To examine the pattern of behavior of each model, we computed the posterior predictive over the expected number of features mentioned in each trial type of our design. While the occlusion-blind speaker model successfully captured the simple effect of close vs. far contexts, it failed to account for behavior in the presence of occlusions; the occlusion-sensitive model, on the other hand, accurately accounted for the full pattern of results (see Fig. 4). Finally, we examined parameter posteriors for the occlusion-sensitive model (see Appendix Fig. 9): the inferred production cost for texture was significantly higher than that for the other features, reflecting the asymmetry in production of texture relative to color.

Experiment 1 directly tested the hypothesis that speakers increase their specificity in contexts with asymmetry in visual access. We found that speakers are not only context-sensitive, choosing referring expressions that distinguish the target from distractors in a shared context, but also occlusion-sensitive, adaptively compensating for uncertainty. Critically, this resulted in systematic differences in behavior across the occlusion conditions that are difficult to explain under an egocentric theory: in the presence of occlusions, speakers were spontaneously willing to spend additional time and keystrokes to give further information beyond what they produced in the corresponding unoccluded contexts, even though that information is equally redundant given the visible objects in their display. These results validate our prediction that speakers appropriately increase their level of specificity in contexts containing occlusions.

In Experiment 2, we recruited pairs of participants for an online interactive version of the original director-matcher task BIBREF52, which used occluded contexts to demonstrate limits on visual perspective-taking for the listener. Given the results of Exp. 1, we predicted that participants in the director role (i.e., speakers) would naturally provide more informative referring expressions than the confederate directors used in prior work. This would suggest that the confederate directors in prior work were pragmatically infelicitous, violating listeners' expectations; this violation of listeners' cooperative expectations may have had detrimental consequences for listener performance.

Our scripted condition successfully replicated the results of BIBREF52, with even stronger effects: listeners incorrectly moved the hidden object on approximately 50% of critical trials. However, on unscripted trials the listener error rate dropped by more than half (p1 = 0.51, p2 = 0.20; χ²(1) = 43, p < 0.001; Fig. 5A). While we found substantial heterogeneity in error rates across object sets (just 3 of the 8 object sets accounted for the vast majority of remaining unscripted errors; see Appendix Fig. 10), listeners in the unscripted condition made fewer errors for nearly every critical item. In a maximal logistic model with a fixed effect of condition, random intercepts for each dyad, and random slopes and intercepts for each object set, we found a significant difference in error rates across conditions (z = 2.6, p = 0.008).

Even if participants in the unscripted condition make fewer actual errors, they may still be considering the hidden object just as often on trials where they go on to make correct responses. As a proxy for the eye-tracking analyses reported by BIBREF52, we conducted a mouse-tracking analysis: we computed the mean logged amount of time spent hovering over the hidden distractor and found a significant interaction between condition and the contents of the hidden cell (t = 3.59, p < 0.001; Fig. 5B), in a mixed-effects regression using dyad-level and object-level random intercepts and slopes for the difference from baseline. Listeners in the scripted condition spent more time hovering over the hidden cell when it contained a confusable distractor, relative to baseline, again replicating BIBREF52; in the unscripted condition, there was no difference from baseline.

Next, we tested whether these improvements in listener performance in the unscripted condition were accompanied by more informative speaker behavior than the scripted utterances allowed. The simplest measure of speaker informativity is the raw number of words used in referring expressions. Compared to the scripted referring expressions, speakers in the unscripted condition used significantly more words to refer to critical objects (b = 0.54, t = 2.6, p = 0.019, in a mixed-effects regression on difference scores using a fixed intercept and random intercepts for objects and dyads). However, this is a coarse measure: for example, the shorter "Pyrex glass" may be more specific than "large measuring glass" despite using fewer words. For a more direct measure, we extracted the referring expressions generated by speakers on all critical trials and standardized spelling and grammar, yielding 122 unique labels after including scripted utterances. We then recruited an independent sample of 20 judges on Amazon Mechanical Turk to rate how well each label fit the target and hidden distractor objects, on a slider from "strongly disagree" (the label doesn't match the object at all) to "strongly agree" (the label matches the object perfectly). Judges were shown objects in the context of the full grid with no occlusions, such that they could feasibly judge spatial or relative references like "bottom block." We excluded 4 judges for guessing, with response times under 1 s. Inter-rater reliability was relatively high, with an intraclass correlation coefficient of 0.54 (95% CI [0.47, 0.61]).

We computed the informativity of an utterance (e.g., "the tape") as the difference in how well it was judged to apply to the target (the cassette tape) relative to the distractor object (the roll of tape). Our primary measure of interest is the difference in informativity across scripted and unscripted utterances. We found that speakers in the unscripted condition systematically produced more informative utterances than the scripted utterances (d = 0.5, 95% bootstrapped CI [0.27, 0.77], p < 0.01; see Appendix C for details). Scripted labels fit the hidden distractor just as well as or better than the target, but unscripted labels fit the target better and the hidden distractor much worse (see Fig. 6A). In other words, the scripted labels used in BIBREF52 were less informative than the expressions speakers would normally produce to refer to the same object in this context. These results strongly suggest that the speaker's informativity influences listener accuracy.
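The informativity measure and its relation to listener errors can be sketched as follows, assuming hypothetical file and column names (`label`, `object_role`, `rating`, `error_rate`) rather than the released analysis scripts.

```python
import pandas as pd

# One row per (judge, label, object) rating; object_role marks whether the rated
# object was the intended target or the hidden distractor (hypothetical columns).
ratings = pd.read_csv("label_ratings.csv")

# Informativity of a label = mean judged fit to target minus mean fit to distractor.
fit = ratings.pivot_table(index="label", columns="object_role", values="rating")
fit["informativity"] = fit["target"] - fit["distractor"]

# Correlate informativity with per-item listener error rates.
items = pd.read_csv("item_error_rates.csv")  # hypothetical columns: label, error_rate
merged = items.merge(fit.reset_index()[["label", "informativity"]], on="label")
print(merged["informativity"].corr(merged["error_rate"], method="spearman"))
```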
In support of this hypothesis, we found a strong negative correlation between informativity and error rates across items and conditions: listeners made fewer errors when utterances were a better fit for the target relative to the distractor (ρ = -0.81, bootstrapped 95% CI [-0.9, -0.7]; Fig. 6B). This result suggests that listener behavior is driven by an expectation of speaker informativity: listeners interpret utterances in proportion to how well they fit objects in context.

Are human adults expert mind-readers or fundamentally egocentric? The long-standing debate over the role of theory of mind in communication has largely centered on whether listeners or speakers with private information consider their partner's perspective BIBREF30, BIBREF16. Our work presents a more nuanced picture of how a speaker and a listener use theory of mind to modulate their pragmatic expectations. The Gricean cooperative principle emphasizes a natural division of labor in how the joint effort of being cooperative is shared BIBREF4, BIBREF60: it can be asymmetric when one partner is expected to, and able to, take on more complex reasoning than the other, in the form of visual perspective-taking, pragmatic inference, or avoiding further exchanges of clarification and repair. One such case is when the speaker has uncertainty over what the listener can see, as in the director-matcher task. Our Rational Speech Act (RSA) formalization of cooperative reasoning in this context predicts that speakers (directors) naturally increase the informativity of their referring expressions to hedge against the increased risk of misunderstanding, and Exp. 1 presents direct evidence in support of this hypothesis.

Importantly, when the director (speaker) is expected to be appropriately informative, communication can be successful even when the matcher (listener) does not reciprocate the effort. If visual perspective-taking is effortful and cognitively demanding BIBREF39, the matcher will actually minimize joint effort by not taking the director's visual perspective. This suggests a less egocentric explanation of when and why listeners neglect the speaker's visual perspective: they do so when they expect the speaker to disambiguate referents sufficiently. While adaptive in most natural communicative contexts, such neglect might backfire and lead to errors when the speaker inexplicably violates this expectation. From this point of view, the failure of listener theory of mind in these tasks is not really a failure; instead, it suggests that both speakers and listeners may use theory of mind to know when, and how much, they should expect others to be cooperative and informative, and to allocate their resources accordingly BIBREF36. Exp. 2 is consistent with this hypothesis: when directors used underinformative scripted instructions taken from prior work, listeners made significantly more errors than when speakers were allowed to provide referring expressions at their natural level of informativity, and speaker informativeness strongly modulated listener error rates.

Our work adds to the growing literature on the role of pragmatics in the director-matcher task. A recent study questioned the communicative nature of the task itself by showing that selective attention alone is sufficient for successful performance, and that listeners become suspicious of the director's visual access when the director shows unexpectedly high levels of specificity in their referring expressions BIBREF61. Our results further bolster the argument that pragmatic reasoning about appropriate levels of informativity is an integral aspect of theory of mind use in the director-matcher task, and in communication more generally.
Note, however, that in BIBREF61 participants became suspicious, while in our study participants over-trusted the speaker to be informative; a more detailed look at differences between experimental paradigms, as well as further experimental work, is necessary to better understand why participants had different expectations about the speaker. Prior work also suggests that although speakers tend to be over-informative in their referring expressions BIBREF62, a number of situational factors (e.g., perceptual salience of referents) can modulate this tendency. Our work hints at an additional principle that guides speaker informativity: speakers maintain uncertainty about the listener's visual context and their ability to disambiguate the referent in that context. Additionally, while our model builds on probabilistic models weighting different perspectives BIBREF32, BIBREF33, we leave the formal integration of resource-rational recursive reasoning mechanisms with perspective-weighting mechanisms for future work. While BIBREF33 focused on cases where the speaker has private information unknown to the listener, our model focuses on the reverse case: how speakers behave when they know that the listener has additional private information BIBREF52. Furthermore, whether the allocation of resources and the ensuing perspective neglect is a fixed strategy or one that adjusts dynamically remains an open question: given sufficient evidence of an unusually underinformative partner, listeners may realize that vigilance about which objects are occluded yields a more effective strategy for the immediate interaction. An important direction for future work is to directly explore listener adaptability in adjusting their use of visual perspective-taking as a function of Gricean expectations for a given partner BIBREF63, BIBREF64.

In sum, our findings suggest that language use is well adapted to contexts of uncertainty and knowledge asymmetry. The pragmatic use of theory of mind to establish a division of labor is also critical for other forms of social cooperation, including pedagogy BIBREF65 and team-based problem solving BIBREF66, BIBREF67. Enriching our notion of theory-of-mind use to encompass these pragmatic expectations, not only expectations about what our partner knows or desires, may shed new light on the flexibility of social interaction more broadly.

This manuscript is based in part on work presented at the 38th Annual Conference of the Cognitive Science Society. The first author is supported by an NSF Graduate Research Fellowship and a Stanford Graduate Fellowship. A pilot of Exp. 2 was originally conducted under the supervision of Michael Frank, with early input from Desmond Ong. We're grateful to Boaz Keysar for providing select materials for our replication. This work was supported by ONR grants N00014-13-1-0788 and N00014-13-1-0287 and a James S. McDonnell Foundation Scholar Award to NDG. RXDH and NDG initially formulated the project; RXDH performed the experiments, analyzed the data, and performed the computational modeling; all authors planned the experiments, interpreted the results, and wrote the paper. Unless otherwise mentioned, all analyses and materials were preregistered at https://osf.io/qwkmp. Code and materials for reproducing the experiment, as well as all data and analysis scripts, are open and available at https://github.com/hawkrobe/pragmaticsofperspectivetaking.

Our experiments are motivated by the Gricean observation that speakers should attempt to be more informative when there is an asymmetry in visual access, such that their partner sees something they do not.
In this appendix, we formalize this scenario in a computational model of communication as recursive social reasoning and prove that the predicted increase in informativity qualitatively holds under fairly unrestrictive conditions. Following recent advances in the Rational Speech Act (RSA) framework, we define a speaker as a decision-theoretic agent who must choose a referring expression $u$ to refer to a target object $o$ in a context $C$ by soft-maximizing a utility function $U$:

$$S(u \mid o, C) \propto \exp\lbrace \alpha\, U(u, o, C)\rbrace$$

Definition. The basic utility used in RSA models captures the informativeness of each utterance to an imagined literal listener agent $L$, who attempts to select the target object from the alternatives in context:

$$U_{\textrm{basic}}(u, o, C) = \log L(o \mid u, C)$$

This information-theoretic expression measures how certain the listener becomes about the intended object after hearing the utterance. The literal listener is assumed to update their beliefs about the target object by Bayesian inference, conditioning on the literal meaning of the utterance being true of it:

$$L(o \mid u, C) \propto \mathcal{L}(o, u)\, P(o)$$

where normalization takes place over objects $o \in C$ and $\mathcal{L}$ represents the lexical semantics of $u$: if $u$ is true of $o$, then $\mathcal{L}(o, u) = 1$; otherwise $\mathcal{L}(o, u) = 0$.

This basic setup assumes that the speaker reasons about a listener who shares the same context $C$ in common ground. How should it be extended to handle asymmetries in visual access between the speaker and listener, where the speaker has uncertainty over the possible distractors behind the occlusions? In the RSA framework, speaker uncertainty is represented straightforwardly by a prior over the state of the world; for example, BIBREF48 examined a case where the speaker has limited perceptual access to the objects they are describing. For the director-matcher task, we construct this prior by positing a space of alternative objects $\mathcal{O}$, introducing uncertainty $P(o_h)$ over which object $o_h \in \mathcal{O}$, if any, is hidden behind an occlusion, and marginalizing over these alternatives when reasoning about the listener.

Definition. This gives us a utility for conditions of asymmetric visual access:

$$U_{\textrm{asym}}(u, o, C) = \sum_{o_h \in \mathcal{O}} P(o_h) \log L(o \mid u, C \cup \lbrace o_h \rbrace)$$

where $C$ denotes the set of objects in context that the speaker perceives. We define specificity extensionally, in the sense that if $u_0$ is more specific than $u_1$, then the set of objects for which $u_0$ is true is a subset of the set for which $u_1$ is true.

Definition. Utterance $u_0$ is said to be more specific than $u_1$ iff $\mathcal{L}(o_h, u_0) \le \mathcal{L}(o_h, u_1)$ for all $o_h \in \mathcal{O}$, and there exists a subset of objects $\mathcal{O}^{\prime} \subset \mathcal{O}$ such that $\sum_{o \in \mathcal{O}^{\prime}} P(o) > 0$ and $\mathcal{L}(o, u_0) < \mathcal{L}(o, u_1)$ for $o \in \mathcal{O}^{\prime}$.

We now show that the recursive reasoning model predicts that speakers should prefer more informative utterances in contexts with occlusions; in other words, that the asymmetry utility leads to a preference for more specific referring expressions than the basic utility.

Theorem. If $u_0$ is more specific than $u_1$, then the following holds for any target $o_t$ and shared context $C$:
$$\frac{S_{\textrm{asym}}(u_0 \mid o_t, C)}{S_{\textrm{asym}}(u_1 \mid o_t, C)} > \frac{S_{\textrm{basic}}(u_0 \mid o_t, C)}{S_{\textrm{basic}}(u_1 \mid o_t, C)}$$

Proof. Since $\frac{S(u_0 \mid o_t, C)}{S(u_1 \mid o_t, C)} = \exp\left(\alpha \cdot \left[ U(u_0, o_t, C) - U(u_1, o_t, C) \right]\right)$, it is sufficient to show that

$$U_{\textrm{asym}}(u_0, o_t, C) - U_{\textrm{asym}}(u_1, o_t, C) > U_{\textrm{basic}}(u_0, o_t, C) - U_{\textrm{basic}}(u_1, o_t, C).$$

We first break apart the sum on the left-hand side:

$$\begin{aligned}
U_{\textrm{asym}}(u_0, o_t, C) - U_{\textrm{asym}}(u_1, o_t, C)
&= \sum_{o_h \in \mathcal{O}} p(o_h) \left[ \log L(o_t \mid u_0, C \cup \lbrace o_h \rbrace) - \log L(o_t \mid u_1, C \cup \lbrace o_h \rbrace) \right] \\
&= \sum_{o^{\prime} \in \mathcal{O}^{\prime}} p(o^{\prime}) \log \frac{L(o_t \mid u_0, C \cup \lbrace o^{\prime} \rbrace)}{L(o_t \mid u_1, C \cup \lbrace o^{\prime} \rbrace)}
+ \sum_{o_h \in \mathcal{O} \setminus \mathcal{O}^{\prime}} p(o_h) \log \frac{L(o_t \mid u_0, C \cup \lbrace o_h \rbrace)}{L(o_t \mid u_1, C \cup \lbrace o_h \rbrace)}
\end{aligned} \qquad \textrm{(9)}$$

By the definition of "more specific," and because we defined $o \in \mathcal{O}^{\prime}$ to be precisely the subset of objects for which $\mathcal{L}(o, u_0) < \mathcal{L}(o, u_1)$, for objects $o_h$ in the complementary set $\mathcal{O} \setminus \mathcal{O}^{\prime}$ we have $\mathcal{L}(o_h, u_0) = \mathcal{L}(o_h, u_1)$. Therefore, for these objects, $L(o_t \mid u_i, C \cup \lbrace o_h \rbrace) = L(o_t \mid u_i, C)$, giving us $\log \frac{L(o_t \mid u_0, C)}{L(o_t \mid u_1, C)} \sum_{o_h \in \mathcal{O} \setminus \mathcal{O}^{\prime}} p(o_h)$ for the second sum. For the ratio in Eq. (9), we can substitute the definition of the listener $L$ and simplify:

$$\begin{aligned}
\frac{L(o_t \mid u_0, C \cup \lbrace o^{\prime} \rbrace)}{L(o_t \mid u_1, C \cup \lbrace o^{\prime} \rbrace)}
&= \frac{\mathcal{L}(o_t, u_0) \sum_{o \in C \cup \lbrace o^{\prime} \rbrace} \mathcal{L}(o, u_1)}{\mathcal{L}(o_t, u_1) \sum_{o \in C \cup \lbrace o^{\prime} \rbrace} \mathcal{L}(o, u_0)} \\
&= \frac{\mathcal{L}(o_t, u_0) \left[ \sum_{o \in C} \mathcal{L}(o, u_1) + \mathcal{L}(o^{\prime}, u_1) \right]}{\mathcal{L}(o_t, u_1) \left[ \sum_{o \in C} \mathcal{L}(o, u_0) + \mathcal{L}(o^{\prime}, u_0) \right]} \\
&> \frac{\mathcal{L}(o_t, u_0) \sum_{o \in C} \mathcal{L}(o, u_1)}{\mathcal{L}(o_t, u_1) \sum_{o \in C} \mathcal{L}(o, u_0)}
= \frac{L(o_t \mid u_0, C)}{L(o_t \mid u_1, C)}
\end{aligned}$$

where the strict inequality follows because $\mathcal{L}(o^{\prime}, u_0) < \mathcal{L}(o^{\prime}, u_1)$ for $o^{\prime} \in \mathcal{O}^{\prime}$; with boolean semantics, $\mathcal{L}(o^{\prime}, u_0) = 0$ and $\mathcal{L}(o^{\prime}, u_1) = 1$. Substituting back into Eq. (9), every term in the first sum strictly exceeds $p(o^{\prime}) \log \frac{L(o_t \mid u_0, C)}{L(o_t \mid u_1, C)}$, while the second sum contributes exactly this log-ratio weighted by the remaining probability mass. Since $\sum_{o^{\prime} \in \mathcal{O}^{\prime}} p(o^{\prime}) > 0$ and the weights sum to one, the left-hand side strictly exceeds $\log \frac{L(o_t \mid u_0, C)}{L(o_t \mid u_1, C)} = U_{\textrm{basic}}(u_0, o_t, C) - U_{\textrm{basic}}(u_1, o_t, C)$, as required. $\square$
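The theorem's prediction can be verified numerically on a small example. The following is an illustrative sanity check under the boolean semantics and uniform priors defined above, not the authors' model-fitting code.

```python
import math

def listener(target, u, context, sem):
    """Literal listener with a uniform prior: P(target | u, context)."""
    truths = [sem(o, u) for o in context]
    return sem(target, u) / sum(truths)

def u_basic(u, target, context, sem):
    return math.log(listener(target, u, context, sem))

def u_asym(u, target, context, hidden_space, p_hidden, sem):
    """Expected informativeness, marginalizing over the hidden object."""
    return sum(p * math.log(listener(target, u, context + [o_h], sem))
               for o_h, p in zip(hidden_space, p_hidden))

# Boolean semantics over (shape, color) objects; an utterance is a set of features.
sem = lambda o, u: 1.0 if all(f in o for f in u) else 0.0

target = ("star", "blue")
C = [target, ("circle", "red")]            # shared (visible) context
O = [("star", "red"), ("square", "green")] # possible hidden objects
pO = [0.5, 0.5]

u0, u1 = ("star", "blue"), ("star",)       # u0 is more specific than u1

diff_asym = u_asym(u0, target, C, O, pO, sem) - u_asym(u1, target, C, O, pO, sem)
diff_basic = u_basic(u0, target, C, sem) - u_basic(u1, target, C, sem)
assert diff_asym > diff_basic  # the theorem's inequality holds on this example
print(diff_asym, diff_basic)   # ~0.347 > 0.0
```

Here both utterances uniquely identify the target among the visible objects (so the basic utilities tie), but a hidden red star makes the bare "star" ambiguous half the time, so the asymmetry utility strictly favors the more specific "blue star," exactly as the theorem requires.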