$$\nabla_{\theta}\mathcal{L}_{RL}(\theta) \approx -\,\big(r(\hat{Y}) - b_t\big)\,\nabla_{\theta}\log p\big(\hat{y}_t \mid \hat{y}_{1:t-1}, X\big) \qquad \text{(Eq. 29)}$$

where $b_t$ is a baseline linear regression model used to reduce the variance of the gradients during training. $b_t$ predicts the future reward and is trained by minimizing mean squared error. The input to this predictor is the agent's hidden state $\mathbf{h}_t$; however, we do not backpropagate the error to $\mathbf{h}_t$. We refer the interested reader to BIBREF13 and BIBREF4 for more details.

Rather than starting from a state given by a random policy, we initialise the agent with a policy learnt by pre-training with the negative log-likelihood objective (BIBREF4, BIBREF32). The reinforcement learning objective is applied gradually, in combination with the log-likelihood objective, on each target block subsequence. Recall from Section "Encoder-Decoder Base Generator" that our document is segmented into blocks of equal size during training, which we denote as MaxBlock. When training begins, only the last $\mho$ tokens are predicted by the agent, while for the first $\textsc{MaxBlock} - \mho$ tokens we still use the negative log-likelihood objective. The number of tokens $\mho$ predicted by the agent is incremented by $\mho$ units every 2 epochs. We set $\mho = 3$, and training ends when $\textsc{MaxBlock} - \mho = 0$. Since we evaluate the model's predictions at the block level, the reward function is also evaluated at the block level.

We compared the performance of an encoder-decoder model trained with the standard negative log-likelihood method (ED) against a model trained with multi-task learning (ED_MTL) and reinforcement learning (ED_RL). We also included a template baseline system (Templ) in our evaluation experiments. The template generator used hand-written rules to realise property-value pairs. As an approximation for content selection, we obtained the 50 most frequent property names from the training set and manually defined content ordering rules with the following criteria. We ordered personal life properties (e.g., birth date or occupation) based on their most common order of mention in the Wikipedia abstracts. Profession-dependent properties (e.g., position or genre) were assigned an equal ordering, but posterior to the personal properties. We manually lexicalised properties into single-sentence templates to be concatenated to produce the final text. The template for the property position, and an example verbalisation for the property-value (position, defender) of the entity Zanetti, are "NAME played as POSITION" and "Zanetti played as defender", respectively.

In this paper we focused on the task of bootstrapping generators from large-scale datasets consisting of DBPedia facts and related Wikipedia biography abstracts. We proposed to equip standard encoder-decoder models with an additional content selection mechanism based on multi-instance learning, and developed two training regimes, one based on multi-task learning and the other on reinforcement learning. Overall, we find that the proposed content selection mechanism improves the accuracy and fluency of the generated texts. In the future, it would be interesting to investigate a more sophisticated representation of the input (BIBREF34). It would also make sense for the model to decode hierarchically, taking sequences of words and sentences into account (BIBREF41, BIBREF42).

We thank the NAACL reviewers for their constructive feedback. We also thank Xingxing Zhang, Li Dong and Stefanos Angelidis for useful discussions about implementation details. We gratefully acknowledge the financial support of the European Research Council (award number 681760).
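To make the block-level policy-gradient update in Eq. (29) concrete, here is a minimal PyTorch-style sketch. It is not the authors' implementation: the module names, sizes, and the stand-in GRU decoder are all illustrative. It shows the two ingredients described above, a baseline regressor over the detached hidden state trained with mean squared error, and a REINFORCE term that scales the log-likelihood of sampled tokens by the advantage r(Ŷ) − b_t.

```python
# Minimal sketch (not the authors' code) of REINFORCE with a learned baseline:
# b_t is a linear regressor over the *detached* decoder hidden state, trained
# with MSE against the block-level reward, and the policy-gradient term scales
# the log-likelihood of sampled tokens by the advantage (r - b_t).
import torch
import torch.nn as nn

hidden_size, vocab_size, block_len = 256, 500, 10      # illustrative sizes

decoder_cell = nn.GRUCell(vocab_size, hidden_size)     # stand-in decoder (one-hot inputs)
out_proj = nn.Linear(hidden_size, vocab_size)
baseline = nn.Linear(hidden_size, 1)                   # b_t: predicts the future reward

def rl_loss_for_block(h, reward):
    """h: initial hidden state (batch, hidden); reward: (batch,) block-level reward."""
    pg_terms, baseline_losses = [], []
    inp = torch.zeros(h.size(0), vocab_size)
    for _ in range(block_len):
        h = decoder_cell(inp, h)
        dist = torch.distributions.Categorical(logits=out_proj(h))
        y_hat = dist.sample()                          # sample the next token
        b_t = baseline(h.detach()).squeeze(-1)         # no gradient flows back into h_t
        advantage = (reward - b_t).detach()            # variance-reduced learning signal
        pg_terms.append(-advantage * dist.log_prob(y_hat))
        baseline_losses.append((b_t - reward) ** 2)    # MSE objective for the baseline
        inp = torch.nn.functional.one_hot(y_hat, vocab_size).float()
    return torch.stack(pg_terms).mean() + torch.stack(baseline_losses).mean()

loss = rl_loss_for_block(torch.zeros(4, hidden_size), reward=torch.rand(4))
loss.backward()
```

In a full system the reward would be the block-level score of the sampled sequence and the schedule over $\mho$ would decide how many of the block's tokens are trained with this loss rather than with negative log-likelihood.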
Competency Questions and SPARQL-OWL Queries Dataset and Analysis

Within the field of ontology engineering, Competency Questions (CQs) (BIBREF0) are natural language questions outlining the scope of knowledge represented by an ontology. They represent functional requirements in the sense that the developed ontology, or an ontology-based information system, should be able to answer them, hence contain all the relevant knowledge. For example, a CQ may be "What are the implementations of the C4.5 algorithm?", indicating that the ontology needs to contain classes such as Algorithm, and C4.5 as a subclass of Algorithm, and something about implementations, such that the answer to the CQ will be non-empty.

CQs are a part of several ontology engineering methodologies, yet the actual publication of CQs for the available ontologies is rather scarce. Even more scarce is the publication of the CQs' respective formalisation in terms of, e.g., SPARQL queries. This suggests CQs are not used widely as intended. We hypothesise that this may be due to the lack of common practices, templates, automation, and user tools that would support CQ formulation, formalisation, execution, and general management, or that it is still a fully manual process. For instance, even if one has specified CQs, there is no automatic way to translate them to, say, a SPARQL-OWL (BIBREF1) query for validation and verification, and not even a systematic manual way either.

There have been few attempts to analyse CQs. Ren et al. (BIBREF2) analysed CQs and their patterns to determine CQ archetypes, as tried by BIBREF3. Those patterns have a limited coverage, however, for they are based on CQ sets of at most two ontologies (Pizza and Software), which thus may contain domain bias, CQ author bias, and prejudiced patterns, as the Pizza CQs were created after the ontology. As a simple example of the latter issue, one could create a CQ "Which pizza has hot as spiciness?" that neatly fits with Pizza's hasSpiciness data property, or a more natural phrase "Which pizzas are hot?" that is fully agnostic of how it is represented in the ontology, be it with a data property, an object property, or a class. More generally, it suggests that Ren et al.'s CQ patterns, formulated alike "Which CE1 OPE CE2?", may not be appropriate as CQ patterns, as they presuppose which kind of element it would be in an ontology.

The manual process and free-form formulation of CQs by domain experts also runs into the problem that some turn out not to be translatable into a test over the ontology, for various reasons. For instance, the CQ "How can I get problems with X fixed?" of the Software Ontology cannot be answered by the declarative specification that the ontology is; or take the CQ for the DMOP ontology (BIBREF4): "Given a data mining task/data set, which of the valid or applicable workflows/algorithms will yield optimal results (or at least better results than the others)?". Assuming that the question may deal with an arbitrary dataset, not predefined upfront, this CQ may only be answered via performing data mining experiments, and not by the ontology itself. Therefore, without clear guidelines on what kind of CQs may be meaningfully expressed and used as a requirement specification for an ontology's content, their uptake and usage will likely remain limited.

This paper aims to contribute to addressing the engineering shortcomings of using CQs in ontology development. To clear up the CQ muddle and try to understand the relation between CQs and the queries over the ontology used to test the CQs on an ontology, we gather, analyse, and publicly release a larger set of CQs and their translations to
SPARQL-OWL for several ontologies in different domains, developed by different groups. For the analysis in particular, we seek to address the following research questions. A total of 234 CQs for 5 ontologies have been collected and translated into SPARQL-OWL queries and made available as a data resource. We analysed them in two principal ways. The first stage focused on a linguistic analysis of the natural language text itself, i.e., a lexico-syntactic analysis without any presuppositions of ontology elements, and a subsequent step of semantic analysis. This revealed 17 CQ patterns at the natural language layer. While a few patterns occur in multiple CQ sets, there are also patterns unique to a CQ set, supporting the expectation that a broad sampling is required to obtain a more representative set of patterns. The second phase consists of designing SPARQL-OWL queries for all CQs, where possible, and examining the signature of the queries. We found 46 query signatures resulting from the collected 131 SPARQL-OWL queries. The third step consists of the analysis of the relation between the CQ patterns and the SPARQL-OWL query signatures. This is, as hypothesised, a many-to-many relation: one CQ pattern may be realised by more than one SPARQL-OWL query, and there may be more than one CQ pattern for a SPARQL-OWL query signature.

The remainder of the paper is structured as follows. We first discuss related works on CQs and CQ patterns in Section SECREF2. Section SECREF3 is devoted to the linguistic analysis of CQs, and Section SECREF4 to the generation and analysis of the SPARQL-OWL queries. We discuss and return to the research questions in Section SECREF5 and conclude in Section SECREF6. The data is available from a Git repository at https://github.com/CQ2SPARQLOWL/Dataset.

The aim of the analysis of the CQs is to examine whether there are some popular linguistic structures that can be reused to specify requirements for, and validate, new and existing ontologies. This section describes the collection of the materials, the methods, and subsequently the results of the CQ analysis. We describe and motivate the materials first and then proceed to the methods and motivations thereof.

There are multiple ontologies available over the internet with competency questions provided, but since the focus of our research is on SPARQL-OWL queries, we selected only those ontologies with CQs stated against the ontology schema (TBox). As a result, we selected 5 ontologies with 234 competency questions in total. Table TABREF8 summarizes our dataset size and the source of each ontology. The Software Ontology (SWO) (BIBREF5) is included because its set of CQs is of substantial size and it was part of Ren et al.'s set of analysed CQs. The CQ sets of Dem@Care (BIBREF8) and OntoDT (BIBREF9) were included because they were available. CQs for the Stuff (BIBREF6) and African Wildlife (AWO) (BIBREF7) ontologies were added to the set because the ontologies were developed by one of the authors, therewith facilitating in-depth domain analysis if needed; they cover other topics and are of a different type (a tutorial ontology, AWO, and a core ontology, Stuff), thus contributing to maximising diversity in source selection.

In this section we carry out and examine the translation of CQs to a form that can be evaluated against an ontology. As a first preliminary observation, we observe that an OWL ontology can be serialized as an RDF/XML graph (BIBREF10) and thus queried using the SPARQL Query Language (BIBREF11). In its base form, SPARQL is basically a pattern matching language and, as such, does not provide any reasoning
capabilities; however, it is possible to introduce these by using SPARQL Entailment Regimes (BIBREF12). In particular, we employ the OWL 2 Direct Semantics Entailment Regime. Intuitively, it allows us to construct a SPARQL query such that its WHERE clause contains OWL axioms, possibly with some of its IRIs and literals replaced by SPARQL variables. The results of the execution of such a query are all the variable mappings such that the axioms obtained by applying these mappings to the axioms in the query are entailed by the queried ontology. SPARQL, being a query language for RDF, employs Turtle syntax (BIBREF13) to express Basic Graph Patterns (BGPs), and this convention is kept also for expressing OWL axioms, i.e., their RDF representation is used (BIBREF10). This is consistent with how the only available implementation behaves (BIBREF1, BIBREF14).

The second preliminary comment is that we note that, unlike Dennis et al.'s (BIBREF15) claim, CQs do not have to have specific presuppositions other than vocabulary, but queries do, for it is the queries that are specific to the ontology, the modelling style used, and other modelling decisions made. We can make this distinction here because of the separation of concerns between the linguistics of the CQs on the one hand, and the queries and ontology (how it is realised) on the other hand, rather than having the two combined as in BIBREF3, BIBREF2, BIBREF15.

This work was partly supported by the Polish National Science Center, Grant No. 2014/13/D/ST6/02076. Jędrzej Potoniec acknowledges support from the grant 09/91/DSPB/0627.
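As an illustration of the CQ-to-query translation discussed above, the sketch below assembles a SPARQL-OWL query string for a CQ of the form "What are the implementations of C4.5?". The IRIs and the query template are hypothetical placeholders (they are not the dataset's actual queries, which live in the repository linked earlier), and executing such a query requires an engine that supports the OWL 2 Direct Semantics entailment regime; the point is only to show a WHERE clause containing an OWL class expression with a variable in it.

```python
# Illustrative sketch of turning one CQ into a SPARQL-OWL query string.
# The ontology IRIs below are invented (example.org); the query asks, under the
# OWL 2 Direct Semantics entailment regime, for classes that are subclasses of
# the existential restriction "implements some <algorithm>".
QUERY_TEMPLATE = """\
PREFIX owl:  <http://www.w3.org/2002/07/owl#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT DISTINCT ?impl WHERE {{
  ?impl rdfs:subClassOf [
    a owl:Restriction ;
    owl:onProperty <{onto}implements> ;
    owl:someValuesFrom <{onto}{cls}>
  ] .
}}
"""

def cq_to_query(cls: str, onto: str = "http://example.org/swo#") -> str:
    """Fill the (hypothetical) query template for one CQ instance."""
    return QUERY_TEMPLATE.format(onto=onto, cls=cls)

print(cq_to_query("C4.5"))
```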
Question Answering via Integer Programming over Semi-Structured Knowledge

Answering questions posed in natural language is a fundamental AI task, with a large number of impressive QA systems built over the years. Today's Internet search engines, for instance, can successfully retrieve factoid-style answers to many natural language queries by efficiently searching the Web. Information Retrieval (IR) systems work under the assumption that answers to many questions of interest are often explicitly stated somewhere (BIBREF0), and all one needs, in principle, is access to a sufficiently large corpus. Similarly, statistical correlation based methods, such as those using Pointwise Mutual Information or PMI (BIBREF1), work under the assumption that many questions can be answered by looking for words that tend to co-occur with the question words in a large corpus.

While both of these approaches help identify correct answers, they are not suitable for questions requiring reasoning, such as chaining together multiple facts in order to arrive at a conclusion. Arguably, such reasoning is a cornerstone of human intelligence, and is a key ability evaluated by standardized science exams given to students. For example, consider a question from the NY Regents 4th Grade Science Test. We would like a QA system that, even if the answer is not explicitly stated in a document, can combine basic scientific and geographic facts to answer the question, e.g., New York is in the northern hemisphere; the longest day occurs during the summer solstice; and the summer solstice in the northern hemisphere occurs in June; hence the answer is June. Figure 1 illustrates how our system approaches this, with the highlighted support graph representing its line of reasoning.

Further, we would like the system to be robust under simple perturbations, such as changing New York to New Zealand (in the southern hemisphere), or changing an incorrect answer option to an irrelevant word such as "last" that happens to have high co-occurrence with the question text.

To this end, we propose a structured reasoning system, called TableILP, that operates over a semi-structured knowledge base derived from text and answers questions by chaining multiple pieces of information and combining parallel evidence. The knowledge base consists of tables, each of which is a collection of instances of an n-ary relation defined over natural language phrases. E.g., as illustrated in Figure 1, a simple table with schema (country, hemisphere) might contain the instance (United States, Northern), while a ternary table with schema (hemisphere, orbital event, month) might contain (North, Summer Solstice, June). TableILP treats lexical constituents of the question Q, as well as cells of potentially relevant tables T, as nodes in a large graph $\mathcal{G}_{Q,T}$, and attempts to find a subgraph G of $\mathcal{G}_{Q,T}$ that best supports an answer option. The notion of best support is captured via a number of structural and semantic constraints and preferences, which are conveniently expressed in the Integer Linear Programming (ILP) formalism. We then use an off-the-shelf ILP optimization engine called SCIP (BIBREF3) to determine the best supported answer for Q.

Following a recently proposed AI challenge (BIBREF4), we evaluate TableILP on unseen elementary-school science questions from standardized tests. Specifically, we consider a challenge set (BIBREF2) consisting of all non-diagram multiple choice questions from 6 years of NY Regents 4th grade science exams. In contrast to a state-of-the-art structured inference method (BIBREF5) for this task, which used Markov Logic
Networks MLNs BIBREF6 TableILP achieves a significantly 14 absolute higher test score This suggests that a combination of a rich and finegrained constraint language namely ILP even with a publicly available solver is more effective in practice than various MLN formulations of the task Further while the scalability of the MLN formulations was limited to very few typically one or two selected science rules at a time our approach easily scales to hundreds of relevant scientific facts It also complements the kind of questions amenable to IR and PMI techniques as is evidenced by the fact that a combination trained using simple Logistic Regression BIBREF2 of TableILP with IR and PMI results in a significant 10 absolute boost in the score compared to IR alone Our ablation study suggests that combining facts from multiple tables or multiple rows within a table plays an important role in TableILPs performance We also show that TableILP benefits from the table structure by comparing it with an IR system using the same knowledge the table rows but expressed as simple sentences TableILP scores significantly 10 higher Finally we demonstrate that our approach is robust to a simple perturbation of incorrect answer options while the simple perturbation results in a relative drop of 20 and 33 in the performance of IR and PMI methods respectively it affects TableILPs performance by only 12 BIBREF2 aristo2016combining proposed an ensemble approach for the science QA task demonstrating the effectiveness of a combination of information retrieval statistical association rulebased reasoning and an ILP solver operating on semistructured knowledge Our ILP system extends their model with additional constraints and preferences eg semantic relation matching substantially improving QA performance A number of systems have been developed for answering factoid questions with short answers eg What is the capital of France using document collections or databases eg Freebase BIBREF7 NELL BIBREF8 for example BIBREF9 BIBREF10 BIBREF11 BIBREF12 BIBREF13 BIBREF14 BIBREF15 However many science questions have answers that are not explicitly stated in text and instead require combining information together Conversely while there are AI systems for formal scientific reasoning eg BIBREF16 BIBREF17 they require questions to be posed in logic or restricted English Our goal here is a system that operates between these two extremes able to combine information while still operating with natural language The task of Recognizing Textual Entailment RTE BIBREF18 BIBREF19 is also closely related as QA can be cast as entailment Does corpus entail questionanswer BIBREF20 However RTE has primarily focused on the task of linguistic equivalence and has not addressed questions where some form of scientific reasoning is required Recent work on Natural Logic BIBREF21 BIBREF22 has extended RTE to account for the logical structure within language Our work can be seen as going one step further to add a layer of structured reasoning on top of this in fact we use an RTE engine as a basic subroutine for comparing individual table cells in our ILP formulation ILP based discrete optimization has been successful in several NLP tasks BIBREF23 BIBREF24 BIBREF25 BIBREF26 BIBREF27 While our ILP formulation also operates on natural language text our focus is on the use of a specific semistructured table representation for QA BIBREF28 cohen2000joins studied tables with natural language text requiring soft matching with a focus on efficiently computing the top few 
candidates given a database query In contrast our system given a natural language question implicitly seeks to generate a query that produces the most supported answer We begin with our knowledge representation formalism followed by our treatment of QA as an optimal subgraph selection problem over such knowledge and then briefly describe our ILP model for subgraph selection We use semistructured knowledge represented in the form of n ary predicates over natural language text BIBREF2 Formally a k column table in the knowledge base is a predicate rx1 x2 ldots xk over strings where each string is a typically short natural language phrase The column headers capture the table schema akin to a relational database Each row in the table corresponds to an instance of this predicate For example a simple countryhemisphere table represents the binary predicate rtextctryhemsch with instances such as Australia Southern and Canada Northern Since table content is specified in natural language the same entity is often represented differently in different tables posing an additional inference challenge Although techniques for constructing this knowledge base are outside the scope of this paper we briefly mention them Tables were constructed using a mixture of manual and semiautomatic techniques First the table schemas were manually defined based on the syllabus study guides and training questions Tables were then populated both manually and semiautomatically using IKE BIBREF29 a tablebuilding tool that performs interactive bootstrapped relation extraction over a corpus of science text In addition to augment these tables with the broad knowledge present in study guides that doesnt always fit the manually defined table schemas we ran an Open IE BIBREF30 patternbased subjectverbobject SVO extractor from BIBREF31 clark2014akbc over several science texts to populate threecolumn Open IE tables Methods for further automating table construction are under development We treat question answering as the task of pairing the question with an answer such that this pair has the best support in the knowledge base measured in terms of the strength of a support graph defined as follows Given a multiple choice question Q and tables T we can define a labeled undirected graph mathcal GQT over nodes mathcal V and edges mathcal E as follows We first split Q into lexical constituents eg nonstopword tokens or chunks mathbf q lbrace qell rbrace and answer options mathbf a lbrace amrbrace For each table Ti we consider its cells mathbf t lbrace tijkrbrace as well as column headers T0 The nodes of T1 are then T2 For presentation purposes we will equate a graph node with the lexical entity it represents such as a table cell or a question constituent The undirected edges of T3 are T4 excluding edges both whose endpoints are within a single table Informally an edge denotes soft equality between a question or answer node and a table node or between two table nodes To account for lexical variability eg that tool and instrument are essentially equivalent and generalization eg that a dog is an animal we replace string equality with a phraselevel entailment or similarity function w mathcal E rightarrow 01 that labels each edge e in mathcal E with an associated score we We use entailment scores directional from mathbf q to mathbf t cup mathbf h and from mathbf t cup mathbf h to mathbf a and similarity scores symmetric between two nodes in mathbf t In the special case of column headers across two tables the score is manually set to either 0 or 1 
indicating whether this corresponds to a meaningful join Intuitively we would like the support graph for an answer option to be connected and to include nodes from the question the answer option and at least one table Since each table row represents a coherent piece of information but cells within a row do not have any edges in mathcal GQT the same holds also for cells and the corresponding column headers we use the notion of an augmented subgraph to capture the underlying table structure Let G VE be a subgraph of mathcal GQT The augmented subgraph G is formed by adding to G edges v1v2 such that v1 and v2 are in V and they correspond to either the same row possibly the header row of a table in T or to a cell and the corresponding column header Definition 1 A support graph G GQTam for a question Q tables T and an answer option am is a subgraph VE of mathcal GQT with the following basic properties V cap mathbf a lbrace amrbrace V cap mathbf q ne phi V cap mathbf t ne phi we 0 for all e in E if e in E cap mathbf t times mathbf t then there exists a corresponding eprime in E cap mathbf h times mathbf h involving the same columns and the augmented subgraph G is connected A support graph thus connects the question constituents to a unique answer option through table cells and optionally table headers corresponding to the aligned cells A given question and tables give rise to a large number of possible support graphs and the role of the inference process will be to choose the best one under a notion of desirable support graphs developed next We do this through a number of additional structural and semantic properties the more properties the support graph satisfies the more desirable it is We model the above support graph search for QA as an ILP optimization problem ie as maximizing a linear objective function over a finite set of variables subject to a set of linear inequality constraints A summary of the model is given below We note that the ILP objective and constraints arent tied to the particular domain of evaluation they represent general properties that capture what constitutes a well supported answer for a given question Table 1 summarizes the notation for various elements of the problem such as tijk for cell jk of table i All core variables in the ILP model are binary ie have domain lbrace 01rbrace For each element the model has a unary variable capturing whether this element is part of the support graph G ie it is active For instance row rij is active if at least one cell in row j of table i is in G The model also has pairwise alignment variables capturing edges of mathcal GQT The alignment variable for an edge jk0 in jk1 is associated with the corresponding weight jk2 and captures whether jk3 is included in jk4 To improve efficiency we create a pairwise variable for jk5 only if jk6 is larger than a certain threshold These unary and pairwise variables are then used to define various types of constraints and preferences as discussed next To make the definitions clearer we introduce all basic variables used in our optimization in Table 2 and will use them later to define constraints explicitly We use the notation xleftright to refer to a unary variable parameterized by a single element of the optimization and yleft right to refer to a pairwise variable parameterized by a pair of elements Unary variables represent the presence of a specific element as a node in the support graph G For example xleftTiright 1 if and only if the table Ti is active in G Similarly ylefttijk qell right 1 if and 
only if the corresponding edge is present in G which we alternatively refer to as an alignment between cell jk of table i and the yleft right0 th constituent of the question As previously mentioned in practice we do not create all possible pairwise variables Instead we choose the pairs alignment score we exceeds a preset threshold For example we create ylefttijk tiprime jprime kprime right only if wtijk tiprime jprime kprime ge textsc MinCellCellAlignment The objective function is a weighted linear sum over all variables instantiated for a given question answering problem A small set of auxiliary variables is defined for linearizing complicated constraints Constraints are a significant part of our model used for imposing the desired behavior on the support graph Due to lack of space we discuss only a representative subset here Some constraints relate variables to each other For example unary variables are defined through constraints that relate them to the corresponding pairwise variables For instance for active row variable xleftrijright we ensure that it is 1 if and only if at least one cell in row j is active
$$x(r_{ij}) \ge y(t_{ijk}, *) \qquad \forall\, t_{ijk} \in \mathcal{R}_{ij},\ \forall\, i, j, k$$
where mathcal Rij is collection of pairwise variables with one end in row j of table i In the remainder of this section we outline some of the important characteristics we expect in our model and provide details of a few illustrative constraints Consider the following question Which characteristic helps a fox find food A sense of smell B thick fur C long tail D pointed teeth In order to answer such lookupstyle questions we generally seek a row with the highest aggregate alignment to question constituents We achieve this by incorporating the questiontable alignment variables with the alignment scores we as coefficients and the active question constituents variable with a constant coefficient in the objective function Since any additional questiontable edge with a positive entailment score even to irrelevant tables in the support graph would result in an increase in the score we disallow tables with alignments only to the question or only to a choice and add a small penalty for every table used in order to reduce noise in the support graph We also limit the maximum number of alignments of a question constituent and table cells in order to prevent one constituent or cell from having a large influence on the objective function and thereby the solution
$$\sum_{q_\ell \in \mathcal{Q}_\ell} y(q_\ell, *) \le \textsc{MaxAlignmentsPerQCons} \qquad \forall\, \ell$$
where mathcal Ql is the set of all pairwise variables with one end in the question constituent ell For certain questions evidence needs to be combined from multiple rows of a table For example Sleet rain snow and hail are forms of A erosion B evaporation C groundwater D precipitation To answer this question we need to combine evidence from multiple table entries from the weather terms table term type namely sleet precipitation rain precipitation snow precipitation and hail precipitation To achieve this we allow multiple active rows in the support graph Similar to the basic constraints we limit the maximum number of active rows per table and add a penalty for every active row to ensure only relevant rows are considered for reasoning
$$\sum_{j} x(r_{ij}) \le \textsc{MaxRowsPerTable} \qquad \forall\, i$$
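To make the flavour of these constraints concrete, here is a small illustrative sketch in PuLP of the two displayed constraints over a single toy table. The paper's system uses SCIP with many more variables, weights, and constraints, so the sizes, scores, and penalty below are invented purely for illustration.

```python
# Illustrative PuLP sketch (the paper uses SCIP): an active-row variable x[r_ij]
# must be at least as large as any alignment y[t_ijk, *] touching that row, and
# each table may activate at most MaxRowsPerTable rows.
from pulp import LpProblem, LpVariable, LpMaximize, lpSum

ROWS, COLS = 3, 2
MAX_ROWS_PER_TABLE = 2
w = {(j, k): 0.5 for j in range(ROWS) for k in range(COLS)}   # toy alignment scores

prob = LpProblem("support_graph", LpMaximize)
x_row = {j: LpVariable(f"x_row_{j}", cat="Binary") for j in range(ROWS)}
y = {(j, k): LpVariable(f"y_{j}_{k}", cat="Binary")
     for j in range(ROWS) for k in range(COLS)}

# Objective: weighted sum of active alignments, minus a small penalty per active row
# so that rows are only activated when an alignment forces them to be.
prob += lpSum(w[j, k] * y[j, k] for j in range(ROWS) for k in range(COLS)) \
        - 0.1 * lpSum(x_row[j] for j in range(ROWS))

# x(r_ij) >= y(t_ijk, *): a row counts as active if any of its cells is aligned
for j in range(ROWS):
    for k in range(COLS):
        prob += x_row[j] >= y[j, k]

# sum_j x(r_ij) <= MaxRowsPerTable: cap on active rows per table
prob += lpSum(x_row[j] for j in range(ROWS)) <= MAX_ROWS_PER_TABLE

prob.solve()
print({v.name: v.value() for v in prob.variables()})
```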
To encourage only coherent parallel evidence within a single table we limit our support graph to always use the same columns across multiple rows within a table ie every active row has the active cells corresponding to the same set of columns Questions requiring chaining of evidence from multiple tables such as the example in Figure 1 are typically the most challenging in this domain Chaining can be viewed as performing a join between two tables We introduce alignments between cells across columns in pairs of tables to allow for chaining of evidence To help minimize potential noise introduced by chaining irrelevant facts we add a penalty for every intertable alignment and also rely on the 01 weights of headertoheader edges to ensure only semantically meaningful table joins are considered Our constraints so far have only looked at the content of the table cells or the structure of the support graph without explicitly considering the semantics of the table schema By using alignments between the question and column headers ie type information we exploit the table schema to prefer alignments to columns relevant to the topic of the question In particular for questions of the form which X ldots we prefer answers that directly entail X or are connected to cells that entail X However this is not sufficient for questions such as What is one way to change water from a liquid to a solid A decrease the temperature B increase the temperature C decrease the mass D increase the mass Even if we select the correct table say rtextchangeinitfinc i f that describes the initial and final states for a phase change event both choice A and choice B would have the exact same score in the presence of table rows increase temperature solid liquid and decrease temperature liquid solid The table however does have the initial vs final state structure To capture this semantic structure we annotate pairs of columns within certain tables with the semantic relationship present between them In this example we would annotate the phase change table with the relations changeFrom c i changeTo c f and fromTo i f Given such semantic relations for table schemas we can now impose a preference towards questiontable alignments that respect these relations We associate each semantic relation with a set of linguistic patterns describing how it might be expressed in natural language TableILP then uses these patterns to spot possible mentions of the relations in the question Q We then add the soft constraint that for every pair of active columns in a table with an annotated semantic relation aligned to a pair of question constituents there should be a valid expression of that relation in Q between those constituents In our example we would match the relation fromToliquid solid in the table to liquid to a solid in the question via the pattern X to a Y associated with fromToXY and thereby prefer aligning with the correct row decrease temperature liquid solid We compare our approach to three existing methods demonstrating that it outperforms the best previous structured approach BIBREF5 and produces a statistically significant improvement when used in combination with IRbased methods BIBREF2 For evaluations we use a 2core 25 GHz Amazon EC2 linux machine with 16 GB RAM Question Set We use the same question set as BIBREF2 aristo2016combining which consists of all nondiagram multiplechoice questions from 12 years of the NY Regents 4th Grade Science exams The set is split into 108 development questions and 129 hidden test questions based on the 
year they appeared in 6 years each All numbers reported below are for the hidden test set except for question perturbation experiments which relied on the 108 development questions Test scores are reported as percentages For each question a solver gets a score of 1 if it chooses the correct answer and 1k if it reports a k way tie that includes the correct answer On the 129 test questions a score difference of 9 or 7 is statistically significant at the 95 or 90 resp confidence interval based on the binomial exact test BIBREF33 Corpora We work with three knowledge corpora TableILP our approach Given a question Q we select the top 7 tables from the Table Corpus using the the standard TFIDF score of Q with tables treated as bagofwords documents For each selected table we choose the 20 rows that overlap with Q the most This filtering improves efficiency and reduces noise We then generate an ILP and solve it using the open source SCIP engine BIBREF3 returning the active answer option am from the optimal solution To check for ties we disable am resolve the ILP and compare the score of the secondbest answer if any with that of am MLN Solver structured inference baseline We consider the current stateoftheart structured reasoning method developed for this specific task by BIBREF5 aristo2015mln We compare against their best performing system namely Praline which uses Markov Logic Networks BIBREF6 to a align lexical elements of the question with probabilistic firstorder science rules and b to control inference We use the entire set of 47000 science rules from their original work which were also derived from same domaintargeted sources as the ones used in our Sentence Corpus IR Solver information retrieval baseline We use the IR baseline by BIBREF2 aristo2016combining which selects the answer option that has the best matching sentence in a corpus Specifically for each answer option ai the IR solver sends q ai as a query to a search engine we use Lucene on the Sentence Corpus and returns the search engines score for the top retrieved sentence s where s must have at least one nonstopword overlap with q and at least one with ai The option with the highest Lucene score is returned as the answer PMI Solver statistical cooccurrence baseline We use the PMIbased approach by BIBREF2 aristo2016combining which selects the answer option that most frequently cooccurs with the question words in a corpus Specifically it extracts unigrams bigrams trigrams and skipbigrams from the question and each answer option For a pair xy of n grams their pointwise mutual information PMI BIBREF1 in the corpus is defined as log fracpxypxpy where pxy is the cooccurrence frequency of x and y within some window in the corpus The solver returns the answer option that has the largest average PMI in the Web Corpus calculated over all pairs of question n grams and answer option n grams We first compare the accuracy of our approach against the previous structured MLNbased reasoning solver We also compare against IRtables an IR solver using table rows expressed as sentences thus embodying an unstructured approach operating on the same knowledge as TableILP As Table 3 shows among the two structured inference approaches TableILP outperforms the MLN baseline by 14 The preliminary ILP system reported by BIBREF2 aristo2016combining achieves only a score of 438 on this question set Further given the same semistructured knowledge ie the Table Corpus TableILP is substantially 10 better at exploiting the structure than the IRtables baseline which as 
mentioned above uses the same data expressed as sentences While their overall score is similar TableILP and IRbased methods clearly approach QA very differently To assess whether TableILP adds any new capabilities we considered the 50 out of 129 questions incorrectly answered by PMI solver ignoring tied scores On these unseen but arguably more difficult questions TableILP answered 27 questions correctly achieving a score of 54 compared to the random chance of 25 for 4way multiplechoice questions Results with IR solver were similar TableILP scored 2475 on the 52 questions incorrectly answered by IR ie 476 accuracy This analysis highlights the complementary strengths of these solvers Following BIBREF2 aristo2016combining we create an ensemble of TableILP IR and PMI solvers combining their answer predictions using a simple Logistic Regression model trained on the development set This model uses 4 features derived from each solvers score for each answer option and 11 features derived from TableILPs support graphs Table 4 shows the results with the final combination at 69 representing a significant improvement over individual solvers Table 5 summarizes various ILP and support graph statistics for TableILP averaged across all test questions The optimization model has around 50 highlevel constraints which result on average in around 4000 inequalities over 1000 variables Model creation which includes computing pairwise entailment scores using WordNet takes 19 seconds on average per question and the resulting ILP is solved by the SCIP engine in 21 seconds total for all four options using around 1300 LP iterations for each option Thus TableILP takes only 4 seconds to answer a question using multiple rows across multiple tables typically 140 rows in total as compared to 17 seconds needed by the MLN solver for reasoning with four rules one per answer option While the final support graph on this question set relies mostly on a single table to answer the question it generally combines information from more than two rows 23 on average for reasoning This suggests parallel evidence is more frequently used on this dataset than evidence chaining To quantify the importance of various components of our system we performed several ablation experiments summarized in Table 6 and described next No Multiple Row Inference We modify the ILP constraints to limit inference to a single row and hence a single table thereby disallowing parallel evidence and evidence chaining Section ILP Formulation This drops the performance by 105 highlighting the importance of being able to combine evidence from multiple rows which would correspond to multiple sentences in a corpus from one or more tables No Relation matching To assess the importance of considering the semantics of the table we remove the requirement of matching the semantic relation present between columns of a table with its lexicalization in the question Section ILP Formulation The 6 drop indicates TableILP relies strongly on the table semantics to ensure creating meaningful inferential chains No Open IE tables To evaluate the impact of relatively unstructured knowledge from a large corpus we removed the tables containing Open IE extractions Section Conclusion The 9 drop in the score shows that this knowledge is important and TableILP is able to exploit it even though it has a very simple triple structure This opens up the possibility of extending our approach to triples extracted from larger knowledge bases No Lexical Entailment Finally we test the effect of 
changing the alignment metric w Section Conclusion from WordNet based scores to a simple asymmetric wordoverlap measured as mathit scoreT H fracT cap HH Relying on just wordmatching results in an 11 drop which is consistent with our knowledge often being defined in terms of generalities One desirable property of QA systems is robustness to simple variations of a question especially when a variation would make the question arguably easier for humans To assess this we consider a simple automated way to perturb each 4way multiplechoice question 1 query Microsofts Bing search engine wwwbingcom with the question text and obtain the text snippet of the top 2000 hits 2 create a list of strings by chunking and tokenizing the results 3 remove stop words and special characters as well as any words or their lemma appearing in the question 4 sort the remaining strings based on their frequency and 5 replace the three incorrect answer options in the question with the most frequently occurring strings thereby generating a new question For instance In New York State the longest period of daylight occurs during which month A eastern B June C history D years As in this example the perturbations italicized are often not even of the correct type typically making them much easier for humans They however still remain difficult for solvers For each of the 108 development questions we generate 10 new perturbed questions using the 30 most frequently occurring words in step 5 above While this approach can introduce new answer options that should be considered correct as well only 3 of the questions in a random sample exhibited this behavior Table 7 shows the performance of various solvers on the resulting 1080 perturbed questions As one might expect the PMI approach suffers the most at a 33 relative drop TableILPs score drops as well since answer type matching isnt perfect but only by 12 attesting to its higher resilience to simple question variation Answering real science questions is a challenging task because they are posed in natural language require extensive domain knowledge and often require combining multiple facts together We presented TableILP a system that can answer such questions using a semistructured knowledge base We treat QA as a subgraph selection problem and then formulate this as an ILP optimization Most importantly this formulation allows multiple semiformally expressed facts to be combined to answer questions a capability outside the scope of IRbased QA systems In our experiments this approach significantly outperforms both the previous best attempt at structured reasoning for this task and an IR engine provided with the same knowledge It also significantly boosts performance when combined with unstructured methods IR and PMI These results suggest that the approach is both viable and promising for natural language question answering DK is in part supported by AI2 and Google The authors would like to thank Christos Christodoulopoulos Sujay Jauhar Sam Skjonsberg and the Aristo Team at AI2 for invaluable discussions and insights </s>
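For reference, the two simple lexical scores that appear in this evaluation, the asymmetric word-overlap measure score(T, H) = |T ∩ H| / |H| used in the ablation and the PMI statistic used by the PMI solver, can be sketched as follows. The whitespace tokenisation and the toy counts are illustrative and not the system's actual preprocessing.

```python
import math

def overlap_score(text: str, hypothesis: str) -> float:
    """Asymmetric word overlap |T ∩ H| / |H| (the ablation's fallback alignment metric)."""
    t, h = set(text.lower().split()), set(hypothesis.lower().split())
    return len(t & h) / len(h) if h else 0.0

def pmi(count_xy: int, count_x: int, count_y: int, total: int) -> float:
    """Pointwise mutual information log p(x,y) / (p(x) p(y)) from corpus counts."""
    p_xy = count_xy / total
    p_x, p_y = count_x / total, count_y / total
    return math.log(p_xy / (p_x * p_y))

print(overlap_score("decrease the temperature", "temperature decrease"))        # 1.0
print(round(pmi(count_xy=50, count_x=200, count_y=400, total=100_000), 3))       # toy counts
```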
Comparing morphological complexity of Spanish, Otomi and Nahuatl

Morphology deals with the internal structure of words (BIBREF0, BIBREF1). Languages of the world have different word production processes. Morphological richness varies from language to language, depending on their linguistic typology. In natural language processing (NLP), taking into account the morphological complexity inherent to each language could be important for improving or adapting the existing methods, since the amount of semantic and grammatical information encoded at the word level may vary significantly from language to language.

Conceptualizing and quantifying linguistic complexity is not an easy task; many quantitative and qualitative dimensions must be taken into account (BIBREF2). On one hand, we can try to answer what complexity in a language is and which mechanisms express it; on the other hand, we can try to find out if there is a language with more complex phenomena (phonological, morphological, syntactical) than others, and how we can measure it. Miestamo (2008) distinguishes between two types of complexity: the absolute, which defines complexity in terms of the number of parts of a system, and the relative, which is related to the cost and difficulty faced by language users. Some authors focus on the absolute approach since it is less subjective. Another common complexity distinction is between global and particular. Global complexity characterizes entire languages, e.g., as easy or difficult to learn (BIBREF2), while particular complexity refers only to a level of the whole language, for example, phonological complexity, morphological complexity, syntactical complexity.

We focus on morphological complexity. Many definitions of this term have been proposed (BIBREF3, BIBREF4, BIBREF5). From the computational linguistics perspective, there has been a special interest in corpus based approaches to quantify it, i.e., methods that estimate the morphological complexity of a language directly from the production of morphological instances over a corpus. This type of approach usually represents a relatively easy and reproducible way to quantify complexity without the strict need of linguistically annotated data. The underlying intuition of corpus based methods is that morphological complexity depends on the morphological system of a language, like its inflectional and derivational processes. A very productive system will produce a lot of different word forms. This morphological richness can be captured with several statistical measures, e.g., information theory measures (BIBREF6) or type-token relationships. For example, Bybee (2010, p. 9) affirms that the token frequency of certain items in constructions (i.e., words), as well as the range of types, determines representation of the construction as well as its productivity.

In this work we are interested in using corpus based approaches; however, we would like to quantify the complexity not only by the type and token distributions over a corpus, but also by taking into account another important dimension: the predictability of a morph sequence (BIBREF7). This is a preliminary work that takes as a case study the distant languages Otomi, Nahuatl and Spanish. The general idea is to use parallel corpora, the type-token relationship, and some NLP strategies for measuring the predictability in statistical language models. Additionally, most of the previous works do not analyze how the complexity changes when different types of morphological normalization procedures are applied to a language, e.g., lemmatization, stemming, morphological
segmentation This information could be useful for linguistic analysis and for measuring the impact of different word form normalization tools depending of the language In this work we analyze how the typetoken relationship changes using different types of morphological normalization techniques The typetoken relationship TTR is the relationship that exists between the number of distinct words types and the total word count tokens within a text This measure has been used for several purposes eg as an indicator of vocabulary richness and style of an author BIBREF8 BIBREF9 information flow of a text BIBREF10 and it has also been used in child language acquisition psychiatry and literary studies BIBREF11 BIBREF12 TTR has proven to be a simple yet effective way to quantify the morphological complexity of a language This is why it has been used to estimate morphological complexity using relatively small corpora BIBREF13 It has also shown a high correlation with other types of complexity measures like entropy and paradigmbased approaches that are based on typological information databases BIBREF14 It is important to notice that the value of TTR is affected by the type and length of the texts However one natural way to make TTRs comparable between languages is to use a parallel corpus since the same meaning and functions are more or less expressed in the two languages When TTR is measured over a parallel corpus it provides a useful way to compare typological and morphological characteristics of languages kelih2010type works with parallel texts of the Slavic language family to analyze morphological and typological features of the languages ie he uses TTR for comparing the morphological productivity and the degree of syntheticity and analycity between the languages Along the same line mayer2014extraction automatically extract typological features of the languages eg morphological synthesis degree by using TTR There exist several models that have been developed to examine the relationship between the types and tokens within a text BIBREF15 The most common one is the ratio fractypestokens and it is the one that we use in this work In NLP statistical language models are a useful tool for calculating the probability of any sequence of words in a language These models need a corpus as training data they are usually based on ngrams and more recently in neural representations of words Information theory based measures can be used to estimate the predictiveness of these models ie perplexity and entropy Perplexity is a common measure for the complexity of ngrams models in NLP BIBREF16 Perplexity is based in Shannons entropy BIBREF17 as the perplexity of a model mu is defined by the equation 2Hmu where Hmu es the entropy of the model or random variable Shannons entropy had been used for measuring complexity of different systems In linguistics entropy is commonly used to measure the complexity of morphological systems BIBREF6 BIBREF18 BIBREF19 Higher values of perplexity and entropy mean less predictability Perplexity depends on how the model is represented this includes the size of the data In this work we compare two different models for calculating the entropy and perplexity a typical bigram model adapted to a morph level BIBREF16 and our proposal based on using the word as a context instead of ngrams We rely in parallel corpora to compare the measures across languages since the same meaning and functions are shared in the two languages This model takes into consideration bigrams BIBREF16 as context for 
determining the joint probabilities of the substrings Here the bigrams are sequences of two morphs in the text whether they belong to the same word or not This is a typical statistical language model but instead of using sequences of words we use morphological segmented texts In addition we use a Laplacian or add one smoothing for the conditional probabilities BIBREF20 The word level representation takes the whole word as context for the determination of joint probabilities Therefore the frequency of cooccurrence is different from zero only if the subword units morphs are part of the same word For example if xby is a word with a prefix x and a suffix y the cooccurrence of x with b will be different from zero as both morphs are part of the word xby Similarly the cooccurrence of y with b will be different from zero Conversely if two morphs are substrings of different words its cooccurrence will be zero To calculate the conditional probabilities we use and add one estimator defined as pxy fracfrxy 1 frxy V Eq 5 Where V is the number of types and frcdot is the frequency of cooccurrence function We work with two language pairs that are spoken in the same country Mexico but they are typologically distant languages Spanish IndoEuropeanNahuatl UtoAztecan and SpanishOtomi OtoManguean Both Nahuatl and Otomi are lowresource languages that face scarcity of digital parallel and monolingual corpora Nahuatl is an indigenous language with agglutinative and polysynthethic morphological phenomena It can agglutinate many different prefixes and suffixes to build complex words Spanish also has rich morphology but it mainly uses suffixes and it can have a fusional behavior where morphemes can be fused or overlaid into a single one that encodes several grammatical meanings Regarding to Otomi its morphology also has a fusional tendency and it is headmarking Otomi morphology is usually considered quite complex BIBREF21 as it exhibits different phenomena like stem alternation inflectional class changes and suprasegmental variation just to mention some Since we are dealing with low resource languages that have a lot of dialectal and orthographic variation it is difficult to obtain a standard big parallel corpus We work with two different parallel corpora ie SpanishNahuatl and SpanishOtomi Therefore the complexity comparisons are always in reference to Spanish We used a SpanishNahuatl parallel corpus created by GUTIERREZVASQUES161068 However we used only a subset since the whole corpus is not homogeneous ie it comprises several Nahuatl dialects sources periods of time and it lacks of a general orthographic normalization We chose the texts that had a more or less systematic writing On the other hand we used a SpanishOtomi parallel corpus BIBREF22 conformed by 38 texts transcribed from speech This corpus was obtained in San Andrs Cuexcontitlan It is principally composed by narrative texts but also counts with dialogues and elicited data Table 1 shows the size of the parallel corpora used for the experiments We used different morphological analysis tools in order to explore the morphological complexity variation among languages and between the different types of morphological representations We performed lemmatization for Spanish language and morphological segmentation for all languages In NLP morphology is usually tackled by building morphological analysis taggers tools And more commonly lemmatization and stemming methods are used to reduce the morphological variation by converting words forms to a standard form ie a 
lemma or a stem However most of these technologies are focused in a reduced set of languages For languages like English with plenty of resources and relatively poor morphology morphological processing may be considered solved However this is not the case for all the languages Specially for languages with rich morphological phenomena where it is not enough to remove inflectional endings in order to obtain a stem Lemmatization and stemming aim to remove inflectional endings Spanish has available tools to perform this task We used the tool Freeling Regarding to morphological segmentation we used semisupervised statistical segmentation models obtained with the tool Morfessor BIBREF23 In particular we used the same segmentation models reported in ximena2017bilingual for Spanish and Nahuatl As for Otomi we used manual morphological segmentation of the corpus provided by a specialist We calculated the typetoken relationship for every language in each parallel corpus Table 2 shows the TTR of the texts without any processing ES NA and with the different types of morphological processing morphological segmentation ESmorph NAmorph lemmatization ESlemma In a similar way Table 3 shows the TTR values for the SpanishOtomi corpus It is worth mentioning that the TTR values are only comparable within the same parallel corpus We also calculate the perplexity and complexity for the different languages Since we are focusing on morphological complexity we took only the segmented data for computing the entropy and the perplexity We do not use the lemmatized or non segmented data since this would be equivalent to measuring the combinatorial complexity between words ie syntax In this sense the entropy and perplexity reflects the predictability of the morphs sequences Tables 4 and 5 shows the perplexity and entropy in each language pair When no morphological processing is applied Nahuatl has a lot higher TTR value than Spanish ie a greater proportion of different word forms types In spite of Nahuatl having fewer tokens because of its agglutinative nature it has a lot more types than Spanish This suggests that Nahuatl has a highly productive system that can generate a great number of different morphological forms In other words it is more likely to find a repeated word in Spanish than in a Nahuatl corpus In the case of OtomiSpanish Otomi also has a bigger complexity compared to Spanish in terms of TTR Even though both Otomi and Spanish show fusional patterns in its inflection Otomi also count with a lot of derivational processes and shows regular stem alternations In every case morphological segmentation induced the smallest values of TTR for all languages Suggesting that greater reduction of the morphological complexity is achieved when the words are split into morphs making it more likely to find a repeated item For instance when Nahuatl was morphologically segmented TTR had a dramatic decrease from 2622 to 123 This TTR reduction could be the result of eliminating the combinatorial variety of the agglutinative and polysynthetical morphology of the language Therefore when we segment the text we break this agglutination leading to significantly less diverse units In the case of Otomi language a similar trend can be observed Otomi seems to be morphologically more complex than Spanish in terms of TTR ie more diverse types or word forms When morphological segmentation is applied TTR decreases and Otomi language has a lower TTR compared to Spanish Even though Otomi is not a polysynthetic language like Nahuatl these 
Even though Otomi is not a polysynthetic language like Nahuatl these results suggest that Otomi also has a great combinatory potential of its morphs ie when Otomi gets morphologically segmented we obtain less diverse types these morphs may be recurrent in the text but they can be combined in many ways within the Otomi word structure Linguistic studies have shown that the Otomi language can concatenate several affixes especially in derivational processes BIBREF22 It is worth noting that Spanish has a higher TTR than Nahuatl and Otomi only when the languages are morphologically segmented It seems that the morph inventory is bigger in Spanish and we conjecture this is related to the fact that Spanish has more suppletion or irregular form phenomena BIBREF24 The predictability of the internal structure of a word is another dimension of complexity It reflects the difficulty of producing novel words given a set of lexical items stems suffixes or morphs First of all as a general overview we can see that word level models have lower perplexity and entropy Tables 4 and 5 We believe that this type of model better captures the morphological structure since it takes into account the possible combinations of morphs within a word and not combinations outside the bounds of the word as the bigram model does It is interesting to compare the TTR and the predictability measures for each language In the case of Nahuatl TTR shows that there is a lot of complexity at the lexical level many different word forms and few repetitions however this contrasts with the predictability of the elements that make up a lexical item the combination of morphs within a Nahuatl word is more predictable than in Spanish since it obtains lower values of perplexity and entropy The combinatorial structure of Nahuatl morphology shows less uncertainty than that of Spanish despite the fact that Nahuatl is capable of producing many more different types in the corpus due to its agglutinative and polysynthetic nature The case of the Otomi language is different since it seems that it is not only complex in terms of TTR but also in terms of predictability It obtains higher entropy and perplexity than Spanish We conjecture this is related to several phenomena For instance Otomi and Nahuatl allow a large number of morph combinations to modify a stem inflectional and derivational However Otomi shows phenomena that are not easy to predict for example it has a complex system of inflectional classes stem alternations and prefix changes Moreover tones and prosody play an important role in the morphology of Otomi verbs BIBREF25 BIBREF26 Also we mentioned before that many of the affix concatenations in Otomi take place in derivational processes Derivation tends to be less predictable than inflection since derivational phenomena are less frequent and less regular and this could be an additional reason why the entropy values of this language are high In this work we used corpus-based measures like TTR entropy and perplexity for exploring the morphological complexity of three languages using two small parallel corpora We use TTR as a measure of morphological productivity of a language and we use the entropy and perplexity calculated over a sequence of morphs as a measure of predictability There may be a common belief that polysynthetic languages are far more complex than analytic ones However it is important to take into account the many factors that play a role in the complexity of the system We stressed that morphological complexity has several dimensions that must be taken into account BIBREF3 While some agglutinative polysynthetic languages like
Nahuatl could be considered complex by the number of morphemes the combinations and the information that can be encoded in a single word the sequence of these elements may be more predictable than in fusional languages like Spanish Languages like Otomi showed high complexity in the two dimensions that we focused on in this work this is consistent with qualitative perspectives BIBREF26 These two dimensions of complexity are valid and complementary Measures like TTR reflect the amount of information that words can encode in a language languages that have a high TTR have the potential of encoding a lot of functions at the word level therefore they produce many different word forms Perplexity and entropy measured over a sequence of morphs reflect the predictability or degree of uncertainty of these combinations The higher the entropy hence the perplexity the higher the uncertainty in the combinations of morphs This was a preliminary work Deeper linguistic analysis more corpora and more languages are needed However we believe that quantitative measures extracted from parallel corpora can complement and deepen the study of linguistic complexity Efforts are currently being made BIBREF27 However more studies are needed especially for low-resource languages Languages of the world have a wide range of functions that can be codified at the word level Therefore it would be interesting to consider the study of more complexity dimensions in our work Popular quantitative approaches are successful in reflecting how many morphs can be combined into a single word However it is also important to take into account how complex the format of a word can be ie not only how many elements can be combined but also what type of elements For example dahl2009testing argues that when a phoneme is added to a word this process is not as complex as adding a tone Another interesting dimension is the complexity of the morphology in terms of acquisition by native and L2 speakers miestamo2008grammatical points out that this type of complexity should be assessed on the basis of psycholinguistic analysis in both processing and acquisition Finally one important factor that influences language complexity is culture In many languages pragmatic nuances are produced via morphological processes For instance languages like Nahuatl have a complex honorific or reverential system that is expressed using different types of affixes Spanish expresses this type of phenomenon with morphosyntactic processes It is a challenging task to be able to quantify all these factors that play a role in the complexity of a language This work was supported by the Mexican Council of Science and Technology CONACYT fund 2016012225 and CB2016408885 We also thank the reviewers for their valuable comments and our friend Morris P Martinez for his unconditional support </s>
<s>[INST] Unsupervised, Knowledge-Free, and Interpretable Word Sense Disambiguation [/INST] The notion of word sense is central to computational lexical semantics Word senses can be either encoded manually in lexical resources or induced automatically from text The former knowledgebased sense representations such as those found in the BabelNet lexical semantic network BIBREF0 are easily interpretable by humans due to the presence of definitions usage examples taxonomic relations related words and images The cost of such interpretability is that every element mentioned above is encoded manually in one of the underlying resources such as Wikipedia Unsupervised knowledgefree approaches eg BIBREF1 BIBREF2 require no manual labor but the resulting sense representations lack the abovementioned features enabling interpretability For instance systems based on sense embeddings are based on dense uninterpretable vectors Therefore the meaning of a sense can be interpreted only on the basis of a list of related senses We present a system that brings interpretability of the knowledgebased sense representations into the world of unsupervised knowledgefree WSD models The contribution of this paper is the first system for word sense induction and disambiguation which is unsupervised knowledgefree and interpretable at the same time The system is based on the WSD approach of Panchenko17 and is designed to reach interpretability level of knowledgebased systems such as Babelfy BIBREF3 within an unsupervised knowledgefree framework Implementation of the system is open source A live demo featuring several disambiguation models is available online In this section we list prominent WSD systems with openly available implementations This section describes 1 how WSD models are learned in an unsupervised way from text and 2 how the system uses these models to enable human interpretable disambiguation in context Figure 1 presents architecture of the WSD system As one may observe no human labor is used to learn interpretable sense representations and the corresponding disambiguation models Instead these are induced from the input text corpus using the JoBimText approach BIBREF8 implemented using the Apache Spark framework enabling seamless processing of large text collections Induction of a WSD model consists of several steps First a graph of semantically related words ie a distributional thesaurus is extracted Second word senses are induced by clustering of an egonetwork of related words BIBREF9 Each discovered word sense is represented as a cluster of words Next the induced sense inventory is used as a pivot to generate sense representations by aggregation of the context clues of cluster words To improve interpretability of the sense clusters they are labeled with hypernyms which are in turn extracted from the input corpus using Hearst92 patterns Finally the obtained WSD model is used to retrieve a list of sentences that characterize each sense Sentences that mention a given word are disambiguated and then ranked by prediction confidence Top sentences are used as sense usage examples For more details about the model induction process refer to BIBREF10 Currently the following WSD models induced from a text corpus are available Word senses based on cluster word features This model uses the cluster words from the induced word sense inventory as sparse features that represent the sense Word senses based on context word features This representation is based on a sum of word vectors of all cluster words in the induced 
sense inventory weighted by distributional similarity scores Super senses based on cluster word features To build this model induced word senses are first globally clustered using the Chinese Whispers graph clustering algorithm BIBREF9 The edges in this sense graph are established by disambiguation of the related words BIBREF11 BIBREF12 The resulting clusters represent semantic classes grouping words sharing a common hypernym eg animal This set of semantic classes is used as an automatically learned inventory of super senses There is only one global sense inventory shared among all words in contrast to the two previous traditional per word models Each semantic class is labeled with hypernyms This model uses words belonging to the semantic class as features Super senses based on context word features This model relies on the same semantic classes as the previous one but instead sense representations are obtained by averaging vectors of words sharing the same class To enable fast access to the sense inventories and effective parallel predictions the WSD models obtained at the previous step were indexed in a relational database In particular each word sense is represented by its hypernyms related words and usage examples In addition for each sense the database stores an aggregated context word representation in the form of a serialized object containing a sparse vector in the Breeze format During the disambiguation phase the input context is represented in the same sparse feature space and the classification is reduced to the computation of the cosine similarity between the context vector and the vectors of the candidate senses retrieved from the database This backend is implemented as a RESTful API using the Play framework
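The disambiguation step just described reduces to a cosine similarity between a sparse context vector and the stored sense vectors. Below is a minimal sketch of that step; the sparse dictionary representation, the feature weights, and the sense labels are simplifying assumptions rather than the system's actual data structures.

```python
from collections import Counter
import math

def cosine(a, b):
    # cosine similarity between two sparse vectors given as dicts {feature: weight}
    dot = sum(w * b.get(f, 0.0) for f, w in a.items())
    norm_a = math.sqrt(sum(w * w for w in a.values()))
    norm_b = math.sqrt(sum(w * w for w in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def disambiguate(context_tokens, sense_vectors):
    """Rank candidate senses of a target word by cosine similarity between the
    bag-of-words context vector and each candidate sense representation."""
    context_vec = Counter(context_tokens)
    ranked = [(sense, cosine(context_vec, vec)) for sense, vec in sense_vectors.items()]
    return sorted(ranked, key=lambda x: x[1], reverse=True)

# toy usage with two hypothetical induced senses of "jaguar"
senses = {
    "jaguar (animal)": {"cat": 2.0, "predator": 1.5, "jungle": 1.0},
    "jaguar (car)": {"car": 2.0, "engine": 1.5, "luxury": 1.0},
}
print(disambiguate("a large spotted predator of the jungle".split(), senses))
```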
The graphical user interface of our system is implemented as a single page Web application using the React framework The application performs disambiguation of a text entered by a user In particular the Web application features two modes Single word disambiguation mode is illustrated in Figure 2 In this mode a user specifies an ambiguous word and its context The output of the system is a ranked list of all word senses of the ambiguous word ordered by relevance to the input context By default only the best matching sense is displayed The user can quickly understand the meaning of each induced sense by looking at the hypernym and the image representing the sense Faralli12 showed that Web search engines can be used to acquire information about word senses We assign an image to each word in the cluster by querying an image search API using a query composed of the ambiguous word and its hypernym eg jaguar animal The first hit of this query is selected to represent the induced word sense Interpretability of each sense is further ensured by providing the user with the list of related senses the list of the most salient context clues and the sense usage examples cf Figure 2 Note that all these elements are obtained without manual intervention Finally the system provides the reasons behind the sense predictions by displaying the context words that triggered the prediction Each common feature is clickable so a user is able to trace back sense cluster words containing this context feature All words disambiguation mode is illustrated in Figure 3 In this mode the system performs disambiguation of all nouns and entities in the input text First the text is processed with part-of-speech and named entity taggers Next each detected noun or entity is disambiguated in the same way as in the single word disambiguation mode described above yet the disambiguation results are represented as annotations of a running text The best matching sense is represented by a hypernym and an image as depicted in Figure 3 This mode performs semantification of a text which can for instance assist language learners with the understanding of a text in a foreign language The meaning of words unknown to the learner can be deduced from hypernyms and images In our prior work BIBREF10 we performed a thorough evaluation of the method implemented in our system on two datasets showing the stateoftheart performance of the approach as compared to other unsupervised knowledgefree methods for WSD including participants of the SemEval 2013 Task 13 BIBREF13 and two unsupervised knowledgefree WSD systems based on word sense embeddings BIBREF2 BIBREF7 These evaluations were based on the lexical sample setting where the system is expected to predict a sense identifier of the ambiguous word In this section we perform an extra evaluation that assesses how well hypernyms of ambiguous words are assigned in context by our system Namely the task is to assign a correct hypernym of an ambiguous word eg animal for the word Jaguar in the context Jaguar is a large spotted predator of tropical America This task does not depend on a fixed sense inventory and evaluates at the same time WSD performance and the quality of the hypernymy labels of the induced senses In this experiment we gathered a dataset consisting of definitions of BabelNet 3.7 senses of 1219 frequent nouns In total we collected 56003 sense definitions each labeled with gold hypernyms coming from the IsA relations of BabelNet The average polysemy of words in the gathered dataset was 15.50 senses per word as compared to 2.34 in the induced sense inventory This huge discrepancy in granularities led to the fact that some test sentences cannot be correctly predicted by definition some mostly rare BabelNet senses simply have no corresponding sense in the induced inventory To eliminate the influence of this idiosyncrasy we kept only sentences that contain at least one common hypernym with all hypernyms of all induced senses The statistics of the resulting dataset are presented in Table 1 it is available in the project repository WSD performance is measured using the accuracy with respect to the sentences labeled with the direct hypernyms Hypers or an extended set of hypernyms including hypernyms of hypernyms HyperHypers A correct match occurs when the predicted sense has at least one common hypernym with the gold hypernyms of the target word in a test sentence All evaluated models outperform both random and most frequent sense baselines see Table 2 The latter picks the sense that corresponds to the largest sense cluster BIBREF10 In the case of the traditional per word inventories the model based on the context features outperforms the models based on cluster words While sense representations based on the clusters of semantically related words contain highly accurate features such representations are sparse as one sense contains at most 200 features As a result the model based on the cluster words often contains no common features with the features extracted from the input context The sense representations based on the aggregated context clues are much less sparse which explains their superior performance In the case of the super sense inventory the model based solely on the cluster words yielded better results than the context-based model Note here that 1 the clusters that represent
super senses are substantially larger than word sense clusters and thus less sparse 2 words in the super sense clusters are unweighted in contrast to word sense cluster thus averaging of word vectors is more noiseprone Besides the performance scores of the models based on the super sense inventories are substantially lower compared to their counterparts based on the traditional per word inventories Super sense models are able to perform classification for any unknown word missing in the training corpus but their disambiguation task is more complex the models need to choose one of 712 classes as compared to an average of 23 classes for the per word inventories This is illustrated by the nearzero scores of the random and the MFS baselines for this model We present the first openly available word sense disambiguation system that is unsupervised knowledgefree and interpretable at the same time The system performs extraction of word and super sense inventories from a text corpus The disambiguation models are learned in an unsupervised way for all words in the corpus on the basis on the induced inventories The user interface of the system provides efficient access to the produced WSD models via a RESTful API or via an interactive Webbased graphical user interface The system is available online and can be directly used from external applications The code and the WSD models are open source Besides inhouse deployments of the system are made easy due to the use of the Docker containers A prominent direction for future work is supporting more languages and establishing crosslingual sense links We acknowledge the support of the DFG under the JOINT project the RFBR under project no 163700354 mola Amazon via the AWS Research Grants and Microsoft via the Azure for Research programs Finally we also thank four anonymous reviewers for their helpful comments </s>
<s>[INST] Summary Level Training of Sentence Rewriting for Abstractive Summarization [/INST] The task of automatic text summarization aims to compress a textual document to a shorter highlight while keeping salient information of the original text In general there are two ways to do text summarization Extractive and Abstractive BIBREF0 Extractive approaches generate summaries by selecting salient sentences or phrases from a source text while abstractive approaches involve a process of paraphrasing or generating sentences to write a summary Recent work BIBREF1 BIBREF2 demonstrates that it is highly beneficial for extractive summarization models to incorporate pretrained language models LMs such as BERT BIBREF3 into their architectures However the performance improvement from the pretrained LMs is known to be relatively small in case of abstractive summarization BIBREF4 BIBREF5 This discrepancy may be due to the difference between extractive and abstractive approaches in ways of dealing with the taskthe former classifies whether each sentence to be included in a summary while the latter generates a whole summary from scratch In other words as most of the pretrained LMs are designed to be of help to the tasks which can be categorized as classification including extractive summarization they are not guaranteed to be advantageous to abstractive summarization models that should be capable of generating language BIBREF6 BIBREF7 On the other hand recent studies for abstractive summarization BIBREF8 BIBREF9 BIBREF10 have attempted to exploit extractive models Among these a notable one is BIBREF8 in which a sophisticated model called ReinforceSelected Sentence Rewriting is proposed The model consists of both an extractor and abstractor where the extractor picks out salient sentences first from a source article and then the abstractor rewrites and compresses the extracted sentences into a complete summary It is further finetuned by training the extractor with the rewards derived from sentencelevel ROUGE scores of the summary generated from the abstractor In this paper we improve the model of BIBREF8 addressing two primary issues Firstly we argue there is a bottleneck in the existing extractor on the basis of the observation that its performance as an independent summarization model ie without the abstractor is no better than solid baselines such as selecting the first 3 sentences To resolve the problem we present a novel neural extractor exploiting the pretrained LMs BERT in this work which are expected to perform better according to the recent studies BIBREF1 BIBREF2 Since the extractor is a sort of sentence classifier we expect that it can make good use of the ability of pretrained LMs which is proven to be effective in classification Secondly the other point is that there is a mismatch between the training objective and evaluation metric the previous work utilizes the sentencelevel ROUGE scores as a reinforcement learning objective while the final performance of a summarization model is evaluated by the summarylevel ROUGE scores Moreover as BIBREF11 pointed out sentences with the highest individual ROUGE scores do not necessarily lead to an optimal summary since they may contain overlapping contents causing verbose and redundant summaries Therefore we propose to directly use the summarylevel ROUGE scores as an objective instead of the sentencelevel scores A potential problem arising from this apprsoach is the sparsity of training signals because the summarylevel ROUGE scores are calculated only 
once for each training episode To alleviate this problem we use reward shaping BIBREF12 to give an intermediate signal for each action preserving the optimal policy We empirically demonstrate the superiority of our approach by achieving new stateoftheart abstractive summarization results on CNNDaily Mail and New York Times datasets BIBREF13 BIBREF14 It is worth noting that our approach shows large improvements especially on ROUGEL score which is considered a means of assessing fluency BIBREF11 In addition our model performs much better than previous work when testing on DUC2002 dataset showing better generalization and robustness of our model Our contributions in this work are threefold a novel successful application of pretrained transformers for abstractive summarization suggesting a training method to globally optimize sentence selection achieving the stateoftheart results on the benchmark datasets CNNDaily Mail and New York Times In this paper we focus on singledocument multisentence summarization and propose a neural abstractive model based on the Sentence Rewriting framework BIBREF8 BIBREF15 which consists of two parts a neural network for the extractor and another network for the abstractor The extractor network is designed to extract salient sentences from a source article The abstractor network rewrites the extracted sentences into a short summary The most common way to train extractor to select informative sentences is building extractive oracles as gold targets and training with crossentropy CE loss An oracle consists of a set of sentences with the highest possible ROUGE scores Building oracles is finding an optimal combination of sentences where there are 2n possible combinations for each example Because of this the exact optimization for ROUGE scores is intractable Therefore alternative methods identify the set of sentences with greedy search BIBREF16 sentencelevel search BIBREF9 BIBREF17 or collective search using the limited number of sentences BIBREF15 which construct suboptimal oracles Even if all the optimal oracles are found training with CE loss using these labels will cause underfitting as it will only maximize probabilities for sentences in label sets and ignore all other sentences Alternatively reinforcement learning RL can give room for exploration in the search space BIBREF8 our baseline work proposed to apply policy gradient methods to train an extractor This approach makes an endtoend trainable stochastic computation graph encouraging the model to select sentences with high ROUGE scores However they define a reward for an action sentence selection as a sentencelevel ROUGE score between the chosen sentence and a sentence in the ground truth summary for that time step This leads the extractor agent to a suboptimal policy the set of sentences matching individually with each sentence in a ground truth summary isnt necessarily optimal in terms of summarylevel ROUGE score BIBREF11 proposed policy gradient with rewards from summarylevel ROUGE They defined an action as sampling a summary from candidate summaries that contain the limited number of plausible sentences After training a sentence is ranked high for selection if it often occurs in high scoring summaries However their approach still has a risk of ranking redundant sentences high if two highly overlapped sentences have salient information they would be ranked high together increasing the probability of being sampled in one summary To tackle this problem we propose a training method using reinforcement learning 
which globally optimizes summarylevel ROUGE score and gives intermediate rewards to ease the learning Transferring representations from pretrained transformer language models has been highly successful in the domain of natural language understanding tasks BIBREF18 BIBREF3 BIBREF19 BIBREF20 These methods first pretrain highly stacked transformer blocks BIBREF21 on a huge unlabeled corpus and then finetune the models or representations on downstream tasks Our model consists of two neural network modules ie an extractor and abstractor The extractor encodes a source document and chooses sentences from the document and then the abstractor paraphrases the summary candidates Formally a single document consists of $n$ sentences $D=\lbrace s_1,s_2,\cdots ,s_n\rbrace $ We denote the $i$th sentence as $s_i=\lbrace w_{i1},w_{i2},\cdots ,w_{im}\rbrace $ where $w_{ij}$ is the $j$th word in $s_i$ The extractor learns to pick out a subset of $D$ denoted as $\hat{D}=\lbrace \hat{s}_1,\hat{s}_2,\cdots ,\hat{s}_k\,|\,\hat{s}_i\in D\rbrace $ where $k$ sentences are selected The abstractor rewrites each of the selected sentences to form a summary $S=\lbrace f(\hat{s}_1),f(\hat{s}_2),\cdots ,f(\hat{s}_k)\rbrace $ where $f$ is an abstracting function And a gold summary consists of $l$ sentences $A=\lbrace a_1,a_2,\cdots ,a_l\rbrace $ The extractor is based on the encoderdecoder framework We adapt BERT for the encoder to exploit contextualized representations from pretrained transformers BERT as the encoder maps the input sequence $D$ to sentence representation vectors $H=\lbrace h_1,h_2,\cdots ,h_n\rbrace $ where $h_i$ is for the $i$th sentence in the document Then the decoder utilizes $H$ to extract $\hat{D}$ from $D$ Although we require the encoder to output the representation for each sentence the output vectors from BERT are grounded to tokens instead of sentences Therefore we modify the input sequence and embeddings of BERT as BIBREF1 did In the original BERT configuration a [CLS] token is used to get features from one sentence or a pair of sentences Since we need a symbol for each sentence representation we insert the [CLS] token before each sentence and we add a [SEP] token at the end of each sentence which is used to differentiate multiple sentences As a result the vector for the $i$th [CLS] symbol from the top BERT layer corresponds to the $i$th sentence representation $h_i$ In addition we add interval segment embeddings as input for BERT to distinguish multiple sentences within a document For $s_i$ we assign a segment embedding $E_A$ or $E_B$ conditioned on whether $i$ is odd or even For example for a consecutive sequence of sentences $s_1, s_2, s_3, s_4, s_5$ we assign $E_A, E_B, E_A, E_B, E_A$ in order All the words in each sentence are assigned to the same segment embedding ie the segment embeddings for $w_{11}, w_{12},\cdots ,w_{1m}$ are $E_A, E_A,\cdots ,E_A$ An illustration for this procedure is shown in Figure FIGREF1 We use an LSTM Pointer Network BIBREF22 as the decoder to select the extracted sentences based on the above sentence representations The decoder extracts sentences recurrently producing a distribution over all of the remaining sentence representations excluding those already selected Since we use the sequential model which selects one sentence at a time step our decoder can consider the previously selected sentences This property is needed to avoid selecting sentences that have overlapping information with the sentences extracted already As the decoder structure is almost the same as in the previous work we follow the equations of BIBREF8 to avoid confusion with minor modifications to agree with our notations Formally the extraction probability is calculated as where $e_t$ is the output of the glimpse operation In Equation DISPLAYFORM9 $z_t$ is the hidden state of the LSTM decoder at time $t$ shown in green in Figure FIGREF1 All the $W$ and $v$ are trainable parameters
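The input modification for BERT described above (a [CLS] token per sentence, [SEP] separators, and alternating interval segment embeddings) can be sketched as follows. Token-to-id conversion and the real BERT vocabulary handling are omitted, and the helper is an illustrative assumption rather than the authors' code.

```python
def build_extractor_input(sentences):
    """Insert [CLS] before and [SEP] after each sentence and alternate the
    segment id per sentence (0 standing for E_A, 1 for E_B), as in the paper's
    example where s1..s5 receive E_A, E_B, E_A, E_B, E_A."""
    tokens, segment_ids, cls_positions = [], [], []
    for i, sentence in enumerate(sentences):
        segment = i % 2          # 0, 1, 0, 1, ... i.e. E_A, E_B, E_A, ...
        cls_positions.append(len(tokens))
        for token in ["[CLS]"] + sentence.split() + ["[SEP]"]:
            tokens.append(token)
            segment_ids.append(segment)
    # the top-layer hidden state at each [CLS] position is taken as h_i
    return tokens, segment_ids, cls_positions

tokens, segments, cls_positions = build_extractor_input(
    ["the cat sat on the mat", "it was very happy"])
print(cls_positions)                 # [0, 8]
print(list(zip(tokens, segments)))
```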
The abstractor network approximates $f$ which compresses and paraphrases an extracted document sentence to a concise summary sentence We use the standard attention based sequencetosequence seq2seq model BIBREF23 BIBREF24 with the copying mechanism BIBREF25 for handling outofvocabulary OOV words Our abstractor is practically identical to the one proposed in BIBREF8 In our model an extractor selects a series of sentences and then an abstractor paraphrases them As they work in different ways we need different training strategies suitable for each of them Training the abstractor is relatively obvious maximizing loglikelihood for the next word given the previous ground truth words However there are several issues for extractor training First the extractor should consider the abstractor's rewriting process when it selects sentences This causes a weak supervision problem BIBREF26 since the extractor gets training signals indirectly after paraphrasing processes are finished In addition since this procedure contains sampling or maximum selection the extractor performs a non-differentiable extraction Lastly although our goal is maximizing ROUGE scores neural models cannot be trained directly by maximum likelihood estimation from them To address those issues above we apply standard policy gradient methods and we propose a novel training procedure for the extractor which guides it to the optimal policy in terms of the summarylevel ROUGE As usual in RL for sequence prediction we pretrain submodules and apply RL to finetune the extractor Starting from a poor random policy makes it difficult to train the extractor agent to converge towards the optimal policy Thus we pretrain the network using cross entropy CE loss like previous work BIBREF27 BIBREF8 However there is no gold label for extractive summarization in most of the summarization datasets Hence we employ a greedy approach BIBREF16 to make the extractive oracles where we add one sentence at a time incrementally to the summary such that the ROUGE score of the current set of selected sentences is maximized for the entire ground truth summary This does not guarantee optimality but it is enough to teach the network to select plausible sentences
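A minimal sketch of the greedy oracle construction just described is given below. The `rouge` argument stands for any summary-level ROUGE scorer, and the sentence cap and stopping rule are assumptions made for the sketch, not details taken from the paper.

```python
def greedy_extractive_oracle(doc_sentences, gold_summary, rouge, max_sents=5):
    """Greedily add the document sentence that most increases the summary-level
    ROUGE of the selected set against the whole ground-truth summary, stopping
    when no remaining sentence improves the score."""
    selected, best_score = [], 0.0
    remaining = list(range(len(doc_sentences)))
    while remaining and len(selected) < max_sents:
        scored = []
        for i in remaining:
            candidate = " ".join(doc_sentences[j] for j in selected + [i])
            scored.append((rouge(candidate, gold_summary), i))
        score, best_i = max(scored)
        if score <= best_score:   # no improvement, stop adding sentences
            break
        best_score = score
        selected.append(best_i)
        remaining.remove(best_i)
    return selected               # indices of oracle sentences, in selection order
```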
Formally the network is trained to minimize the crossentropy loss as follows where $s_t$ is the $t$th generated oracle sentence For the abstractor training we should create training pairs for input and target sentences As the abstractor paraphrases on the sentence level we take a sentence-level search for each groundtruth summary sentence We find the most similar document sentence $s^{\prime }_t$ by And then the abstractor is trained as a usual sequencetosequence model to minimize the crossentropy loss where $w^a_j$ is the $j$th word of the target sentence $a_t$ and $\Phi $ is the encoded representation for $s^{\prime }_t$ To optimize the ROUGE metric directly we treat the extractor as an agent in the reinforcement learning paradigm BIBREF28 We view the extractor as a stochastic policy that generates actions sentence selection and receives the score of the final evaluation metric summarylevel ROUGE in our case as the return While we are ultimately interested in the maximization of the score of a complete summary simply awarding this score at the last step provides a very sparse training signal For this reason we define intermediate rewards using reward shaping BIBREF12 which is inspired by BIBREF27's attempt for sequence prediction Namely we compute summarylevel score values for all intermediate summaries The reward for each step $r_t$ is the difference between the consecutive pairs of scores This measures the amount of increase or decrease in the summarylevel score from selecting $\hat{s}_t$ Using the shaped reward $r_t$ instead of awarding the whole score $R$ at the last step does not change the optimal policy BIBREF12 We define a discounted future reward for each step as $R_t=\sum _{t^{\prime }=1}^{k}\gamma ^{t^{\prime }}r_{t^{\prime }+1}$ where $\gamma $ is a discount factor Additionally we add a stop action to the action space by concatenating trainable parameters $h_{\text{stop}}$ of the same dimension as $h_i$ to $H$ The agent treats it as another candidate to extract When it selects stop an extracting episode ends and the final return is given This encourages the model to extract additional sentences only when they are expected to increase the final return Following BIBREF8 we use the Advantage Actor Critic BIBREF29 method to train We add a critic network to estimate a value function $V_t(D,\hat{s}_1,\cdots ,\hat{s}_{t-1})$ which then is used to compute the advantage of each action we will omit the current state $(D,\hat{s}_1,\cdots ,\hat{s}_{t-1})$ to simplify notation where $Q_t(s_i)$ is the expected future reward for selecting $s_i$ at the current step $t$ We maximize this advantage with the policy gradient with the Monte-Carlo sample $A_t(s_i)\approx R_t-V_t$ where $\theta _\pi $ denotes the trainable parameters of the actor network the original extractor And the critic is trained to minimize the square loss where $\theta _\psi $ denotes the trainable parameters of the critic network We evaluate the proposed approach on the CNNDaily Mail BIBREF13 and New York Times BIBREF30 datasets which are both standard corpora for multisentence abstractive summarization Additionally we test the generalization of our model on the DUC2002 test set The CNNDaily Mail dataset consists of more than 300K news articles and each of them is paired with several highlights We used the standard splits of BIBREF13 for training validation and testing 90,226/1,220/1,093 documents for CNN and 196,961/12,148/10,397 for Daily Mail We did not anonymize entities We followed the preprocessing methods in BIBREF25 after splitting sentences by Stanford CoreNLP BIBREF31 The New York Times dataset also consists of many news articles We followed the dataset splits of BIBREF14 100834 for training and 9706 for test examples We also followed their filtering procedure removing documents with summaries that are shorter than 50 words The final test set NYT50 contains 3452 examples out of the original 9706 The DUC2002 dataset contains 567 documentsummary pairs for singledocument summarization As a single document can have multiple summaries we made one pair per summary We used this dataset as a test set for our model trained on the CNNDaily Mail dataset to test generalization Our extractor is built on $\text{BERT}_{\text{BASE}}$ with finetuning a smaller version than $\text{BERT}_{\text{LARGE}}$ due to limitations of time and space We set the LSTM hidden size to 256 for all of our models To initialize word embeddings for our abstractor we use word2vec BIBREF32 of 128 dimensions trained on the same corpus We optimize our model with the Adam optimizer BIBREF33 with $\beta _1=0.9$ and $\beta _2=0.999$ For extractor pretraining we use a learning rate schedule following BIBREF21 with warmup set to 10000 And we set the learning rate to 1e-3 for the abstractor and 4e-6 for RL training We apply gradient clipping using the L2 norm with threshold 2.0 For RL training we use $\gamma =0.95$ for the discount factor To ease learning $h_{\text{stop}}$ we set the reward for the stop action to $\lambda \cdot \text{ROUGE-L}^{F_1}_{\text{summ}}(S,A)$ where $\lambda $ is a stop coefficient set to 0.08
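The reward shaping and discounted return described above can be illustrated in a few lines. The scoring function is assumed to be any summary-level ROUGE variant, and the discounting below follows the usual right-to-left recursion as a simplified stand-in for the formula given earlier.

```python
def shaped_rewards(partial_scores):
    """partial_scores[t] is the summary-level score of the summary formed by the
    first t selected sentences (partial_scores[0] = 0.0 for the empty summary).
    The per-step shaped reward is the difference of consecutive scores."""
    return [partial_scores[t] - partial_scores[t - 1]
            for t in range(1, len(partial_scores))]

def discounted_returns(rewards, gamma=0.95):
    """Discounted future return for every step, accumulated right to left."""
    returns, running = [0.0] * len(rewards), 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

scores = [0.0, 0.21, 0.34, 0.33]        # toy partial summary-level ROUGE scores
rewards = shaped_rewards(scores)        # approximately [0.21, 0.13, -0.01]
print(discounted_returns(rewards))
```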
Our critic network shares the encoder with the actor extractor and has the same architecture as the actor except for the output layer which estimates a scalar for the state value And the critic is initialized with the parameters of the pretrained extractor where it has the same architecture We evaluate the performance of our method using different variants of the ROUGE metric computed with respect to the gold summaries On the CNNDaily Mail and DUC2002 datasets we use standard ROUGE1 ROUGE2 and ROUGEL BIBREF34 on full length F1 with stemming as previous work did BIBREF16 BIBREF25 BIBREF8 On the NYT50 dataset following BIBREF14 and BIBREF35 we used the limited length ROUGE recall metric truncating the generated summary to the length of the ground truth summary Table TABREF24 shows the experimental results on the CNNDaily Mail dataset with extractive models in the top block and abstractive models in the bottom block For comparison we list the performance of many recent approaches with ours As BIBREF25 showed the first 3 sentences lead3 in an article form a strong summarization baseline in the CNNDaily Mail dataset Therefore the very first objective of extractive models is to outperform the simple method which always returns 3 or 4 sentences at the top However as Table TABREF27 shows the ROUGE scores of lead baselines and extractors from previous work in the Sentence Rewrite framework BIBREF8 BIBREF15 are almost tied We can easily conjecture that the limited performances of their full models are due to their extractor networks Our extractor network with BERT BERText as a single model outperforms those models by large margins Adding reinforcement learning BERText RL gives higher performance which is competitive with other extractive approaches using pretrained Transformers see Table TABREF24 This shows the effectiveness of our learning method Our abstractive approaches combine the extractor with the abstractor The combined model BERText abs without additional RL training outperforms the Sentence Rewrite model BIBREF8 without reranking showing the effectiveness of our extractor network With the proposed RL training procedure BERText abs RL our model exceeds the best model of BIBREF8 In addition the result is better than those of all the other abstractive methods that exploit extractive approaches BIBREF9 BIBREF8 BIBREF10 Although the proposed RL training inherently gives training signals that induce the model to avoid redundancy across sentences there can still be remaining overlaps between extracted sentences We found that additional methods for reducing redundancy can improve the summarization quality especially on the CNNDaily Mail dataset We tried Trigram Blocking BIBREF1 for the extractor and Reranking BIBREF8 for the abstractor and we empirically found that only the reranking improves the performance This helps the model to compress the extracted sentences focusing on disjoint information even if there are some partial overlaps between the sentences Our best abstractive model BERText abs RL rerank achieves the new stateoftheart performance for abstractive summarization in terms of average ROUGE score with large margins on ROUGEL However we empirically found that the reranking method has no effect or has a negative effect on the NYT50 or DUC2002 datasets Hence we do not apply it to the remaining datasets Before seeing the effects of our summarylevel rewards on final results we check the upper bounds of different training signals for the full model All the document sentences are paraphrased with our trained abstractor and then we find the
best set for each search method Sentencematching finds sentences with the highest ROUGEL score for each sentence in the gold summary This search method matches with the best reward from BIBREF8 Greedy Search is the same method explained for extractor pretraining in section SECREF11 Combination Search selects a set of sentences which has the highest summarylevel ROUGEL score from all the possible combinations of sentences Due to time constraints we limited the maximum number of sentences to 5 This method corresponds to our final return in RL training Table TABREF31 shows the summarylevel ROUGE scores of previously explained methods We see considerable gaps between Sentencematching and Greedy Search while the scores of Greedy Search are close to those of Combination Search Note that since we limited the number of sentences for Combination Search the exact scores for it would be higher The scores can be interpreted to be upper bounds for corresponding training methods This result supports our training strategy pretraining with Greedy Search and final optimization with the combinatorial return Additionally we experiment to verify the contribution of our training method We train the same model with different training signals Sentencelevel reward from BIBREF8 and combinatorial reward from ours The results are shown in Table TABREF34 Both with and without reranking the models trained with the combinatorial reward consistently outperform those trained with the sentencelevel reward We also conduct human evaluation to ensure robustness of our training procedure We measure relevance and readability of the summaries Relevance is based on the summary containing important salient information from the input article being correct by avoiding contradictoryunrelated information and avoiding repeatedredundant information Readability is based on the summarys fluency grammaticality and coherence To evaluate both these criteria we design a Amazon Mechanical Turk experiment based on ranking method inspired by BIBREF36 We randomly select 20 samples from the CNNDaily Mail test set and ask the human testers 3 for each sample to rank summaries for relevance and readability produced by 3 different models our final model that of BIBREF8 and that of BIBREF1 2 1 and 0 points were given according to the ranking The models were anonymized and randomly shuffled Following previous work the input article and ground truth summaries are also shown to the human participants in addition to the three model summaries From the results shown in Table TABREF36 we can see that our model is better in relevance compared to others In terms of readability there was no noticeable difference Table TABREF38 gives the results on NYT50 dataset We see our BERText abs RL outperforms all the extractive and abstractive models except ROUGE1 from BIBREF1 Comparing with two recent models that adapted BERT on their summarization models BIBREF1 BIBREF4 we can say that we proposed another method successfully leveraging BERT for summarization In addition the experiment proves the effectiveness of our RL training with about 2 point improvement for each ROUGE metric We also evaluated the models trained on the CNNDaily Mail dataset on the outofdomain DUC2002 test set as shown in Table TABREF41 BERText abs RL outperforms baseline models with large margins on all of the ROUGE scores This result shows that our model generalizes better There has been a variety of deep neural network models for abstractive document summarization One of the most dominant 
structures is the sequencetosequence seq2seq models with attention mechanism BIBREF37 BIBREF38 BIBREF39 BIBREF25 introduced Pointer Generator network that implicitly combines the abstraction with the extraction using copy mechanism BIBREF40 BIBREF41 More recently there have been several studies that have attempted to improve the performance of the abstractive summarization by explicitly combining them with extractive models Some notable examples include the use of inconsistency loss BIBREF9 key phrase extraction BIBREF42 BIBREF10 and sentence extraction with rewriting BIBREF8 Our model improves Sentence Rewriting with BERT as an extractor and summarylevel rewards to optimize the extractor Reinforcement learning has been shown to be effective to directly optimize a nondifferentiable objective in language generation including text summarization BIBREF43 BIBREF27 BIBREF35 BIBREF44 BIBREF11 BIBREF27 use actorcritic methods for language generation using reward shaping BIBREF12 to solve the sparsity of training signals Inspired by this we generalize it to sentence extraction to give per step reward preserving optimality We have improved Sentence Rewriting approaches for abstractive summarization proposing a novel extractor architecture exploiting BERT and a novel training procedure which globally optimizes summarylevel ROUGE metric Our approach achieves the new stateoftheart on both CNNDaily Mail and New York Times datasets as well as much better generalization on DUC2002 test set We thank anonymous reviewers for their constructive and fruitful comments This work was supported by the National Research Foundation of Korea NRF grant funded by the Korea government MSIT NRF2016M3C4A7952587 </s>
<s>[INST] Contextual Out-of-Domain Utterance Handling With Counterfeit Data Augmentation [/INST] Recently there has been a surge of excitement in developing chatbots for various purposes in research and enterprise Datadriven approaches offered by common bot building platforms eg Google Dialogflow Amazon Alexa Skills Kit Microsoft Bot Framework make it possible for a wide range of users to easily create dialog systems with a limited amount of data in their domain of interest Although most taskoriented dialog systems are built for a closed set of target domains any failure to detect outofdomain OOD utterances and respond with an appropriate fallback action can lead to frustrating user experience There have been a set of prior approaches for OOD detection which require both indomain IND and OOD data BIBREF0 BIBREF1 However it is a formidable task to collect sufficient data to cover in theory unbounded variety of OOD utterances In contrast BIBREF2 introduced an indomain verification method that requires only IND utterances Later with the rise of deep neural networks BIBREF3 proposed an autoencoderbased OOD detection method which surpasses prior approaches without access to OOD data However those approaches still have some restrictions such that there must be multiple subdomains to learn utterance representation and one must set a decision threshold for OOD detection This can prohibit these methods from being used for most bots that focus on a single task The goal of this paper is to propose a novel OOD detection method that does not require OOD data by utilizing counterfeit OOD turns in the context of a dialog Most prior approaches do not consider dialog context and make predictions for each utterance independently We will show that this independent decision leads to suboptimal performance even when actual OOD utterances are given to optimize the model and that the use of dialog context helps reduce OOD detection errors To consider dialog context we need to connect the OOD detection task with the overall dialog task Thus for this work we build upon Hybrid Code Networks HCN BIBREF4 since HCNs achieve stateoftheart performance in a dataefficient way for taskoriented dialogs and propose AEHCNs which extend HCNs with an autoencoder Figure FIGREF8 Furthermore we release new dialog datasets which are three publicly available dialog corpora augmented with OOD turns in a controlled way exemplified in Table TABREF2 to foster further research In this section we first present the standard HCN model Then we introduce the proposed AEHCNCNN model consisting of an autoencoder and a reconstruction scoreaware HCN model Finally we describe the counterfeit data augmentation method for training the proposed model As shown in Figure FIGREF8 HCN considers a dialog as a sequence of turns At each turn HCN takes a tuple INLINEFORM0 as input to produce the next system action INLINEFORM1 where INLINEFORM2 is a user utterance consisting of INLINEFORM3 tokens ie INLINEFORM4 INLINEFORM5 a onehot vector encoding the previous system action and INLINEFORM6 a contextual feature vector generated by domainspecific code The user utterance is encoded as a concatenation of a bagofwords representation and an average of word embeddings of the user utterance DISPLAYFORM0 where INLINEFORM0 denotes a word embedding layer initialized with GloVe BIBREF5 with 100 dimensions HCN then considers the input tuple INLINEFORM1 to update the dialog state through an LSTM BIBREF6 with 200 hidden units DISPLAYFORM0 Finally a distribution over system 
actions is calculated by a dense layer with a softmax activation DISPLAYFORM0 On top of HCN AEHCN additionally takes as input an autoencoders reconstruction score INLINEFORM0 for the user utterance for dialog state update Figure FIGREF8 DISPLAYFORM0 The autoencoder is a standard seq2seq model which projects a user utterance into a latent vector and reconstructs the user utterance Specifically the encoder reads INLINEFORM0 using a GRU BIBREF7 to produce a 512dimensional hidden vector INLINEFORM1 which in turn gets linearly projected to a 200dimensional latent vector INLINEFORM2 DISPLAYFORM0 DISPLAYFORM1 The output of the decoder at step INLINEFORM0 is a distribution over words DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 has 512 hidden units The reconstruction score INLINEFORM1 is the normalized generation probability of INLINEFORM2 DISPLAYFORM0 AEHCNCNN is a variant of AEHCN where user utterances are encoded using a CNN layer with maxpooling following BIBREF8 rather than equation EQREF5 DISPLAYFORM0 The CNN layer considers two kernel sizes 2 and 3 and has 100 filters for each kernel size To endow an AEHCNCNN model with a capability of detecting OOD utterances and producing fallback actions without requiring real OOD data we augment training data with counterfeit turns We first select arbitrary turns in a dialog at random according to a counterfeit OOD probability INLINEFORM0 and insert counterfeit turns before the selected turns A counterfeit turn consists of a tuple INLINEFORM1 as input and a fallback action INLINEFORM2 as output We copy INLINEFORM3 and INLINEFORM4 of each selected turn to the corresponding counterfeit turns since OOD utterances do not affect previous system action and feature vectors generated by domainspecific code Now we generate a counterfeit INLINEFORM5 and INLINEFORM6 Since we dont know OOD utterances a priori we randomly choose one of the user utterances of the same dialog to be INLINEFORM7 This helps the model learn to detect OOD utterances because a random user utterance is contextually inappropriate just like OOD utterances are We generate INLINEFORM8 by drawing a sample from a uniform distribution INLINEFORM9 where INLINEFORM10 is the maximum reconstruction score of training data and INLINEFORM11 is an arbitrary large number The rationale is that the reconstruction scores of OOD utterances are likely to be larger than INLINEFORM12 but we dont know what distribution the reconstruction scores of OOD turns would follow Thus we choose the most uninformed distribution ie a uniform distribution so that the model may be encouraged to consider not only reconstruction score but also other contextual features such as the appropriateness of the user utterance given the context changes in the domainspecific feature vector and what action the system previously took To study the effect of OOD input on dialog systems performance we use three taskoriented dialog datasets bAbI6 BIBREF9 initially collected for Dialog State Tracking Challenge 2 BIBREF10 GR and GM taken from Google multidomain dialog datasets BIBREF11 Basic statistics of the datasets are shown in Table TABREF22 bAbI6 deals with restaurant finding tasks GM buying a movie ticket and GR reserving a restaurant table respectively We generated distinct action templates by replacing entities with slot types and consolidating based on dialog act annotations We augment test datasets denoted as TestOOD in Table TABREF22 with real user utterances from other domains in a controlled way Our OOD augmentations are as follows 
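Stepping back to the counterfeit-turn generation for training described earlier in this section, the sketch below illustrates that procedure. The dialog and turn field names, the fallback label, and the exact parameterization of the uniform distribution are simplifying assumptions rather than the paper's actual data format.

```python
import random

def add_counterfeit_turns(dialog, p_ood=0.15, max_train_score=1.0, upper=30.0):
    """Insert counterfeit OOD turns before randomly selected turns of a training
    dialog. Each turn is assumed to be a dict with the keys 'utterance',
    'prev_action', 'features' and 'action'; the counterfeit turn copies the
    previous system action and feature vector, reuses a random user utterance
    from the same dialog, draws a reconstruction score uniformly between the
    maximum training-data score and an arbitrary large upper bound, and is
    labeled with the fallback action."""
    augmented = []
    for turn in dialog:
        if random.random() < p_ood:
            augmented.append({
                "utterance": random.choice(dialog)["utterance"],
                "prev_action": turn["prev_action"],
                "features": turn["features"],
                "recon_score": random.uniform(max_train_score, upper),
                "action": "FALLBACK",
            })
        augmented.append(turn)
    return augmented
```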
These two augmentation types reflect a specific dialog pattern of interest see Table TABREF2 first the user utters a request from another domain at an arbitrary point in the dialog each turn is augmented with the probability INLINEFORM0 which is set to 02 for this study and the system answers accordingly This may go on for several turns in a row each following turn is augmented with the probability INLINEFORM1 which is set to 04 for this study Eventually the OOD sequence ends up and the dialog continues as usual with a segmentlevel OOD content of the user affirming their mistake While we introduce the OOD augmentations in a controlled programmatic way the actual OOD content is natural The OOD utterances are taken from dialog datasets in several foreign domains 1 Frames dataset BIBREF12 travel booking 1198 utterances 2 Stanford KeyValue Retrieval Network Dataset BIBREF13 calendar scheduling weather information retrieval city navigation 3030 utterances 3 Dialog State Tracking Challenge 1 BIBREF14 bus information 968 utterances In order to avoid incompleteelliptical phrases we only took the first users utterances from the dialogs For segmentlevel OOD content we mined utterances with the explicit affirmation of a mistake from Twitter and Reddit conversations datasets 701 and 500 utterances respectively We comparatively evaluate four different models 1 an HCN model trained on indomain training data 2 an AEHCNIndep model which is the same as the HCN model except that it deals with OOD utterances using an independent autoencoderbased rule to mimic BIBREF3 when the reconstruction score is greater than a threshold the fallback action is chosen we set the threshold to the maximum reconstruction score of training data 3 an AEHCNCNN model trained on training data augmented with counterfeit OOD turns the counterfeit OOD probability INLINEFORM0 is set to 15 and INLINEFORM1 to 30 We apply dropout to the user utterance encoding with the probability 03 We use the Adam optimizer BIBREF15 with gradients computed on minibatches of size 1 and clipped with norm value 5 The learning rate was set to INLINEFORM2 throughout the training and all the other hyperparameters were left as suggested in BIBREF15 We performed early stopping based on the performance of the evaluation data to avoid overfitting We first pretrain the autoencoder on indomain training data and keep it fixed while training other components The result is shown in Table TABREF23 Since there are multiple actions that are appropriate for a given dialog context we use perutterance PrecisionK as performance metric We also report f1score for OOD detection to measure the balance between precision and recall The performances of HCN on TestOOD are about 15 points down on average from those on Test showing the detrimental impact of OOD utterances to such models only trained on indomain training data AEHCNCNN outperforms HCN on TestOOD by a large margin about 1720 points on average while keeping the minimum performance tradeoff compared to Test Interestingly AEHCNCNN has even better performance than HCN on Test indicating that with the CNN encoder counterfeit OOD augmentation acts as an effective regularization In contrast AEHCNIndep failed to robustly detect OOD utterances resulting in much lower numbers for both metrics on TestOOD as well as hurting the performance on Test This result indicates two crucial points 1 the inherent difficulty of finding an appropriate threshold value without actually seeing OOD data 2 the limitation of the models which do not 
consider context For the first point Figure FIGREF24 plots histograms of reconstruction scores for IND and OOD utterances of bAbI6 TestOOD If OOD utterances had been known a priori the threshold should have been set to a much higher value than the maximum reconstruction score of IND training data 616 in this case For the second point Table TABREF25 shows the search for the best threshold value for AEHCNIndep on the bAbI6 task when given actual OOD utterances which is highly unrealistic for the realworld scenario Note that the best performance achieved at 9 is still not as good as that of AEHCNCNN This implies that we can perform better OOD detection by jointly considering other context features Finally we conduct a sensitivity analysis by varying counterfeit OOD probabilities Table TABREF26 shows performances of AEHCNCNN on bAbI6 TestOOD with different INLINEFORM0 values ranging from 5 to 30 The result indicates that our method manages to produce good performance without regard to the INLINEFORM1 value This superior stability nicely contrasts with the high sensitivity of AEHCNIndep with regard to threshold values as shown in Table TABREF25 We proposed a novel OOD detection method that does not require OOD data without any restrictions by utilizing counterfeit OOD turns in the context of a dialog We also release new dialog datasets which are three publicly available dialog corpora augmented with natural OOD turns to foster further research In the presence of OOD utterances our method outperforms stateoftheart dialog models equipped with an OOD detection mechanism by a large margin more than 17 points in PrecisionK on average while minimizing performance tradeoff on indomain test data The detailed analysis sheds light on the difficulty of optimizing contextindependent OOD detection and justifies the necessity of contextaware OOD handling models We plan to explore other ways of scoring OOD utterances than autoencoders For example variational autoencoders or generative adversarial networks have great potential We are also interested in using generative models to produce more realistic counterfeit user utterances </s>
<s>[INST] Efficient keyword spotting using dilated convolutions and gating [/INST] Keyword spotting KWS aims at detecting a predefined keyword or set of keywords in a continuous stream of audio In particular wakeword detection is an increasingly important application of KWS used to initiate an interaction with a voice interface In practice such systems run on lowresource devices and listen continuously for a specific wake word An effective ondevice KWS therefore requires realtime response and high accuracy for a good user experience while limiting memory footprint and computational cost Traditional approaches in keyword spotting tasks involve Hidden Markov Models HMMs for modeling both keyword and background BIBREF0 BIBREF1 BIBREF2 In recent years Deep Neural Networks DNNs have proven to yield efficient smallfootprint solutions as shown first by the fullyconnected networks introduced in BIBREF3 More advanced architectures have been successfully applied to KWS problems such as Convolutional Neural Networks CNNs exploiting local dependencies BIBREF4 BIBREF5 They have demonstrated efficiency in terms of inference speed and computational cost but fail at capturing large patterns with reasonably small models Recent works have suggested RNN based keyword spotting using LSTM cells that can leverage longer temporal context using gating mechanism and internal states BIBREF6 BIBREF7 BIBREF8 However because RNNs may suffer from state saturation when facing continuous input streams BIBREF9 their internal state needs to be periodically reset In this work we focus on endtoend stateless temporal modeling which can take advantage of a large context while limiting computation and avoiding saturation issues By endtoend model we mean a straightforward model with a binary target that does not require a precise phoneme alignment beforehand We explore an architecture based on a stack of dilated convolution layers effectively operating on a broader scale than with standard convolutions while limiting model size We further improve our solution with gated activations and residual skipconnections inspired by the WaveNet style architecture explored previously for texttospeech applications BIBREF10 and voice activity detection BIBREF9 but never applied to KWS to our knowledge In BIBREF11 the authors explore Deep Residual Networks ResNets for KWS ResNets differ from WaveNet models in that they do not leverage skipconnections and gating and apply convolution kernels in the frequency domain drastically increasing the computational cost In addition the longterm dependency our model can capture is exploited by implementing a custom endofkeyword target labeling increasing the accuracy of our model A maxpooling loss trained LSTM initialized with a crossentropy pretrained network is chosen as a baseline as it is one of the most effective models taking advantage of longer temporal contexts BIBREF7 The rest of the paper is organized in two main parts Section System description describes the different components of our model as well as our labeling Section Experiments focuses on the experimental setup and performance results obtained on a publicly available Hey Snips dataset The acoustic features are 20dimensional logMel filterbank energies LFBEs extracted from the input audio every 10ms over a window of 25ms A binary target is used see Section Endofkeyword labeling for more details about labeling During decoding the system computes smoothed posteriors by averaging the output of a sliding context window containing wsmooth 
frames a parameter chosen after experimental tuning Endtoend models such as the one presented here do not require any postprocessing step besides smoothing as opposed to multiclass models such as BIBREF3 BIBREF4 Indeed the system triggers when the smoothed keyword posterior exceeds a predefined threshold WaveNet was initially proposed in BIBREF10 as a generative model for speech synthesis and other audio generation tasks It consists in stacked causal convolution layers wrapped in a residual block with gated activation units as depicted in Figure 1 Standard convolutional networks cannot capture long temporal patterns with reasonably small models due to the increase in computational cost yielded by larger receptive fields Dilated convolutions skip some input values so that the convolution kernel is applied over a larger area than its own The network therefore operates on a larger scale without the downside of increasing the number of parameters The receptive field r of a network made of stacked convolutions indeed reads r sum i di si 1 where di refers to the dilation rate di1 for normal convolutions and si the filter size of the ith layer Additionally causal convolutions kernels ensure a causal ordering of input frames the prediction emitted at time t only depends on previous time stamps It allows to reduce the latency at inference time As mentioned in BIBREF10 gated activations units a combination of tanh and sigmoid activations controlling the propagation of information to the next layer prove to efficiently model audio signals Residual learning strategies such as skip connections are also introduced to speed up convergence and address the issue of vanishing gradients posed by the training of models of higher depth Each layer yields two outputs one is directly fed to the next layer as usual but the second one skips it All skipconnections outputs are then summed into the final output of the network A large temporal dependency can therefore be achieved by stacking multiple dilated convolution layers By inserting residual connections between each layer we are able to train a network of 24 layers on relatively small amount of data which corresponds to a receptive field of 182 frames or 183s The importance of gating and residual connections is analyzed in Section 332 In addition to reducing the model size dilated convolutions allow the network to run in a streaming fashion during inference drastically reducing the computational cost When receiving a new input frame the corresponding posteriors are recovered using previous computations kept in memory for efficiency purposes as described in Figure 2 This cached implementation allows to reduce the amount of Floating Point Operations per Second FLOPS to a level suiting production requirements Our approach consists in associating a target 1 to frames within a given time interval Delta t before and after the end of the keyword The optimal value for Delta t is tuned on the dev set Additionally a masking scheme is applied discarding background frames outside of the labeling window in positive samples A traditional labeling approach however associates a target 1 to all frames aligned with the keyword In this configuration the model has a tendency to trigger as soon as the keyword starts whether or not the sample contains only a fraction of the keyword One advantage of our approach is that the network will trigger near the end of keyword once it has seen enough context Moreover our labeling does not need any phoneme alignment but only to detect the end 
of the keyword which is easily obtained with a VAD system Furthermore thanks to masking the precise frontiers of the labeling window are not learned making the network more robust to labeling imprecisions The relative importance of endofkeyword labeling and masking are analyzed in Section UID18 The proposed approach is evaluated on a crowdsourced closetalk dataset The chosen keyword is Hey Snips pronounced with no pause between the two words The dataset contains a large variety of English accents and recording environments Around 11K wake word utterances and 865K sim 96 hours negative examples have been recorded see Table 1 for more details Note that negative samples have been recorded in the same conditions than wakeword utterances therefore arising from the same domain speaker hardware environment etc It thus prevents the model from discerning the two classes based on their domaindependent acoustic features Positive data has been cleaned by automatically removing samples of extreme duration or samples with repeated occurrences of the wake word Positive dev and test sets have been manually cleaned to discard any mispronunciations of the wake word eg Hi Snips or Hey Snaips leaving the training set untouched Noisy conditions are simulated by augmenting samples with music and noise background audio from Musan BIBREF12 The positive dev and test datasets are augmented at 5dB of Signaltonoise Ratio SNR The full dataset and its metadata are available for research purposes Although some keyword spotting datasets are freely available such as the Speech Commands dataset BIBREF13 for voice commands classification there is no equivalent in the specific wakeword detection field By establishing an open reference for wakeword detection we hope to contribute to promote transparency and reproducibility in a highly concurrent field where datasets are often kept private The network consists in an initial causal convolution layer filter size of 3 and 24 layers of gated dilated convolutions filter size of 3 The 24 dilation rates are a repeating sequence of lbrace 1 2 4 8 1 2 4 8rbrace Residual connections are created between each layer and skip connections are accumulated at each layer and are eventually fed to a DNN followed by a softmax for classification as depicted in Figure 1 We used projection layers of size 16 for residual connections and of size 32 for skip connections The optimal duration of the endofkeyword labeling interval as defined in Section Endofkeyword labeling is Delta t 160ms 15 frames before and 15 frames after the end of the keyword The posteriors are smoothed over a sliding context window of wsmooth30 frames also tuned on the dev set The main baseline model is a LSTM trained with a maxpooling based loss initialized with a crossentropy pretrained network as it is another example of endtoend temporal model BIBREF7 The idea of the maxpooling loss is to teach the network to fire at its highest confidence time by backpropagating loss from the most informative keyword frame that has the maximum posterior for the corresponding keyword More specifically the network is a single layer of unidirectional LSTM with 128 memory blocks and a projection layer of dimension 64 following a similar configuration to BIBREF7 but matching the same number of parameters than the proposed architecture see Section UID15 10 frames in the past and 10 frames in the future are stacked to the input frame Standard frame labeling is applied but with the frame masking strategy described in Section Endofkeyword labeling The 
authors of BIBREF7 mentioned backpropagating loss only from the last few frames but said that the LSTM network performed poorly in this setting The same smoothing strategy is applied on an window wsmooth8 frames after tuning on dev data For comparison we also add as a CNN variant the base architecture tradfpool3 from BIBREF4 a multiclass model with 4 output labels hey sni ps and background Among those proposed in BIBREF4 this is the architecture with the lowest amount of FLOPS while having a similar number of parameters as the two other models studied here see Section UID15 The Adam optimization method is used for the three models with a learning rate of 103 for the proposed architecture 104 for the CNN and 5 cdot 105 for the LSTM baseline Additionally gradient norm clipping to 10 is applied A scaled uniform distribution for initialization BIBREF14 or Xavier initialization yielded the best performance for the three models We also note that the LSTM network is much more sensitive to the chosen initialization scheme The performance of the three models is first measured by observing the False Rejection Rate FRR on clean and noisy 5dB SNR positives samples at the operating threshold of 05 False Alarms per Hour FAH computed on the collected negative data Hyper parameters are tuned on the dev set and results are reported on the test set Table 2 displays these quantities as well as the number of parameters and multiplications per second performed during inference The proposed architecture yields a lower FRR than the LSTM resp CNN baseline with a 94 resp 95 and 86 resp 88 decrease in clean and noisy conditions The number of parameters is similar for the three architectures but the amount of FLOPS is higher by an order of magnitude for the CNN baseline while resulting in a poorer FRR in a noisy environment Figure 3 provides the Detection Error Tradeoff DET curves and shows that the WaveNet model also outperforms the baselines on a whole range of triggering thresholds To assess the relative importance of some characteristics of the proposed architecture we study the difference in FRR observed once each of them is removed separately all things being equal Table 3 shows that the endofkeyword labeling is particularly helpful in improving the FRR at a fixed FAH especially in noisy conditions Masking background frames in positive samples also helps but in a lower magnitude Similarly to what is observed in BIBREF9 gating contributes to improving the FRR especially in noisy conditions We finally observed that removing either residual or skip connections separately has little effect on the performance However we could not properly train the proposed model without any of these connections It seems to confirm that implementing at least one bypassing strategy is key for constructing deeper network architectures This paper introduces an endtoend stateless modeling for keyword spotting based on dilated convolutions coupled with residual connections and gating encouraged by the success of the WaveNet architecture in audio generation tasks BIBREF10 BIBREF9 Additionally a custom frame labeling is applied associating a target 1 to frames located within a small time interval around the end of the keyword The proposed architecture is compared against a LSTM baseline similar to the one proposed in BIBREF7 Because of their binary targets both the proposed model and the LSTM baseline do not require any phoneme alignment or postprocessing besides posterior smoothing We also added a multiclass CNN baseline BIBREF4 for 
comparison We have shown that the presented WaveNet model significantly reduces the false rejection rate at a fixed false alarm rate of 05 per hour in both clean and noisy environments on a crowdsourced dataset made publicly available for research purposes The proposed model seems to be very efficient in the specific domain defined by this dataset and future work will focus on domain adaptation in terms of recording hardware accents or farfield settings to be deployed easily in new environments We thank Oleksandr Olgashko for his contribution in developing the training framework We are grateful to the crowd of contributors who recorded the dataset We are indebted to the users of the Snips Voice Platform for valuable feedback </s>
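As a quick sanity check on the receptive-field arithmetic used above, the following Python snippet evaluates r = sum_i d_i (s_i - 1) for the reported configuration (an initial causal convolution of filter size 3 plus 24 dilated layers of filter size 3 whose dilation rates cycle through 1, 2, 4, 8). It is an independent re-computation, not the authors' code; it recovers the 182 frames quoted in the paper, and the 1.83 s figure presumably counts the current frame as well (183 x 10 ms).

def receptive_field(filter_sizes, dilations):
    # r = sum_i d_i * (s_i - 1), the receptive field of stacked dilated convolutions
    return sum(d * (s - 1) for s, d in zip(filter_sizes, dilations))

# initial causal convolution (dilation 1) + 24 dilated layers cycling 1, 2, 4, 8
dilations = [1] + [1, 2, 4, 8] * 6
filter_sizes = [3] * len(dilations)

frames = receptive_field(filter_sizes, dilations)
print(frames)          # 182 frames
print(frames * 0.010)  # ~1.82 s at a 10 ms frame shift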
<s>[INST] An Open-World Extension to Knowledge Graph Completion Models [/INST] Knowledge graphs are a vital source for disambiguation and discovery in various tasks such as question answering BIBREF0 information extraction BIBREF1 and search BIBREF2 They are however known to suffer from data quality issues BIBREF3 Most prominently since formal knowledge is inherently sparse relevant facts are often missing from the graph To overcome this problem knowledge graph completion KGC or link prediction strives to enrich existing graphs with new facts Formally a knowledge graph mathcal G subset E times R times E consists of facts or triples headreltail where E and R denote finite sets of entities and relations respectively Knowledge graph completion is targeted at assessing the probability of triples not present in the graph To do so a common approach involves representing the entities and relations in triples using realvalued vectors called embeddings The probability of the triple is then inferred by geometric reasoning over the embeddings Embeddings are usually generated by learning to discriminate real triples from randomly corrupted ones BIBREF4 BIBREF5 BIBREF6 A key problem with most existing approaches is that the plausibility of links can be determined for known entities only For many applications however it is of interest to infer knowledge about entities not present in the graph Imagine answering the question What is German actress Julia Lindig known for where Julia Lindig is not a known entity Here information can be inferred from the question typically using word embeddings BIBREF7 BIBREF8 BIBREF9 Similar to entity embeddings these techniques represent words with embedding vectors These can be pretrained on text corpora thereby capturing word similarity and semantic relations which may help to predict the plausibility of the triple JuliaLindig starredin LolaRennt This challenge is known as openworld or zeroshot KGC To the best of our knowledge few openworld KGC models have been proposed so far all of which are full replacements for regular KGC models and require textual descriptions for all entities BIBREF10 BIBREF11 In this paper we suggest a different approach namely to extend existing KGC models with pretrained word embeddings Given an new entity we aggregate its name and description into a textbased entity representation We then learn a transformation from textbased embedding space to graphbased embedding space where we can now apply the graphbased model for predicting links We show that this simple approach yields competitive results and offers two key benefits First it is independent of the specific KGC model used which allows us to use multiple different link prediction models from which we can pick the best one Second as training on the graph structure happens independently from training on text our approach can exploit the fullscale knowledge graph structure in situations where textual information is scarce because learning the transformation is robust even for such situations We coin our approach OWE for Open World Extension and combine it with several common KGC models obtaining TransEOWE DistMultOWE and ComplExOWE We demonstrate competitive results on common datasets for openworld prediction and also introduce a new dataset called FB15k237OWE which avoids bias towards long textual descriptions and trivial regularities like inverse relations The code and the new FB15k237OWE dataset are available online Our approach starts with a regular link prediction model in the following 
also referred to as the graphbased model as outlined in Section Related Work and visualised in Fig 1 The model scores triples $(h, r, t)$ as $\mathit{score}(h,r,t) = \phi(u_h, u_r, u_t)$ Eq 6 where $u_x$ denotes the embedding of entityrelation $x$ Typically $u_x \in \mathbb{R}^d$ but other options are possible For example in ComplEx BIBREF6 $u_x$ is complexvalued $u_x \in \mathbb{C}^d$ $\phi$ is a scoring function that depends on the link prediction model and will be addressed in more detail in Section Link Prediction Models Since our approach is independent of the specific link prediction model used we test three commonly used models in this work TransE $\phi(u_h, u_r, u_t) = -\Vert u_h + u_r - u_t \Vert_2$ DistMult $\phi(u_h, u_r, u_t) = \langle u_h, u_r, u_t \rangle$ ComplEx $\phi(u_h, u_r, u_t) = \mathrm{Re}(\langle u_h, u_r, \overline{u_t} \rangle)$ Note that the first two use realvalued embeddings while ComplEx uses complexvalued embeddings where $\overline{u} = \mathrm{Re}(u) - i \cdot \mathrm{Im}(u)$ denotes the complex conjugate of embedding $u$ All models are trained using their original loss functions and validated using closedworld validation data We use pretrained word embeddings trained on large text corpora Since the number of entities in the datasets used is limited and we found overfitting to be an issue we omit any refinement of the embeddings We tested 200dimensional Glove embeddings BIBREF8 and 300dimensional Wikipedia2Vec embeddings BIBREF18 Note that Wikipedia2Vec embeddings contain phrase embeddings which we use as an embedding for entity names like Julia Lindig If no phrase embedding is available we split the name into single tokens and use tokenwise embeddings If no embedding is available for a token we use a vector of zeros as an unknown token To aggregate word embeddings to an entity embedding function $\Psi^{agg}$ Equation 11 approaches in the literature range from simple averaging BIBREF8 over Long Short Term Memory Networks LSTMs BIBREF23 to relationspecific masking BIBREF11 We use averaging as an aggregation function Here the word embedding vectors are averaged to obtain a single representative embedding To prevent overfitting we apply dropout during training ie embeddings of some words are randomly replaced by the unknown token before averaging The key to openworld prediction is the mapping from textbased entity embeddings $v_e$ to graphbased ones $u_e$ Several different transformation functions $\Psi^{map}$ can be learned for this task In this paper we discuss three options A simple linear function $\Psi^{map}(v) = A \cdot v$ For ComplEx separate matrices are used for the real and imaginary part $\Psi^{map}(v) = A \cdot v + i \cdot A^{\prime} \cdot v$ Here $\Psi^{map}$ is an affine function $\Psi^{map}(v) = A \cdot v + b$ For ComplEx separate matrices and vectors are trained just like above $\Psi^{map}(v) = A \cdot v + b + i \cdot (A^{\prime} \cdot v + b^{\prime})$ $\Psi^{map}$ is a four layer MultiLayer Perceptron MLP with ReLU activation functions The output layer is affine We did not perform an extensive hyperparameter search here To train the transformations first a link prediction model is trained on the full graph obtaining entity embeddings $u_1 \dots u_n$ We then choose all entities $e_{i_1} \dots e_{i_m}$ with textual metadata names andor descriptions and extract textbased embeddings $v_{i_1} \dots v_{i_m}$ for them using aggregation see above Finally $\Psi^{map}$ is learned by minimizing the loss function $L(\Theta) = \sum_{k=1}^{m} \big\Vert \Psi^{map}_{\Theta}(v_{i_k}) - u_{i_k} \big\Vert^2$ Eq 22 using batched stochastic gradient descent where $\Theta$ denotes the parameters of $\Psi^{map}$ eg the weight matrices and bias vectors $A, b$ For ComplEx the above loss is summed for real and imaginary parts and training happens on the sum We apply no finetuning neither on the graph nor on the
text embeddings In this section we study the impact of our models parameters Psi agg Psi map text embeddings on prediction performance We also provide mappings of selected openworld entities and compare our results with the stateoftheart Closedworld KGC tasks are commonly evaluated on WordNet and Freebase subsets such as WN18 WN18RR FB15k and FB15k237 For openworld KGC the following datasets have been suggested BIBREF10 introduced FB20k which builds upon the FB15k dataset by adding test triples with unseen entities which are selected to have long textual descriptions BIBREF11 introduced DBPedia50k and DBPedia500k datasets for both openworld and closedworld KGC tasks However the above datasets display a bias towards long textual descriptions DBpedia50k has an average description length of 454 words FB20k of 147 words Also for neither of the datasets precautions have been taken to avoid redundant inverse relations which allows models to exploit trivial patterns in the data BIBREF24 To overcome these problems we introduce a new dataset named FB15k237OWE FB15k237OWE is based on the wellknown FB15K237 dataset where redundant inverse relations have been removed Also we avoid a bias towards entities with longer textual descriptions Test entities are uniformly sampled from FB15K237 and only short Wikidata descriptions 5 words on average are used In the following section the sampling strategy for FB15k237OWE is briefly outlined For tail prediction test set we start with FB15K237 and randomly pick heads by uniform sampling over all head entities Each picked head x is removed from the training graph by moving all triples of the form xt to the test set and dropping all triples of the form x if t still remains in the training set after these operations Similarly a head prediction test set is prepared from the set of dropped triplets which satisfy the conditions to be in head prediction test set ie head must be represented in training set while tail must not be represented The dataset also contains two validation sets A closedworld one with random triples picked from the training set and an openworld one with random triples picked from the test set We evaluate our approach on DBPedia50k FB20k and the new dataset FB15k237OWE Statistics of the datasets are highlighted in Table 1 and Table 2 We perform multiple runs using different KGC models transformation types training data and embeddings used For each run both KGC model and transformation Psi map are trained on the training set the KGC model without using any textual information and the transformation using entity names and descriptions We manually optimize all hyperparameters on the validation set Due to the lack of an openworld validation set on FB20k we randomly sampled 10 of the test triples as a validation set Performance figures are computed using tail prediction on the test sets For each test triple hrt with openworld head h notin E we rank all known entities tprime in E by their score phi hrtprime We then evaluate the ranks of the target entities t with the commonly used mean rank MR mean reciprocal rank MRR as well as Hits1 Hits3 and Hits10 Note that multiple triples with the same head and relation but different tails may occur in the dataset hrt1hrtp Following BIBREF13 when evaluating triple hrti we remove all entities t1ti1ti1tp from the result list All results except MRRraw are reported with this filtered approach Note also that when computing the MRR given a triple hrti only the reciprocal rank of ti itself is evaluated and not the best 
out of t1titp which would give better results This is common when evaluating KGC models BIBREF13 but differs from ConMasks evaluation code which is why one result in Table 3 differs from BIBREF11 see the mark Note also that BIBREF11 add a second filtering method called target filtering When evaluating a test triple hrt tails tprime are only included in the ranked result list if a triple of the form rtprime exists in the training data otherwise it is skipped We found this to improve quantitative results substantially but it limits the predictive power of the model because tails can never be linked via new relations Therefore we use target filtering only when comparing with the ConMask and DKRL models from BIBREF11 Table 3 For training TransE and DistMult we use the OpenKE framework which provides implementations of many common link prediction models For closedworld graph embedding we use both OpenKE and our own implementation after validating the equivalence of both For training the transformation Psi map we used the Adam optimizer with a learning rate of 103 and batch size of 128 For DBPedia50k we use a dropout of 05 while for FB20k and FB15k237OWE we use no dropout The embedding used is the pretrained 300 dimensional Wikipedia2Vec embedding and the transformation used is affine unless stated otherwise We first compare our model ComplExOWE with other openworld link prediction models in Table 3 For a fair comparison all the results are evaluated using target filtering For all models and all datasets 200dimensional Glove embeddings were used except for the ComplexOWE300 which uses 300dimensional Wikipedia2Vec embeddings The effect of different embeddings will be studied further in Section Text Embeddings and Robustness To Missing Entity Metadata The results for Target Filtering Baseline DKRL and ConMask were obtained by the implementation provided by BIBREF11 The Target Filtering Baseline is evaluated by assigning random scores to all targets that pass the target filtering criterion DKRL uses a twolayer CNN over the entity descriptions ConMask uses a CNN over the entity names and descriptions along with the relationbased attention weights It can be seen from Table 3 that our best model ComplExOWE300 performs competitively when compared to ConMask On DBPedia50k our model performs best on all metrics except Hits10 On FB20k it is outperformed by a small margin by ConMask but performs better on Hits1 On FB15k237OWE our model outperforms all other models significantly We believe that this is due to FB15k237OWE having very short descriptions ConMask generally relies on extracting information from the description of entities with its attention mechanism whereas our model relies more on extracting information from the textual corpus that the word embedding were trained on This enables our model to provide good results without relying on having long descriptions Our OWE extension for openworld link prediction can be used with any common KGC model Therefore we evaluate three commonly used options namely TransE DistMult and ComplEx Results are displayed in Table 4 All three models are trained with embedding dimensionality d300 on the closedworld dataset For text embeddings Wikipedia2Vec embeddings of the same dimensionality were used It can be seen that the performance on the openworld setting matches the expressiveness of the models ComplExOWE with its ability to model even asymmetric relations yields the best results while the symmetric DistMultOWE achieves a similar performance We also test 
different transformation functions Psi map as illustrated in Table 5 It can be observed that quite simple transformations achieve the strong results The best performance is achieved by the affine transformation with 491 HITS10 by a margin of 24 percent In some cases the knowledge graph may lack textual metadata both the name and description for some or all of its entities Other models like ConMask and DKRL are dependant on textual descriptions eg ConMask uses attention mechanisms to select relationspecific target words from long texts Therefore ConMask and DKRL would require completely dropping triples without metadata and be unable to learn about the link structure of such entities as they use joint training However in our approach we have to drop such entities only during the phase where the transformation Psi map is learned while the link prediction model can still be learned on the full graph To demonstrate the robustness of our approach to missing entity metadata we reevaluate accuracy when randomly dropping metadata for training entities Fig 2 outlines the performance for two scenarios Dropping descriptions We remove only the textual descriptions for a varying percentage of randomly selected entities between 20 to 100 The names of these entities are not removed and therefore we still train Psi map on them Dropping all metadata We randomly select entities and remove both their descriptions and names effectively removing these entities from the training set altogether when training Psi map We also included a baseline experiment to simulate an unsuccessful learning of Psi map In this baseline when evaluating a test triple we replace its head by the embedding of another random head from the training data Note that this baseline still gives some reasonable hits for triples where the relation is a strong indicator For example if we have a triplet Xtimezone Even if the head X is unknown a model can achieve reasonable accuracy by simply guessing time zones as tails Overall Fig 2 suggests that transformation learning is able to generalize well even with very limited training data In Fig 2 a only the descriptions of entities have been removed For Wikipedia2Vec embeddings this removal has virtually no effect on prediction accuracy We believe that this is because Wikipedia2Vec embeddings are trained such that we can lookup strong entity embeddings by the name alone Even when removing 100 of descriptions ie only training on the entity names accuracy is only 23 lower than training on the full graph However in case of Glove embeddings the drop in performance is very significant especially when the description is dropped for all the entities In Fig 2 b we remove not only descriptions but also entity names Even in this case learning is robust If half of the 12324 training entities are removed the drop in accuracy is less than 1 Only when removing 90 of training data leaving 123 training entities performance starts to deteriorate significantly This highlights the ability of our model to learn from a limited amount of training data when it is important to be able to train the KGC model itself on all the entities Finally we inspect sample prediction results for ComplExOWE300 in Table 6 Besides the final prediction we also test whether our transformation from textbased to semantic space is successful For each test triple we represent the openworld head entity by its textbased embedding vhead match it to a graphbased embedding Psi mapvhead and estimate the nearest neighbor entities in this space We use 
the Euclidean distance on the real part of the ComplEx embeddings but found results to be similar for the imaginary part If the transformation works well we expect these nearest neighbors to be semantically similar to the head entity This is obviously the case For Bram Stoker the author of Dracula the nearest neighbors are other authors of fantasy literature For Parma the neighbors are cities predominantly in Italy For Bachelor of Science the model predicts appropriate entities namely Universities but even though we apply filtering the predictions are not rewarded This is because the corresponding triples like Bachelor of Science institution Harward Law School are missing in the knowledge graph In this work we have presented a simple yet effective extension to embeddingbased knowledge graph completion models such as ComplEx DistMult and TransE to perform openworld prediction Our approach which we named OWE maps textbased entity descriptions learned from word embeddings to the pretrained graph embedding space In experiments on several datasets including the new FB15K237OWE dataset we introduced in this work we showed that the learned transformations yield semantically meaningful results that the approach performs competitive with respect to the state of the art and that it is robust to scarce text descriptions An interesting direction of future work will be to combine our model with approaches like ConMask BIBREF11 which 1 exploit more complex aggregation functions and 2 use relationspecific attentioncontent masking to draw more precise embeddings from longer descriptions This work was partially funded by the German Federal Ministry of Education and Research Program FHprofUnt Project DeepCA 13FH011PX6 and the German Academic Exchange Service Project FIBEVID 57402798 </s>
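The transformation-learning step lends itself to a compact sketch. The PyTorch code below fits the affine map Psi_map by minimizing the summed squared distance between mapped text embeddings and fixed graph embeddings with Adam at learning rate 1e-3 and batch size 128, as in the setup above; the tensor names and the random placeholder data are illustrative, and averaging the per-example loss over each mini-batch is our normalization choice rather than something stated in the paper.

import torch
from torch import nn

d_text, d_graph, m = 300, 300, 12324   # placeholder dimensions / entity count
v = torch.randn(m, d_text)             # aggregated text-based entity embeddings
u = torch.randn(m, d_graph)            # fixed graph embeddings from the KGC model

psi_map = nn.Linear(d_text, d_graph, bias=True)   # affine map: A*v + b
optimizer = torch.optim.Adam(psi_map.parameters(), lr=1e-3)

for epoch in range(200):
    perm = torch.randperm(m)
    for start in range(0, m, 128):                # batch size 128
        idx = perm[start:start + 128]
        loss = ((psi_map(v[idx]) - u[idx]) ** 2).sum(dim=1).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

At test time an open-world head is scored as phi(Psi_map(v_h), u_r, u_t) against all known tails; for ComplEx, two such maps (real and imaginary part) would be trained on the summed loss, as described above.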
<s>[INST] Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks [/INST] Parameters of the encoderdecoder were tuned on a dedicated validation set We experienced with different learning rates 01 001 0001 dropoutrates 01 02 03 05 BIBREF11 and optimization techniques AdaGrad BIBREF6 AdaDelta BIBREF30 Adam BIBREF15 and RMSprop BIBREF29 We also experimented with different batch sizes 8 16 32 and found improvement in runtime but no significant improvement in performance Based on the tuned parameters we trained the encoderdecoder models on a single GPU NVIDIA Tesla K40 with minibatches of 32 sentences learning rate of 001 dropout rate of 01 and the AdaGrad optimizer training takes approximately 10 days and is stopped after 5 epochs with no loss improvement on a validation set Parameters for the predictions tasks as well as classifier architecture were tuned on a dedicated validation set We experimented with one two and three layer feedforward networks using ReLU BIBREF23 BIBREF8 tanh and sigmoid activation functions We tried different hidden layer sizes the same as the input size twice the input size and one and a half times the input size We tried different learning rates 01 001 0001 dropout rates 01 03 05 08 and different optimization techniques AdaGrad AdaDelta and Adam Our best tuned classifier which we use for all experiments is a feedforward network with one hidden layer and a ReLU activation function We set the size of the hidden layer to be the same size as the input vector We place a softmax layer on top whose size varies according to the specific task and apply dropout before the softmax layer We optimize the loglikelihood using AdaGrad We use a dropout rate of 08 and a learning rate of 001 Training is stopped after 5 epochs with no loss improvement on the development set Training was done on a single GPU NVIDIA Tesla K40 How well do the models preserve content when we increase the sentence length In Fig FIGREF4 we plot content prediction accuracy vs sentence length for different models As expected all models suffer a drop in content accuracy on longer sentences The degradation is roughly linear in the sentence length For the encoderdecoder models with fewer dimensions seem to degrade slower In this section we report the significance tests we conduct in order to evaluate our findings In order to do so we use the paired ttest BIBREF25 All the results reported in the summery of findings are highly significant pvalue INLINEFORM0 00001 The ones we found to be not significant pvalue INLINEFORM1 003 are the ones which their accuracy does not have much of a difference ie ED with size 500 and ED with size 750 tested on the word order task pvalue011 or CBOW with dimensions 750 and 1000 pvalue03 </s>
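For concreteness, here is a minimal PyTorch sketch of the best-tuned prediction-task classifier described above: one hidden layer of the same size as the input with ReLU, dropout 0.8 applied before the softmax layer, and AdaGrad at learning rate 0.01. The input dimensionality and number of classes are placeholders, and early stopping and data handling are omitted.

import torch
from torch import nn

input_dim, num_classes = 750, 2        # placeholders; the output size varies per task

classifier = nn.Sequential(
    nn.Linear(input_dim, input_dim),   # hidden layer, same size as the input vector
    nn.ReLU(),
    nn.Dropout(p=0.8),                 # dropout applied before the softmax layer
    nn.Linear(input_dim, num_classes), # softmax is folded into the loss below
)
criterion = nn.CrossEntropyLoss()      # optimizes the log-likelihood
optimizer = torch.optim.Adagrad(classifier.parameters(), lr=0.01)

def train_step(x, y):
    # one update on a batch of sentence representations x with task labels y
    optimizer.zero_grad()
    loss = criterion(classifier(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()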
<s>[INST] Red Dragon AI at TextGraphs 2019 Shared Task: Language Model Assisted Explanation Generation [/INST] The Explanation Regeneration shared task asked participants to develop methods to reconstruct gold explanations for elementary science questions BIBREF1 using a new corpus of gold explanations BIBREF2 that provides supervision and instrumentation for this multihop inference task Each explanation is represented as an explanation graph a set of atomic facts between 1 and 16 per explanation drawn from a knowledge base of 5000 facts that together form a detailed explanation for the reasoning required to answer and explain the resoning behind a question Linking these facts to achieve strong performance at rebuilding the gold explanation graphs requires methods to perform multihop inference which has been shown to be far harder than inference of smaller numbers of hops BIBREF3 particularly for the case here where there is considerable uncertainty at a lexical level of how individual explanations logically link somewhat fuzzy graph nodes The WorldTree corpus BIBREF2 is a new dataset is a comprehensive collection of elementary science exam questions and explanations Each explanation sentence is a fact that is related to science or common sense and is represented in a structured table that can be converted to freetext For each question the gold explanations have lexical overlap ie having common words and are denoted as having a specific explanation role such as CENTRAL core concepts GROUNDING linking core facts to the question and LEXICAL GLUE linking facts which may not have lexical overlap As described in the introduction the general task being posed is one of multihop inference where a number of atomic fact sentences must be combined to form a coherent chain of reasoning to solve the elementary science problem being posed These explanatory facts must be retrieved from a semistructured knowledge base in which the surface form of the explanation is represented as a series of terms gathered by their functional role in the explanation For instance for the explanation Grass snakes live in grass is encoded as Grass snakes live in grass and this explanation is found in a PROTOHABITATS table However in the same table there are also more elaborate explanations for example Mice live in in holes in the ground in fields in forests is expressed as mice live in in holes in the ground in fields OR in forests And more logically complex Most predators live innear the same environment as their prey being expressed as most predators live in OR live near the same environment as their prey So whereas the simpler explanations fit in the usual KnowledgeBase triples paradigm the more complex ones are much more nuanced about what actually constitutes a node and how reliable the arcs are between them Indeed there is also a collection of ifthen explanations including examples such as if something has a positive impact on something else then increasing the amount of that something has a positive impact on that something else where the explanation has metaeffect on the graph itself and includes unbound variables In this work we used the pure textual form of each explanation problem and correct answer rather than using the semistructured form given in the columnoriented files provided in the dataset For each of these we performed PennTreebank tokenisation followed by lemmatisation using the lemmatisation files provided with the dataset and then stopword removal Concerned by the low performance of the Python Baseline 
method compared to the Scala Baseline which seemed to operate using an algorithm of similar strength we identified an issue in the organizers evaluation script where predicted explanations that were missing any of the gold explanations were assigned a MAP score of zero This dramatically penalised the Python Baseline since it was restricted to only returning 10 lines of explanation It also effectively forces all submissions to include a ranking over all explanations a simple fix with the Python Baseline rescored in Table 1 will be submitted via GitHub This should also make the uploadscoring process faster since only the top scriptstyle sim 1000 explanation lines meaningfully contribute to the rank scoring Although more classic graph methods were initially attempted along the lines of BIBREF4 where the challenge of semantic drift in multihop inference was analysed and the effectiveness of information extraction methods was demonstrated the following 3 methods which now easily surpass the score of our competition submission were ultimately pursued due to their simplicityeffectiveness As mentioned above the original TFIDF implementation of the provided Python baseline script did not predict a full ranking and was penalized by the evaluation script When this issue was remedied its MAP score rose to 02140 However there are three main steps that significantly improve the performance of this baseline The original question text included all the answer choices only one of which was correct while the others are distractors Removing the distractors resulted in improvement The TFIDF algorithm is very sensitive to keywords Using the provided lemmatisation set and NLTK for tokenisation helped to align the different forms of the same keyword and reduce the vocabulary size needed Stopword removal gave us approximately 004 MAP improvement throughout removing noise in the texts that was evidently distracting for TFIDF As shown in Table 2 these optimisation steps increased the Python Baseline score significantly without introducing algorithmic complexity While graph methods have shown to be effective for multihop question answering the schema in the textgraphs dataset is unconventional as illustrated earlier To counter this the previous TFIDF method was extended to simulate jumps between explanations inspired by graph methods but without forming any actual graphs TFIDF vectors are precomputed for all questions and explanation candidates For each question the closest explanation candidate by cosine proximity is selected and their TFIDF vectors are aggregated by a max operation The next closest unused explanation is selected and this process was then applied iteratively up to maxlen128 times with the current TFIDF comparison vector progressively increasing in expressiveness At each iteration the current TFIDF vector was downscaled by an exponential factor of the length of the current explanation set as this was found to increase development set results by up to 00344 By treating the TFIDF vector as a representation of the current chain of reasoning each successive iteration builds on the representation to accumulate a sequence of explanations The algorithm outlined above was additionally enhanced by adding a weighting factor to each successive explanation as it is added to the cumulative TFIDF vector Without this factor the effectiveness was lower because the TFIDF representation itself was prone to semantic drift away from the original question Hence each successive explanations weight was downscaled and this 
was shown to work well Large pretrained language models have been proven effective on a wide range of downstream tasks including multihop question answering such as in BIBREF5 on the RACE dataset and BIBREF6 which showed that large finetuned language models can be beneficial for complex question answering domains especially in a dataconstrained context Inspired by this we decided to adapt BERT BIBREF7 a popular language model that has produced competitive results on a variety of NLP tasks for the explanation generation task For our BERT Reranking method we attach a regression head to a BERT Language Model This regression head is then trained to predict a relevance score for each pair of question and explanation candidate The approach is as follows Calculate a TFIDF relevance score for every tokenised explanation against the tokenised Problem CorrectAnswer Gold explanations in the training set This will rate the true explanation sentences very highly but also provide a soft tail of rankings across all explanations Use this relevance score as the prediction target of the BERT regression head where BERT makes its predictions from the original Problem CorrectAnswer text combined with each potential Explanation text in turn over the training set At prediction time the explanations are ranked according to their relevance to Problem CorrectAnswer as predicted by the BERT models output We cast the problem as a regression task rather than a classification task since treating it as a task to classify which explanations are relevant would result in an imbalanced dataset because the gold explanation sentences only comprise a small proportion of the total set By using soft targets given to us by the TFIDF score against the gold answers in the training set even explanations which are not designated as gold but have some relevance to the gold paragraph can provide learning signal for the model Due to constraints in compute and time the model is only used to rerank the topn64 predictions made by the TFIDF methods The BERT model selected was of Base size with 110M parameters which had been pretrained on BooksCorpus and English Wikipedia We did not further finetune it on texts similar to the TextGraphs dataset prior to regression training In other tests we found that the Large size model did not help improve the final MAP score The authors initial attempts at tackling the Shared Task focussed on graphbased methods However as identified in BIBREF3 the uncertainty involved with interpreting each lexical representation combined with the number of hops required meant that this line of enquiry was put to one side While the graphlike approach is clearly attractive from a reasoning point of view and will be the focus of future work we found that using purely the textual aspects of the explanation database bore fruit more readily Also the complexity of the resulting systems could be minimised such that the description of each system could be as consise as possible Specifically we were able to optimise the TFIDF baseline to such an extent that our Optimised TFIDF would now place 2nd in the submission rankings even though it used no special techniques at all The Iterated TFIDF method while more algorithmically complex also does not need any training on the data before it is used This shows how effective traditional text processing methods can be when used strategically The BERT Reranking method in contrast does require training and also applies one of the more sophisticated Language Models available to extract more 
meaning from the explanation texts Figure 1 illustrates how there is a clear trend towards being able to build longer explanations as our semantic relevance methods become more sophisticated There are also clear trends across the data in Table 3 that show that the more sophisticated methods are able to bring more CENTRAL explanations into the mix even though they are more textually distant from the original Question and Answer statements Surprisingly this is at the expense of some of the GROUNDING statements Since these methods seem to focus on different aspects of solving the ranking problem we have also explored averaging the ranks they assign to the explanations essentially ensembling their decisions Empirically this improves performance at the expense of making the model more obscure Despite our apparent success with less sophisticated methods it seems clear that more explicit graphbased methods appears will be required to tackle the tougher questions in this dataset for instance those that require logical deductions as illustrated earlier or hypothetical situations such as some predictorprey equilibrium problems Even some simple statements such as Most predators present obstacles to existing KnowledgeBase representations In terms of concrete next steps we are exploring the idea of creating intermediate forms of representation where textual explanations can be linked using a graph to plan out the logical steps However these grander schemes suffer from being incrementally less effective than finding additional smart tricks for existing methods In preparation we have begun to explore doing more careful preprocessing notably Exploiting the structure of the explanation tables individually since some columns are known to be relationshiptypes that would be suitable for labelling arcs between nodes in a typical Knowledge Graph setting Expanding out the conjunction elements within the explanation tables For instance in explanations like coral lives in the ocean OR warm water the different subexplanations Coral LIVESIN Ocean and Coral LIVESIN WarmWater can be generated which are far closer to a graphable representation Better lemmatisation For instance ice cube covers both ice and ice cube nodes We need some more common sense to cover these cases Clearly it is early days for this kind of multihop inference over textual explanations At this point we have only scratched the surface of the problem and look forward to helping to advance the stateoftheart in the future The authors would like to thank Google for access to the TFRC TPU program which was used in training and finetuning models during experimentation for this paper </s>
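The iterated TF-IDF procedure described above can be sketched in a few lines of scikit-learn. The decay constant and the exact way a chosen explanation is folded into the running vector are illustrative assumptions (the text only states that successive explanations are exponentially down-weighted and aggregated with a max operation); max_len is 128 as above, and the inputs are assumed to be the preprocessed question-plus-correct-answer string and the preprocessed explanation sentences.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def iterated_tfidf_rank(question, explanations, max_len=128, decay=0.8):
    # greedily build a chain of explanations for one question and return
    # their indices in predicted order of relevance
    vectorizer = TfidfVectorizer().fit(explanations + [question])
    expl_vecs = vectorizer.transform(explanations).toarray()
    state = vectorizer.transform([question]).toarray()[0]

    selected = []
    for _ in range(min(max_len, len(explanations))):
        sims = cosine_similarity(state.reshape(1, -1), expl_vecs)[0]
        if selected:
            sims[selected] = -np.inf         # never reuse an explanation
        best = int(np.argmax(sims))
        selected.append(best)
        # fold the chosen explanation into the running representation,
        # exponentially down-weighting later additions to limit semantic drift
        state = np.maximum(state, (decay ** len(selected)) * expl_vecs[best])
    return selected

For submission, the unselected explanations would be appended after the selected ones so that the evaluation script receives a ranking over all candidates, as discussed in the baseline fix above.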
<s>[INST] Impact of Sentiment Detection to Recognize Toxic and Subversive Online Comments [/INST] Online communities abound today forming on social networks on webforums within videogames and even in the comments sections of articles and videos While this increased international contact and exchange of ideas has been a net positive it has also been matched with an increase in the spread of highrisk and toxic content a category which includes cyberbullying racism sexual predation and other negative behaviors that are not tolerated in society The two main strategies used by online communities to moderate themselves and stop the spread of toxic comments are automated filtering and human surveillance However given the sheer number of messages sent online every day human moderation simply cannot keep up and either leads to a severe slowdown of the conversation if messages are premoderated before posting or allows toxic messages to be seen and shared thousands of times before they are deleted if they are postmoderated after being posted and reported In addition human moderation cannot scale up easily to the number of messages to monitor for example Facebook has a team of 20000 human moderators which is both massive compared to the total of 25000 other employees in the company and minuscule compared to the fact its automated algorithms flagged messages that would require 180000 human moderators to review Keyword detection on the other hand is instantaneous scales up to the number of messages and prevents toxic messages from being posted at all but it can only stop messages that use one of a small set of denied words and are thus fairly easy to circumvent by introducing minor misspellings ie writing kl urself instead of kill yourself In BIBREF0 the authors show how minor changes can elude even complex systems These attempts to bypass the toxicity detection system are called subverting the system and toxic users doing it are referred to as subversive users In this paper we consider an alternative strategy for toxic message filtering Our intuition is that while toxic keywords can easily be disguised the toxic emotional tone of the message cannot Consequently we will study the correlation between sentiment and toxicity and its usefulness for toxic message detection both in subversive and nonsubversive contexts The rest of this paper is structured as follows After a review of the relevant literature in the next section we will consider the problem of sentiment detection in online messages in Section SECREF3 Next we will study the measure of toxicity and its correlation to message sentiment in Section SECREF4 Finally we will draw some concluding remarks in Section SECREF5 Given the limitations of human and keywordbased toxicity detection systems mentioned previously several authors have studied alternative means of detecting toxicity In one of the earliest works on the detection of hate speech the authors of BIBREF1 used ngrams enhanced by partofspeech information as features to train an SVM classifier to accurately pick out antisemitic online messages Following a similar idea the authors of BIBREF2 conducted a study of the usefulness of various linguistic features to train a machine learning algorithm to pick out hate speech They found that the most useful single feature was character ngrams followed closely by word ngrams However it was a combination of all their features ngrams features of language features of syntax and word embedding vectors that achieved the highest performance The authors of 
BIBREF3 studied hate speech through the detection of othering language They built a custom lexicon of pronouns and semantic relationships in order to capture the linguistic differences when describing the ingroup and outgroup in messages and trained a word embedding model on that data Hate speech is not the only form of toxicity that has been studied In BIBREF4 the authors studied cyberbullying They developed a list of 300 bad words sorted in five levels of severity Next they used the number and density of bad words found in each online message as the features to train a set of machine learning systems The authors of BIBREF5 also used words as featured in two systems this time to detect sexual predators One used the TFxIDF values of the words of the text to train a singleclass SVM classifier and the other used a bagofwords vector of the text as input to a deep neural network The authors found that the latter system offered the better performance in their experiments Recently deep learning has become very popular for NLP applications and pretrained word embeddings have been shown to be very effective in most textbased neural network applications In BIBREF6 four different deep learning models were implemented and shown to outperform benchmark techniques for cyberbullying detection on three different datasets In BIBREF7 a deep neural network taking a word embedding vector as input was used to detect cyberbullying on Twitter It thus appears from the related literature that authors have tried a variety of alternative features to automatically detect toxic messages without relying strictly on keyword detection However sentiment has rarely been considered It was one of the inputs of the deep neural network of BIBREF7 but the paper never discussed its importance or analyzed its impact The authors of BIBREF8 conducted the first study of cyberbullying in Dutch and considered several features including a subjectivity keyword lexicon They found its inclusion helped improve results but that a more sophisticated source of information than simple keyword detection was required And the study of BIBREF9 used the sentiment of messages as measured by the SentiStrength online system as one of several features to detect cyberbullying messages However an indept analysis of how sentiment can benefit toxicity detection has not been done in any of these papers and a study of the use of sentiment in a subversive context has never been done Sentiment detection or the task of determining whether a document has a positive or negative tone has been frequently studied in the literature It is usually done by using a sentiment lexicon that either classifies certain words as positive or negative or quantifies their level of positivity or negativity We decided to consider six such lexicons SentiWordNet is a widelyused resource for sentiment mining It is based on WordNet and assigns three scores to each synset namely positivity negativity and objectivity with the constraint that the sum of all three must be 1 Using this lexicon requires a bit of preprocessing for us since the same word can occur in multiple different synsets with different meanings and therefore different scores Since picking out the intended meaning and synset of a polysemous word found in a message is beyond our scope we instead chose to merge the different meanings and compute a weighted average of the scores of the word The weights are the ranks of the synsets which correspond to the popularity of that meaning of the word in documents The average score 
equation is DISPLAYFORM0 where INLINEFORM0 is the number of times the word occurs with the same part of speech We compute the average positivity and negativity scores but not the objectivity scores since they are not useful for our purpose and since they are simply the complement of the other two This allows us to extract 155287 individual words from the lexicon with a positivity and negativity score between 0 and 1 for each We should note that SentiWordNet differentiates a word based on partofspeech and we maintain this distinction in our work Afinn is a lexicon of 3382 words that are rated between 5 maximum negativity and 5 maximum positivity To match SentiWordNet we split this score into positivity and negativity scores between 0 and 1 For example a word with a INLINEFORM0 score was changed to have a positive score of 0 and a negative score of INLINEFORM1 Bing Liu compiled lists of 6789 positive or negative words Given no other information we assigned each word in the positive list a positivity score of 1 and a negativity score of 0 and viceversa for the negativelist words General Inquirer is a historicallypopular lexicon of 14480 words though only 4206 of them are tagged as either positive or negative As for the Bing Liu lexicon we assigned binary positive and negative scores to each word that was tagged as positive or negative Subjectivity Clues extends the sentiment tags of the General Inquirer up to 8222 words using a dictionary and thesaurus It also adds a binary strength level strong or weak to the polarity information We merged polarity and strength as a measure of 05 and 1 for weak or strong positivity or negativity NRC has a list of 14182 words that are marked as associated 1 or not associated 0 with 8 emotions anger fear anticipation trust surprise sadness joy disgust and two sentiments negative and positive We transform this association into binary positive and negative scores in the same way we did for Bing Liu and General Inquirer All six of these lexicons have limitations which stem from their limited vocabulary and the ambiguity of the problem Indeed despite being thousands of words each and covering the same subject and purpose our six lexicons have only 394 words in common indicating that each is individually very incomplete compared to the others And we can easily find inconsistencies between the ratings of words both internally within each lexicon and externally when we compare the same words between lexicons Table TABREF16 illustrate some of these inconsistencies for instance the word helpless is very negative in SentiWordNet but less so in Afinn and Subjectivity Clues while the word terrorize is more strongly negative in the latter two resources but less negative and even a bit positive in SentiWordNet Likewise the word joke is strongly positive weakly positive or even negative depending on the lexicon used and the word merry is more positive than joke according to every lexicon except SentiWordnet which rates it equally positive and negative By contrast the word splendid has the same positivity values as merry in all lexicons except SentiWordnet where it has the highest possible positivity score In a longer document such as the customer reviews these lexicons are typically used on BIBREF10 BIBREF11 BIBREF12 these problems are minor the abundance and variety of vocabulary in the text will insure that the correct sentiment emerges overall despite the noise these issues cause This is not true for the short messages of online conversations and it has forced some 
authors who study the sentiments of microblogs to resort to creating or customizing their own lexicons BIBREF13 This incidentally is also why we could not simply use an existing sentiment classifier We will instead opt to combine these lexicons into a more useful resource The first preprocessing step is to detect the presence and scope of negations in a message Negations have an important impact the word good may be labeled positive in all our lexicons but its actual meaning will differ in the sentences this movie is good and this movie is not good We thus created a list of negation keywords by combining together the lists of the negex algorithm and of BIBREF14 filtering out some irrelevant words from these lists and adding some that were missing from the lists but are found online Next we need to determine the scope of the negation which means figuring out how many words in the message are affected by it This is the challenge of for example realizing that the negation affects the word interesting in this movie is not good or interesting but not in this movie is not good but interesting We considered two algorithms to detect the scope of negations The first is to simply assume the negation affects a fixed window of five words after the keyword BIBREF15 while the second discovers the syntactic dependencies in the sentence in order to determine precisely which words are affected BIBREF16 We tested both algorithms on the SFU review corpus of negation and speculation As can be seen in Table TABREF21 the dependency algorithm gave generally better results and managed to find the exact scope of the negation in over 43 of sentences However that algorithm also has a larger standard deviation in its scope meaning that when it fails to find the correct scope it can be off by quite a lot while the fixed window is naturally bounded in its errors Moreover the increased precision of the dependencies algorithm comes at a high processing cost requiring almost 30 times longer to analyze a message as the fixed window algorithm Given that online communities frequently deal with thousands of new messages every second efficiency is a major consideration and we opted for the simple fixed window algorithm for that reason The second preprocessing step is to detect sentimentcarrying idioms in the messages For example while the words give and up can both be neutral or positive the idiom give up has a clear negative sentiment Several of these idioms can be found in our lexicons especially SentiWordNet slightly over INLINEFORM0 We detect them in our messages and mark them so that our algorithm will handle them as single words going forward Finally we use the NLTK wordpunkttokenizer to split sentences into words and the Stanford fasterEnglishPOSTagger to get the partofspeech of each word Since our lexicons contain only four partsofspeech noun verb adverb and adjective and Stanfords tagger has more than 30 possible tags we manually mapped each tag to one of the four partsofspeech for example verb past participle maps to verb Once every word has a positivity and a negativity score we can use them to determine the sentiment of an entire message We do this by computing separately the sum of positive scores and of negative scores of words in the message and subtracting the negative total from the positive total In this way a score over 0 means a positive message and a score under 0 means a negative message We consider two alternatives at this point one in which we sum the sentiment value of all words in the sentence and one 
where we only sum the sentiment value of the topthree words with the highest scores for each polarity We label these All words and Top words in our results The impact of this difference is felt when we consider a message with a few words with a strong polarity and a lot of words with a weak opposite polarity in the Top words scheme these weak words will be ignored and the strong polarity words will dictate the polarity of the message while in the All words scheme the many weak words can sum together to outweigh the few strong words and change the polarity of the message We optionally take negations into account in our sentiment computation When a word occurs in the word window of a negation we flip its positivity and negativity scores In other words instead of adding its positivity score to the positivity total of the sentence we added its negativity score and the other way round for the negativity total Experiments where we do that are labeled Negativity in our results Finally we optionally incorporate word weights based on their frequency in our datasets When applied the score of each word is multiplied by a frequency modifier which we adapted from BIBREF10 DISPLAYFORM0 where INLINEFORM0 is the number of times the word appears in a dataset and INLINEFORM1 is the number of times the most frequent word appears in that dataset Experiments using this frequency modifier are labeled Frequency in our results Our experiments have four main objectives 1 to determine whether the All words or the Top words strategy is preferable 2 to determine whether the inclusion of Negation and Frequency modifiers is useful 3 to determine which of the six lexicons is most accurate and 4 to determine whether a weighted combination of the six lexicons can outperform any one lexicon To conduct our experiments we used the corpus of annotated news comments available from the Yahoo Webscope program The comments in this dataset are annotated by up to three professional trained editors to label various attributes including type sentiment and tone Using these three attributes we split the dataset into two categories sarcastic and nonsarcastic and then again into five categories clear negative slight negative neutral slight positive and clear positive Finally we kept only the nonsarcastic comments where all annotators agreed to reduce noise This gives us a test corpus of 2465 comments To evaluate our results we compute the sentiment score of each comment in our test corpus using our various methods and we then compute the average sentiment score of comments in each of the five sentiment categories For ease of presentation we give a simplified set of results in Table TABREF26 with only the average score of the two negative and the two positive labels combined along with the overlap of the two distributions The overlap is obtained by taking two normal distributions with the the means and standard deviations of the positive and the negative sets and calculating the area in common under both curves It gives us a measure of the ambiguous region where comments may be positive or negative A good sentiment classifier will thus have very distant positive and negative scores and a very low overlap These results show that there are important differences between the lexicons Three of the six are rather poor at picking out negative sentiments namely Subjectivity Clues where negative sentences are on average detected as more positive than the positive sentences General Inquirer and NRC This bias for positivity is an issue for a study 
on toxicity which we expect to be expressed using negative sentiments The other three lexicons give a good difference between positive and negative sentences For these three lexicons we find that using All words increases the gap between positive and negative sentence scores but greatly increases the standard deviation of each sentiment class meaning the sentiment of the messages becomes ambiguous On the other hand using Top words reduces the overlap between the distributions and thus gives a better separation of positive and negative sentiments And while adding frequency information or negations does not cause a major change in the results it does give a small reduction in overlap To study combinations of lexicons we decided to limit our scope to SentiWordNet Afinn and Bing Liu the three lexicons that could accurately pick out negative sentiments and on the Top words strategy We consider three common strategies to combine the results of independent classifiers majority voting picking the one classifier with the maximum score which is assumed to be the one with the highest confidence in its classification and taking the average of the scores of all three classifiers For the average we tried using a weighted average of the lexicons and performed a grid search to find the optimal combination However the best results were obtained when the three lexicons were taken equally For the majority vote we likewise take the average score of the two or three classifiers in the majority sentiment Table TABREF27 presents the results we obtained with all three strategies It can be seen that combining the three classifiers outperforms taking any one classifier alone in the sense that it creates a wider gap between the positive and negative sentences and a smaller overlap It can also be seen that the addition of negation and frequency information gives a very small improvement in the results in all three cases Comparing the three strategies it can be seen that the maximum strategy is the one with the biggest gap in between positive and negative distribution which was to be expected since the highest positive or negative sentiment is selected each time while it gets averaged out in the other two classifiers However the average score strategy creates a significantly smaller standard deviation of sentiment scores and a lower overlap between the distributions of positive and negative sentences For that reason we find the average score to be the best of the three combination strategies In all cases we find that most misclassified sentences in our system are due to the lack of insults in the vocabulary For example none of the lexicons include colorful insults like nut job and fruitcake so sentences where they appear cannot be recognized as negative Likewise some words such as the word gay are often used as insults online but have positive meanings in formal English this actually leads to labeling insult messages as positive sentences This issue stems from the fact that these lexicons were designed for sentiment analysis in longer and more traditional documents such as customer reviews and editorials One will seldom if ever find insults especially politicallyincorrect ones such as the previous examples in these documents The main contribution of this paper is to study how sentiment can be used to detect toxicity in subversive online comments To do this we will use three new test corpora Our first experiment consists in computing the sentiment of each message in each of our three test corpora and verifying how they 
correlate with the different toxicity scores of each of the corpora Following the results we found in Section SECREF3 we used the best three lexicons SentiWordNet Afinn and Bing Liu combined them by taking the average score and used our four algorithm variations The results are presented in Table TABREF37 It can be seen that there is a clear negative correlation between toxicity and sentiment in the messages as expected Our results also show that using words only or including frequency information makes the relationship clearer while adding negations muddies it These results are consistent over all three test corpora despite being from different sources and labeled using different techniques The lower score on the Reddit dataset may simply be due to the fact it was labeled automatically by a system that flags potentially dangerous content and not by human editors so its labels may be noisier For example mentioning sexual body parts will be labeled as toxicity level 5 even if they are used in a positive sentence because they carry more potential risk Our second experiment consists in studying the benefits of taking sentiments into account when trying to determine whether a comment is toxic or not The toxicity detector we implemented in this experiment is a deep neural network inspired by the most successful systems in the Kaggle toxicity competition we used as a dataset It uses a biGRU layer with kernel size of 40 The final state is sent into a single linear classifier To avoid overfitting two 50 dropout layers are added one before and one after the biGRU layer The network takes as input a sentence split into words and into individual characters The words are represented by the 300d fastText pretrained word embeddings and characters are represented by a onehot character encoding but restricted to the set of 60 most common characters in the messages to avoid the inclusion of noise Finally we used our top frequency sentiment classifier with the average of the best three lexicons SentiWordNet Afinn and Bing Liu to determine the sentiment of each message We input that information into the neural network as three sentiment values corresponding to each of the three lexicons used for each of the frequent words retained for the message Words that are not among the selected frequent words or that are not found in a lexicon receive a sentiment input value of 0 Likewise experiments that do not make use of sentiment information have inputs of 0 for all words These input values are then concatenated together into a vector of 363 values corresponding to the 300 dimensions of fastText the 60 onehot character vector and the 3 sentiment lexicons The output of our network is a binary toxic or nontoxic judgment for the message In the Kaggle dataset this corresponds to whether the toxic label is active or not In the Reddit dataset it is the set of messages evaluated at levels 5 6 or 7 by Community Sift in any of the topics mentioned earlier And in the Wikipedia dataset it is any message marked as toxic by 5 workers or more We chose this binary approach to allow the network to learn to recognize toxicity as opposed to types of toxic messages on Kaggle keyword severity on Reddit or a particular workers opinions on Wikipedia However this simplification created a balance problem while the Reddit dataset is composed of 12 toxic messages and 88 nontoxic messages the Wikipedia dataset is composed of 18 toxic messages and the Kaggle dataset of 10 toxic messages To create balanced datasets we kept all toxic messages 
and undersampled randomly the set of nontoxic messages to equal the number of toxic messages Our experiment consists in comparing the toxicity detection accuracy of our network when excluding or including sentiment information and in the presence of subversion Indeed as mentioned in Sections SECREF1 and SECREF2 it is trivial for a subversive user to mask toxic keywords to bypass toxicity filters In order to simulate this behavior and taking ideas from BIBREF0 we created a substitution list that replaces popular toxic keywords with harmless versions For example the word kill is replaced by kilt and bitch by beach Our list contains 191 words and its use adds noise to INLINEFORM0 of the toxic Kaggle messages INLINEFORM1 of the Wikipedia messages and INLINEFORM2 of the Reddit messages These substitutions are only done at testing time and not taken into account in training to simulate the fact that users can create neverbeforeseen modifications We trained and tested our neural network with and without sentiment information with and without subversion and with each corpus three times to mitigate the randomness in training In every experiment we used a random 70 of messages in the corpus as training data another 20 as validation data and the final 10 as testing data The average results of the three tests are given in Table TABREF40 It can be seen that sentiment information helps improve toxicity detection in all cases The improvement is smaller when the text is clean However the introduction of subversion leads to an important drop in the accuracy of toxicity detection in the network that uses the text alone and the inclusion of sentiment information gives an important improvement in that case Comparing the different corpora it can be seen that the improvement is smallest in the Reddit dataset experiment which is expected since it is also the dataset in which toxicity and sentiment had the weakest correlation in Table TABREF37 We can note that the system performs very well in all cases even with subversion and without sentiment information This may be due to the fact that the messages in all datasets are usergenerated and therefore noisy already In addition the character encoding of the neural network is robust to misspellings as opposed to a keyword lookup system In this paper we explored the relationship between sentiment and toxicity in social network messages We began by implementing a sentiment detection tool using different lexicons and different features such as word frequencies and negations This tool allowed us to demonstrate that there exists a clear correlation between sentiment and toxicity Next we added sentiment information to a toxicity detection neural network and demonstrated that it does improve detection accuracy Finally we simulated a subversive user who attempts to circumvent the toxicity filter by masking toxic keywords in their messages and found that using sentiment information improved toxicity detection by as much as 3 This confirms our fundamental intuition that while it is possible for a user to mask toxic words with simple substitutions it is a lot harder for a user to conceal the sentiment of a message Our work so far has focused on singleline messages and general toxicity detection There are however several different types of toxicity some of which correlate to different sentiments For instance while cyberbullying and hate speech have negative sentiments other forms of toxicity such as fraud or sexual grooming will use more positive sentiments in order to lure 
victims We expect that differentiating between these types of toxicity will strengthen the correlation to message sentiment and further improve our results Likewise handling entire conversations instead of individual messages will allow us to include contextual information to better model the sentiment of the message and to detect sudden changes in the sentiment of the conversation that may correspond to a disruptive toxic comment This research was made possible by the financial material and technical support of Two Hat Security Research Corp and the financial support of the Canadian research organization MITACS </s>
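As a concrete companion to the sentiment-scoring pipeline described in the paper above, the sketch below assembles its main ingredients: per-word positivity/negativity scores from several lexicons, a fixed five-word negation window that flips polarities, the top-3-words-per-polarity summation, and an unweighted average over three lexicons. It is a minimal illustration under stated assumptions, not the authors' implementation: the toy lexicon entries, the negation keyword list, and all function names are invented for demonstration.

```python
# Minimal sketch of the combined-lexicon message scorer described above.
# The lexicons are stand-ins: in practice they would be loaded from
# SentiWordNet, Afinn and Bing Liu; the entries below are illustrative only.

NEGATIONS = {"not", "no", "never", "n't", "cannot"}   # assumed keyword list
NEGATION_WINDOW = 5                                   # fixed window of 5 words

# word -> (positivity, negativity), both in [0, 1]
LEXICONS = [
    {"good": (0.8, 0.0), "kill": (0.0, 0.9), "merry": (0.9, 0.0)},  # e.g. SentiWordNet
    {"good": (0.75, 0.0), "kill": (0.0, 0.6)},                      # e.g. Afinn (rescaled)
    {"good": (1.0, 0.0), "kill": (0.0, 1.0)},                       # e.g. Bing Liu
]

def score_message(tokens, lexicon, top_k=3):
    """Top-words score for one lexicon: sum of the top-k positive scores
    minus the sum of the top-k negative scores, flipping polarities inside
    a fixed 5-word negation window."""
    pos_scores, neg_scores = [], []
    negated_until = -1
    for i, tok in enumerate(tokens):
        tok = tok.lower()
        if tok in NEGATIONS:
            negated_until = i + NEGATION_WINDOW
            continue
        if tok not in lexicon:
            continue
        pos, neg = lexicon[tok]
        if i <= negated_until:          # inside the negation scope: flip polarity
            pos, neg = neg, pos
        pos_scores.append(pos)
        neg_scores.append(neg)
    top_pos = sum(sorted(pos_scores, reverse=True)[:top_k])
    top_neg = sum(sorted(neg_scores, reverse=True)[:top_k])
    return top_pos - top_neg            # > 0: positive message, < 0: negative

def combined_score(tokens):
    """Average-score combination of the three lexicons."""
    return sum(score_message(tokens, lex) for lex in LEXICONS) / len(LEXICONS)

print(combined_score("this is not good at all".split()))  # negative score
```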
<s>[INST] Phase transitions in a decentralized graph-based approach to human language [/INST] This letter arises from two intriguing questions about human language The first question is To what extent language and also language evolution can be viewed as a graphtheoretical problem Language is an amazing example of a system of interrelated units at different organization scales Several recent works have stressed indeed the fact that human languages can be viewed language as a complex network of interacting parts BIBREF0 BIBREF1 BIBREF2 BIBREF3 Within the graphbased approach to human language one may think wordmeaning mappings that is vocabularies as bipartite graphs formed by two sets words and meanings BIBREF2 The second question is What is the nature of the language evolution process that affects the shape of graphbased language representations To answer this question we assume that human communication is constrained at least by two forces BIBREF2 one that pushes towards communicative success and another one that faces the tradeoff between speaker and hearer efforts The first force involves simpler decentralized models of linguistic interactions within populations of artificial agents endowed with minimal human cognitive features negotiating pieces of a common language the socalled language games BIBREF4 BIBREF5 BIBREF6 BIBREF7 In the simplest language game the naming game BIBREF8 BIBREF9 at discrete time step a pair of players typically one speaker and one hearer interacts towards agreement on wordmeaning associations Next we also consider the communication cost to establish wordmeaning mappings G Zipf referred to the lexical tradeoff between two competing pressures ambiguity and memory as the least effort principle BIBREF10 BIBREF11 speakers prefer to minimize memory costs whereas hearers prefer to minimize disambiguation costs As remarked by several works an interesting proposal has stated that humanlike vocabularies appear as a phase transition at a critical stage for both competing pressures BIBREF12 BIBREF13 BIBREF14 BIBREF15 BIBREF16 The appearance of a drastic stage of competing pressures can be understood moreover as an explanation of the empirical Zipfs law which establishes a dichotomy between lowmemory words like the word the and lowambiguity words like the word cat Within a statistical point of view text corpora evidence strong scaling properties in wordfrequencies BIBREF17 BIBREF18 BIBREF19 BIBREF20 BIBREF21 BIBREF22 BIBREF23 BIBREF24 The main aim is to address a decentralized approach based on a previous proposal of two authors of this letter BIBREF25 to the emergence of Zipfian properties in a humanlike language while players communicate with each other using bipartite wordmeaning mappings To structurally characterize changes in the system our methodology is mainly based on a phase transition description arising from both classical statistical mechanics tools and graphmining techniques We run numerical simulations over simple population topologies We apply graphmining techniques particularly a clustering notion for bipartite graphs BIBREF26 A bipartite graph is a triple Btop bot E where top and bot are two mutually disjoint set of nodes and E subseteq top times bot is the set of edges of the graph Here top represents the set of word nodes whereas bot represents the set of meaning nodes We remark that edges only exist between word nodes and meaning nodes A classical useful tool in graph theory is the matrix representation of graphs Here we only consider the adjacency matrix 
Let us denote by A awm the adjacency matrix for the bipartite graph B From the bipartite sets top and bot representing respectively word and meaning nodes we define the rows of A as word nodes and the columns as meaning nodes where awm1 if the word w is joined with the meaning m and 0 otherwise The neighbors of order 1 of uin top are the nodes at distance 1 Nulbrace vin bot uv in Erbrace if u in bot the definition is analogous Let us denote by NNu the set of nodes at distance 2 from u The degree du of the node u is simply defined by duNu We denote by dmax W max w in W dw the maximum degree for word nodes top Analogously dmax M max m in M dm the maximum degree for meaning nodes bot The notion of clustering coefficient in classical graphs captures the fact that when there is an edge between two nodes they probably have common neighbors More generally such notion captures correlations between neighborhoods Based on this point of view BIBREF26 proposed a clustering coefficient notion for bipartite graphs where ccuv is a notion of clustering defined for pairs of nodes in the same set top or bot Interestingly ccuv captures the overlap between the neighborhoods of u and v if u and v do not share neighbors ccuv0 if they have the same neighborhood ccuv1 To give an overall overview of bipartite clustering for the graph B the average bipartite clustering reads The language game is played by a finite population of participants P1p sharing both a set of words Wlbrace 1nrbrace and a set of meanings Mlbrace 1mrbrace Each player kin P is endowed with a graphbased wordmeaning mapping Bktop kbot kEk In our case Bk is a bipartite graph with two disjoint sets top k subseteq W word nodes and bot k subseteq M meaning nodes Each player k in P only knows its own graph Bk Two technical terms are introduced First we say that a player k in P knows the word w in W if w in top k Clearly this definition is equivalent to the existence of the edge wm in Ek for some m in bot k Second the ambiguity of the word w denoted aw is defined as its node degree dw The dynamics of the language game is based on pairwise speakerhearer interactions at discrete time steps At t geqslant 0 a pair of players is selected uniformly at random one plays the role of speaker s and the other plays the role of hearer h where sh in P Each speakerhearer communicative interaction is defined by two successive steps The speakercentered STEP 1 involves the selection of a meaning and a word to transmit them At STEP 2 the hearer receives the wordmeaning association and both speaker and hearer behave according to either repair or alignment strategies STEP 1 To start the communicative interaction the speaker s selects the topic of the conversation one meaning m in M To transmit the meaning m the speaker needs to choose some word denoted w There are two possibilities for the selection of w if the edge wm notin Es for any w in top s the speaker chooses uniformly at random the word w from the set W and adds the edge wm to the graph Bs otherwise if wm in Es for some w in top s the speaker calculates w based on its interests that is based on its own conflict between ambiguity and memory To calculate w for the second case wm in Es the speaker behaves according to the ambiguity parameter wp in 01 Let random in 01 be a random number Then two actions are possible if random geqslant wp the speaker calculates w as the least ambiguous word otherwise the speaker calculates w as the most ambiguous word The speaker transmits the word w to the hearer STEP 2 The hearer 
behaves as in the naming game On the one hand mutual speakerhearer agreement if the hearer knows the word w involves alignment strategies BIBREF9 On the other hand a speakerhearer disagreement if the hearer does not know the word w involves a repair strategy in order to increase the chance of future agreements that is for tprime t More precisely if the hearer knows the word w both speaker and hearer remove all edges formed by wm where w respectively belongs to top s setminus lbrace wrbrace and top h setminus lbrace wrbrace otherwise the hearer adds the edge wm to its graph Bh The population of agents is located on the vertices of a complete graph of size P100 typically called the mean field approximation For the description of other simple graph topologies see the caption of Fig FIGREF15 The population shares both a set of nW128 words and a set of mM128 meanings Starting from an initial condition in which each player k in P is associated to a bipartite graph Bk where Bkij 1 or Bkij 0 with probability 05 put differently for each possible edge ij i in W and j in M exists with probability 05 the dynamics performs a speakerhearer interaction at each discrete time step t geqslant 0 The bipartite wordmeaning mappings Bs and Bh are then reevaluated according to communicative success All results consider averages over 10 initial conditions and 3times 105 time steps We denote by tf the final time step The ambiguity parameter wp is varied from 0 to 1 with an increment of 1 Two key quantities have been analyzed for different values of wp the average population clustering cc which captures the average correlation between word neighborhoods and the effective lexicon size at time step t Vt defined as BIBREF12 BIBREF25 where Vt1 if top kn while Vt0 if top k0 Three clear domains can be noticed in the behavior of langle cc rangle versus wp at tf as shown in Fig FIGREF15 blue squares Phase I langle cc rangle increases smoothly for wp 04 indicating that for this domain there is a small correlation between word neighborhoods Full vocabularies are attained also for wp 04 Phase II a drastic transition appears at the critical domain wp in 0406 in which langle cc rangle shifts abruptly towards 1 An abrupt change in Vtf versus wp is also found Fig FIGREF16 for wp Phase III singleword languages dominate for wp 06 The maximum value of langle cc rangle indicate that word neighborhoods are completely correlated We now shift our focus from graphbased measures towards a holistic level in which we illustrate the described phase transition using bipartite graph representations of language formation We stress the fact that our framework based on a language game with players endowed with bipartite wordmeaning mappings is able to visualize the structural changes of the three phases I II and III Fig FIGREF18 display from top to bottom the bipartite wordmeaning mappings for ambiguity parameters wp in lbrace 01 0521rbrace As expected there are radical structural changes between bipartite graphs associated to such ambiguity parameters Full vocabularies are attained for wp 01 Phase I located at the hearercentered phase Zipfian vocabularies seem to appear for wp 052 Phase II where speaker and hearer costs have a similar value Finally a singleword vocabulary that is one word several meanings is exhibited for wp 1 Phase III The appearance of the threephased language behavior described here is closely related to previous results of two authors of this letter BIBREF25 Indeed in the cited paper the energylike functional eKL a 
kullbackleiblerbased measure is minimized around the parameter wp approx 05 Remarkably here it is showed numerically that around the critical parameter wp approx 052 a drastic transition for both the effective vocabulary and the bipartite average clustering tends to appear see Fig FIGREF18 A first strategy to profound on the problem established between the phase transitions described here and energybased approaches is to measure the informationtheoretic energy Omega wp tf as defined in BIBREF12 as a function of the parameter wp Omega wp tf is a combination of the respective efforts of speakers and hearers Omega wp tfwp HRS1wp HS Figure FIGREF20 showed that Omega wp tf is minimized around wp approx 05 This suggests a new way to understand language evolution and formation by reconciling models focused on selforganization and informationtheoretic accounts In this letter we have described a decentralized model of the emergence of Zipfian features in a humanlike language where agents play language games communicating with bipartite wordmeaning mappings The model evidences a phase transition that corresponds to the formation of a humanlike vocabulary satisfying Zipfian wordmeaning properties Our central graphmining tool has been a notion of clustering for bipartite graphs This function allowed us to suggest that the drastic transition is in some sense a qualitative transition in words correlations To further understand the nature of the described transition we remark a recent proposal BIBREF28 reinterpreting an old question about language learning with a novel approach if language learning by a child involves setting many parameters to what extent all these need to be innate According to the Principles and Parameters theory BIBREF29 children are biologically endowed with a general grammar and then the simple exposition to a particular language for example Quechua fixes its syntax by equalizing parameters This debate was illuminated by proposing a statistical mechanics approach in which the distribution of grammar weights where language is modeled by weighted contextfree grammars evidences a drastic transition Language learning is for this proposal a transition from a random model of grammar parameterweights to the one in which deep structure that is syntax is encountered Here the language learning problem is situated in a decentralized process with agents negotiating a common wordmeaning mapping exhibiting Zipfian scaling properties Interestingly our approach can shed light on the debate opened by BIBREF28 Indeed our model questioned first the fact that language learning is traditionally viewed as an individual process without any consideration of population structure in general language games question this fact Secondly we argue that our view pointed out the minimal necessity of cognitive principles for cultural language formation the least effort principle We hypothesize that players only need the most basic cognitive features for language learning and formation and the rest is an emergent property from the local speakerhearer interactions It is interesting to remark that several works have stressed the fact that language formation can be viewed as a phase transition within an informationtheoretic approach BIBREF12 BIBREF13 BIBREF14 BIBREF15 BIBREF16 Future work could explore an intriguing hypothesis Zipfian properties have strong consequences for syntax and symbolic reference BIBREF30 has proposed indeed that Zipfs law is a necessary precondition for full syntax and for going beyond simple 
wordmeaning mappings They hypothesized moreover that the appearance of syntax have been as abrupt as the transition to Zipfs law This is a goal for future work to propose a decentralized model in which agents constrained by specific cognitive features develop a Zipfian language that acts as a precondition for the abrupt transition to simple forms of syntax based for example on BIBREF31 Another related research line arises from models assuming the interplay between maximization of the information transfer and minimization of the entropy of signals see for example BIBREF32 As previously remarked these models evidence a lack of population structure Current work asks how a community of individuals playing the language game proposed here can minimize the energy functional Omega lambda of wordmeaning mappings We may hypothesize that reaching global consensus at the critical phase that is for wp approx 05 is closely related to the global minima of Omega lambda as a first approach to this problem see Section SECREF19 This idea opens fascinating novel ways to study human language reconciling models seeing communication as a global minima of information entropic energies and models focused on populations selforganizing themselves towards a shared consensus FU thanks CONICYT Chile for financial support under the Grant 3180227 </s>
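To make the graph-mining side of the paper above more tangible, here is a small sketch of the pairwise bipartite clustering coefficient it relies on: the overlap between the meaning-neighbourhoods of two words, which is 0 for disjoint neighbourhoods and 1 for identical ones. The toy word-meaning mapping, the names, and the simple all-pairs averaging are assumptions for illustration; the paper follows the definition of BIBREF26, where the average is taken over pairs of nodes at distance two.

```python
# Minimal sketch of the bipartite clustering measure used above
# (pairwise neighbourhood overlap), computed on a toy word-meaning mapping.
# Variable names and the toy graph are illustrative assumptions,
# not the authors' implementation.

from itertools import combinations

# bipartite word-meaning mapping: word -> set of meanings it is linked to
B = {
    "w1": {"m1", "m2"},
    "w2": {"m2", "m3"},
    "w3": {"m1", "m2"},
}

def cc_pair(nu, nv):
    """Overlap between two neighbourhoods: 0 if disjoint, 1 if identical."""
    if not nu and not nv:
        return 0.0
    return len(nu & nv) / len(nu | nv)

def average_bipartite_clustering(graph):
    """Average cc(u, v) over all pairs of nodes on the same side (here: words)."""
    pairs = list(combinations(graph, 2))
    if not pairs:
        return 0.0
    return sum(cc_pair(graph[u], graph[v]) for u, v in pairs) / len(pairs)

print(average_bipartite_clustering(B))  # ~0.56 for the toy mapping above
```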
<s>[INST] Improving Pre-Trained Multilingual Models with Vocabulary Expansion [/INST] Comments An approach to handle the OOV issue in multilingual BERT is proposed A great deal of nice experiments were done but ultimately and in message board discussions the reviewers agreed there wasnt enough novelty or result here to justify acceptance The paper addresses the OOV problem of multilingual language models In particular the authors extend Bert by a mapping from monolingual language models and evaluate it on a variety of tasks both tokenlevel and sentencelevel prediction tasks The main strengths of the paper are a sound experimental part with many experiments and results that give a clear picture of the pros and cons of the approach The main weakness of the paper in my opinion lies in the presentation and motivation of the methods which I find a bit confusing see my clarification questions below for details and which I hope the authors will improved for the cameraready version of the paper An exploration how BERT performs in multilingual settings and how it could be improved I dont see risks with accepting this paper Overall Recommendation 4 If I understood the introduction correctly you do not want to train monolingual models on largescale corpora because this is timeconsuming and resourceintensive However isnt that exactly what you need to do in order to apply your mapping methods line 298 Typically OOV problems are investigated at the word level and taken as solved at the subword level Since byte pair encoding falls back to single characters if needed I am a bit suprised that OOV seems to be such a big issue with byte pair encoding Can the authors explain how this happens Or give examples for OOVs in the different languages In Table 2 for example the OOVsw numbers are pretty large for Arabic possibly because of the different alphabet but also for Polish and Swedish In the introduction you motivate that vocabulary sizes for multilingual models are typically small because of expensive softmax computations What about using classbased LMs with languages as classes so that you can first predict the language and then the word given that language as in Mikolov et al 2011 Extensions of recurrent neural network language model Line 160 I find it surprising to just ignore the shared subwords Can you provide numbers please for how many subwords are actually shared among the language you consider to see if you can savely ignore them I would assume that a lot of them are actually shared How can you get noncontextualized embeddings from BERT This paper proposes three approaches to address the outofvocabulary problem in multilingual BERT as many languages share the same vocabulary the vocabulary size for each language particularly lowresource ones is comparatively small The first approach learns a mapping between a languagespecific pretrained embedding space from fastText and the BERT embedding space as in Madhyastha et al 2016 The second approach maps all languages to the English embedding space using a crosslingual word embedding method MUSE Then this joint embedding space is again mapped to the BERT embedding space with another transformation The third approach represents all subwords as a mixture of English subwords as in Gu et al 2018 The first two approaches are found not to perform competitively so many of the experiments are done only with the third approach The paper reports results on POS tagging codemixed sequence labeling and reading comprehension where it creates a new Chi neseEnglish dataset 
and MT quality estimation The mixture model slightly outperforms multilingual BERT Strengths The paper addresses a timely and relevant problem The paper conducts a large number of experiments Weaknesses One thing I would have liked to see that would motivate the problem that this paper is addressing is an analysis that shows how small the vocabulary actually is Such an analysis would also help make clear whether this is only a problem that appears in a massively multilingual setting or whether this is already an issue with five or ten languages A shared multilingual vocabulary is a feature that is not unique to BERT but can be found in any model that is jointly trained with many languages It would have been good to compare this approach with another model either another Transformerbased model that uses subword embeddings such as GPT an LSTM based model with subword embeddings or a multilingual NMT model I found it somewhat irritating that the first two methods are presented in detail and after the first experiment section only the third method is used as the first two do not perform very well IMO it would strengthen the paper if they were either discussed less or if it was more analyzed why they do not work well Sgaard et al 2018 find that embedding spaces learned with different methods cannot be easily mapped onto each other as discussed in Section 32 To control for this effect it would have been nice to try mapping to an embedding space obtained with the same method eg the English or Chinese BERT and investigate if the methods still perform poorly The paper mainly applies existing methods to a novel model minor The baselines in Table 2 are somewhat outdated A more recent method to compare against is Yasunaga et al NAACL 2018 httpsarxivorgabs171104903 The community would become aware of some ways to address the outofvocabulary setting with stateoftheart models even though most of these methods have been proposed before While the outofvocabulary problem in large multilingual models is an important issue in my opinion this paper leaves too many questions open and misses out on investigating and analyzing important issues Overall Recommendation 2 Conneau et al 2017 should be Conneau et al 2018 the paper was presented at ICLR 2018 The paper cites Conneau et al 2017 several times as a reference for the observation that it is difficult to map embeddings learned with different methods eg in line 340 This is not observed in this paper as far as Im aware Sgaard et al 2018 should be cited here instead Sgaard et al 2018 should also be cited for the bilingual lexicon using identical subwords line 311 Sgaard A Ruder S Vuli I 2018 On the Limitations of Unsupervised Bilingual Dictionary Induction In Proceedings of ACL 2018 This paper investigates 3 methods for expanding vocabulary for multilingual pretrained word embeddings such as multilingual BERT using subword units The argument is that the vocabulary for each language in pretrained multilingual word embeddings is usually small and it is costly to retrain with bigger vocab This lead to a high outofvocab rate for downstream tasks which lowers the performance They expand the vocab of target pretrained word embeddings by mapping language specific embeddings to this space They experimented with 3 mapping methods independent mapping joint mapping and mixture mapping on various tasks including both tokenlevel to sequencelevel tasks In overall they achieved better accuracy compared with the pretrained word embeddings without vocab expansion I quite like the 
discussion part in section 36 shedding more light on the performance gain The strength of this paper is the extensive set of experiments but their best model using mixture map ping is already proposed in Gu et al 2018 limiting their contribution A simple method with good results in many tasks lack of novelty Overall Recommendation 35 </s>
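The reviews above describe the paper's first approach, a linear mapping from a monolingual fastText space into the multilingual BERT embedding space, only in words. As a rough illustration of what such a mapping could look like, the sketch below fits a least-squares transformation on anchor subwords shared by both vocabularies. The dimensions, the solver, the random toy anchors, and the function names are assumptions for demonstration and are not taken from the paper under review.

```python
# Minimal sketch of a linear map from a monolingual (e.g. fastText) embedding
# space into a multilingual BERT input-embedding space, fitted on subwords
# shared by both vocabularies. All details here are illustrative assumptions.

import numpy as np

def fit_linear_map(X_src, Y_tgt):
    """Least-squares W such that X_src @ W ~= Y_tgt.
    X_src: (k, d_src) source vectors, Y_tgt: (k, d_tgt) target vectors
    for k anchor subwords present in both vocabularies."""
    W, *_ = np.linalg.lstsq(X_src, Y_tgt, rcond=None)
    return W                                   # (d_src, d_tgt)

# toy example with random "anchor" vectors
rng = np.random.default_rng(0)
X_anchor = rng.normal(size=(1000, 300))                    # e.g. fastText dim 300
Y_anchor = X_anchor @ rng.normal(size=(300, 768)) * 0.1    # e.g. BERT dim 768
W = fit_linear_map(X_anchor, Y_anchor)

# project an out-of-vocabulary subword's monolingual vector into the target space
oov_vec = rng.normal(size=(1, 300))
projected = oov_vec @ W                        # usable as a new input embedding
print(projected.shape)                         # (1, 768)
```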
<s>[INST] Explaining Predictions of Non-Linear Classifiers in NLP [/INST] Following seminal work by Bengio and Collobert, the use of deep learning models for natural language processing (NLP) applications has received increasing attention in recent years. In parallel, initiated by the computer vision domain, there is also a trend toward understanding deep learning models through visualization techniques BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5 or through decision tree extraction BIBREF6. Most work dedicated to understanding neural network classifiers for NLP tasks BIBREF7, BIBREF8 uses gradient-based approaches. Recently, a technique called layer-wise relevance propagation (LRP) BIBREF4 has been shown to produce more meaningful explanations in the context of image classification BIBREF9. In this paper we apply the same LRP technique to an NLP task, where a neural network maps a sequence of word2vec vectors representing a text document to its category, and evaluate whether similar benefits in terms of explanation quality are observed. In the present work we contribute by (1) applying the LRP method to the NLP domain, (2) proposing a technique for quantitative evaluation of explanation methods for NLP classifiers, and (3) qualitatively and quantitatively comparing two different explanation methods, namely LRP and a gradient-based approach, on a topic categorization task using the 20Newsgroups dataset. We consider the problem of explaining a prediction $f(x)$ associated to an input $x$ by assigning to each input variable $x_d$ a score $R_d$ determining how relevant the input variable is for explaining the prediction. The scores can be pooled into groups of input variables (e.g. all word2vec dimensions of a word, or all components of an RGB pixel), such that they can be visualized as heatmaps of highlighted texts, or as images. Layer-wise relevance propagation BIBREF4 is a newly introduced technique for obtaining these explanations. It can be applied to various machine learning classifiers such as deep convolutional neural networks. The LRP technique produces a decomposition of the function value $f(x)$ on its input variables that satisfies the conservation property $f(x) = \textstyle\sum_d R_d$ (Eq. 3). The decomposition is obtained by performing a backward pass on the network, where for each neuron the relevance associated with it is redistributed to its predecessors. Considering neurons mapping a set of $n$ inputs $x_i$, $i \in [1, n]$, to the neuron activation $x_j$ through the sequence of functions
$z_{ij} = x_i w_{ij} + \frac{b_j}{n}$
$z_j = \sum_i z_{ij}$
$x_j = g(z_j)$
where, for convenience, the neuron bias $b_j$ has been distributed equally to each input neuron, and where $g(\cdot)$ is a monotonously increasing activation function. Denoting by $R_i$ and $R_j$ the relevance associated with $x_i$ and $x_j$, the relevance is redistributed from one layer to the other by defining messages $R_{i \leftarrow j}$ indicating how much relevance must be propagated from neuron $x_j$ to its input neuron $x_i$ in the lower layer. These messages are defined as
$R_{i \leftarrow j} = \frac{z_{ij} + \frac{s(z_j)}{n}}{\sum_{i} z_{ij} + s(z_j)} \, R_j$
where $s(z_j) = \epsilon \cdot (1_{z_j \ge 0} - 1_{z_j < 0})$ is a stabilizing term that handles near-zero denominators, with $\epsilon$ set to 0.01. The intuition behind this local relevance redistribution formula is that each input $x_i$ should be assigned relevance proportionally to its contribution in the forward pass, in a way that the relevance is preserved ($\sum_i R_{i \leftarrow j} = R_j$). Each neuron in the lower layer receives relevance from all upper-level neurons to which it contributes:
$R_i = \textstyle\sum_j R_{i \leftarrow j}$
This pooling ensures layer-wise conservation: $\sum_i R_i = \sum_j R_j$. Finally, in a max-pooling layer, all relevance at the output of the layer is redistributed to the pooled neuron with maximum activation (i.e. winner-take-all). An implementation of LRP can be found in BIBREF10 and downloaded from www.heatmapping.org. An alternative procedure called sensitivity analysis (SA) produces explanations by scoring input variables based on how they affect the decision output locally BIBREF11, BIBREF12. The sensitivity of an input variable is given by its squared partial derivative:
$R_d = \Big( \frac{\partial f}{\partial x_d} \Big)^2$
Here we note that unlike LRP, sensitivity analysis does not preserve the function value $f(x)$, but the squared $l_2$-norm of the function gradient: $\Vert \nabla_x f(x) \Vert_2^2 = \textstyle\sum_d R_d$ (Eq. 6). This quantity is however not directly related to the amount of evidence for the category to detect. Similar gradient-based analyses BIBREF7, BIBREF8 have been recently applied in the NLP domain, and were also used by Simonyan in the context of image classification. While recent work uses different relevance definitions for a group of input variables (e.g. gradient magnitude in Denil, or max-norm of absolute value of simple derivatives in Simonyan), in the present work, unless otherwise stated, we employ the squared $l_2$-norm of gradients, allowing for decomposition of Eq. 6 as a sum over relevances of input variables. For the following experiments we use the 20news-bydate version of the 20Newsgroups dataset, consisting of 11314/7532 train/test documents evenly distributed among twenty fine-grained categories. As a document classifier we employ a word-based CNN similar to Kim, consisting of the following sequence of layers:
$\texttt{Conv} \rightarrow \texttt{ReLU} \rightarrow \texttt{1-Max-Pool} \rightarrow \texttt{FC}$
By 1MaxPool we denote a maxpooling layer where the pooling regions span the whole text length as introduced in BIBREF13 Conv ReLU and FC denote the convolutional layer rectified linear units activation and fullyconnected linear layer For building the CNN numerical input we concatenate horizontally 300dimensional pretrained word2vec vectors BIBREF14 in the same order the corresponding words appear in the preprocessed document and further keep this input representation fixed during training The convolutional operation we apply in the first neural network layer is onedimensional and along the text sequence direction ie along the horizontal direction The receptive field of the convolutional layer neurons spans the entire word embedding space in vertical direction and covers two consecutive words in horizontal direction The convolutional layer filter bank contains 800 filters As preprocessing we remove the document headers tokenize the text with NLTK filter out punctuation and numbers and finally truncate each document to the first 400 tokens We train the CNN by stochastic minibatch gradient descent with momentum with l2 norm penalty and dropout Our trained classifier achieves a classification accuracy of 8019 Due to our input representation applying LRP or SA to our neural classifier yields one relevance value per wordembedding dimension From these single input variable relevances to obtain wordlevel relevances we sum up the relevances over the word embedding space in case of LRP and unless otherwise stated take the squared l2 norm of the corresponding word gradient in case of SA More precisely given an input document d consisting of a sequence w1 w2 wN of N words each word being represented by a D dimensional word embedding we compute the relevance Rwt of the tmathrm th word in the input document through the summation Rwt sum i1D Ri t Eq 14 where Ri t denotes the relevance of the input variable corresponding to the imathrm th dimension of the tmathrm th word embedding obtained by LRP or SA as specified in Sections LayerWise Relevance Propagation Sensitivity Analysis In particular in case of SA the above word relevance can equivalently be expressed as Rmathrm SAwt Vert nabla wt fd Vert 22 Eq 15 where fd represents the classifiers prediction for document d Note that the resulting LRP word relevance is signed while the SA word relevance is positive In all experiments we use the term target class to identify the function fx to analyze in the relevance decomposition This function maps the neural network input to the neural network output variable corresponding to the target class In order to evaluate different relevance models we perform a sequence of word deletions hereby for deleting a word we simply set the wordvector to zero in the input document representation and track the impact of these deletions on the classification performance We carry out two deletion experiments starting either with the set of test documents that are initially classified correctly or with those that are initially classified wrongly We estimate the LRPSA word relevances using as target class the true document class Subsequently we delete words in decreasing resp increasing order of the obtained word relevances Fig 1 summarizes our results We find that LRP yields the best results in both deletion experiments Thereby we provide evidence that LRP positive relevance is targeted to words that support a classification decision while LRP negative relevance is tuned upon words that inhibit this decision In the first experiment 
the SA classification accuracy curve decreases significantly faster than the random curve representing the performance change when randomly deleting words indicating that SA is able to identify relevant words However the SA curve is clearly above the LRP curve indicating that LRP provides better explanations for the CNN predictions Similar results have been reported for image classification tasks BIBREF9 The second experiment indicates that the classification performance increases when deleting words with the lowest LRP relevance while small SA values points to words that have less influence on the classification performance than random word selection This result can partly be explained by the fact that in contrast to SA LRP provides signed explanations More generally the different quality of the explanations provided by SA and LRP can be attributed to their different objectives while LRP aims at decomposing the global amount of evidence for a class fx SA is build solely upon derivatives and as such describes the effect of local variations of the input variables on the classifier decision For a more detailed view of SA as well as an interpretation of the LRP propagation rules as a deep Taylor decomposition see MontavonArXiv15 Wordlevel relevances can be used for highlighting purposes In Fig 2 we provide such visualizations on one test document for different relevance target classes using either LRP or SA relevance models We can observe that while the word ride is highly negativerelevant for LRP when the target class is not recmotorcycles it is positively highlighted even though not heavily by SA This suggests that SA does not clearly discriminate between words speaking for or against a specific classifier decision while LRP is more discerning in this respect Word2vec embeddings are known to exhibit linear regularities representing semantic relationships between words BIBREF14 We explore if these regularities can be transferred to a document representation when using as a document vector a linear combination of word2vec embeddings As a weighting scheme we employ LRP or SA scores with the classifiers predicted class as the target class for the relevance estimation For comparison we perform uniform weighting where we simply sum up the word embeddings of the document words SUM For SA we use either the l2 norm or squared l2 norm for pooling word gradient values along the word2vec dimensions ie in addition to the standard SA word relevance defined in Eq 15 we use as an alternative Rmathrm SAl2wt Vert nabla wt fd Vert 2 and denote this relevance model by SA l2 For both LRP and SA we employ different variations of the weighting scheme More precisely given an input document d composed of the sequence w1 w2 wN of D dimensional word2vec embeddings we build new document representations dprime and dprime mathrm ew by either using wordlevel relevances Rwt as in Eq 14 or through elementwise multiplication of word embeddings with single input variable relevances Ri ti in 1D we recall that Ri t is the relevance of the input variable corresponding to the imathrm th dimension of the w1 w2 wN0 word in the input document w1 w2 wN1 More formally we use dprime sum t1N Rwt cdot wt Eq 23 or dprime mathrm ew sum t1N beginbmatrix R1 t R2 t vdots RD t endbmatrix odot wt Eq 24 where odot is an elementwise multiplication Finally we normalize the document vectors dprime resp dprime mathrm ew to unit l2 norm and perform a PCA projection In Fig 3 we label the resulting 2Dprojected test documents using five toplevel 
document categories For wordbased models dprime we observe that while standard SA and LRP both provide similar visualization quality the SA variant with simple l2 norm yields partly overlapping and dense clusters still all schemes are better than uniform weighting In case of SA note that even though the power to which word gradient norms are raised l2 or l22 affects the present visualization experiment it has no influence on the earlier described word deletion analysis For elementwise models dprime mathrm ew we observe slightly better separated clusters for SA and a clearcut cluster structure for LRP Through word deleting we quantitatively evaluated and compared two classifier explanation models and pinpointed LRP to be more effective than SA We investigated the application of wordlevel relevance information for document highlighting and visualization We derive from our empirical analysis that the superiority of LRP stems from the fact that it reliably not only links to determinant words that support a specific classification decision but further distinguishes within the preeminent words those that are opposed to that decision Future work would include applying LRP to other neural network architectures eg characterbased or recurrent models on further NLP tasks as well as exploring how relevance information could be taken into account to improve the classifiers training procedure or prediction performance This work was supported by the German Ministry for Education and Research as Berlin Big Data Center BBDC 01IS14013A and the Brain Korea 21 Plus Program through the National Research Foundation of Korea funded by the Ministry of Education </s>
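To make the relevance pooling and relevance-weighted document vectors of the preceding passage concrete, here is a minimal NumPy sketch. The embeddings, LRP relevances and gradients are random placeholders standing in for the trained CNN's word2vec inputs, its LRP backward pass and its gradient computation; only the pooling and weighting arithmetic (Eqs. 14, 15, 23, 24) is meant to be illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 12, 300                       # words per document, embedding dimension

# Placeholder inputs: in the paper these come from word2vec and the trained CNN.
E = rng.normal(size=(N, D))          # word embeddings w_1 ... w_N
R = rng.normal(size=(N, D))          # LRP relevance per embedding dimension, R[i, t]
G = rng.normal(size=(N, D))          # gradient of f(d) w.r.t. each embedding dimension

# Word-level relevances (Eq. 14 and Eq. 15)
r_lrp = R.sum(axis=1)                # LRP: sum over embedding dimensions (signed)
r_sa = (G ** 2).sum(axis=1)          # SA: squared l2 norm of the word gradient

# Relevance-weighted document vectors (Eq. 23 and Eq. 24)
d_word = (r_lrp[:, None] * E).sum(axis=0)   # word-level weighting
d_ew = (R * E).sum(axis=0)                  # element-wise weighting

# Normalize to unit l2 norm before the PCA projection
d_word /= np.linalg.norm(d_word)
d_ew /= np.linalg.norm(d_ew)

# 2D PCA projection of a collection of such document vectors (stand-in data)
docs = rng.normal(size=(100, D))
docs -= docs.mean(axis=0)
_, _, Vt = np.linalg.svd(docs, full_matrices=False)
projected = docs @ Vt[:2].T                 # shape (100, 2), as used for Fig 3-style plots
```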
<s>[INST] Hate Speech Detection on Vietnamese Social Media Text using the Bidirectional-LSTM Model [/INST] In recent years social networking has grown and become prevalent with every people it makes easy for people to interact and share with each other However every problem has two sides It also has some negative issues hate speech is a hot topic in the domain of social media With the freedom of speech on social networks and anonymity on the internet some people are free to comment on hate and insults Hate speech can have an adverse effect on human behavior as well as directly affect society We dont manually delete each of those comments which is timeconsuming and boring This spurs research to build an automated system that detects hate speech and eliminates them With that system we can detect and eliminate hate speech and thus reduce their spread on social media With Vietnamese we can use methods to apply specific extraction techniques manually and in combination with string labeling algorithms such as Conditional Random Field CRF1 Model Hidden Markov HMM2 or Entropy3 However we have to choose the features manually to bring the model with high accuracy Deep Neural Network architectures can handle the weaknesses of the above methods In this report we apply Bidirectional Long ShortTerm Memory BiLSTM to build the model Also combined with the word embedding matrix to increase the accuracy of the model The rest of the paper is organized as follows In section 2 we presented the related work In section 3 we described our BiLSTM system In sections 4 and 5 we presented the experimental process and results Finally section 6 gives conclusions about the work Gao and Huang 20174 used BiLSTMs with attention mechanism 372 to detect hate speech They illustrated that the Bidirectional LSTM model with attention mechanism achieves the high performance They hypothesize that this is because hate indicator phrases are often concentrated in a small region of a comment which is especially the case for long comments Davidson et al 20175 train a model to differentiate among three classes containing hate speech only offensive language or neitherJing Qian Mai ElSherief Elizabeth Belding William Yang Wang 2018 6 worked on classifying a tweet as racist sexist or neither by multiple deep learning architectures ABARUAH at SemEval2019 7 presented the results obtained using bidirectional long shortterm memory BiLSTM with and without attention and Logistic Regression LR models for multilingual detection of hate speech against immigrants and women in Twitter Animesh Koratana and Kevin Hu 8 use many machine learning models to detect toxic words in which the BiLstm model got the highest performance Malmasi and Zampieri 20179 made a similar study to compare the performance of different features in detecting hate speech As mentioned previously we propose a framework based on the ensemble of BiLSTM models to perform hate speech detection with the provided dataset Besides we also implemented some more models to compare and find the optimal model for the task LSTM takes words from an input sentence in a distributed word representation format LSTMs network architecture includes memory cells and ports that allow the storage or retrieval of information These gates help the LSTM memory cell to perform a write read and reset operation They enable the LSTM memory cell to store and access information over a period of time One drawback of LSTM architecture10 is that they are only considering the previous context However the identification 
of a word depends not only on the previous context but also on the subsequent context Bidirectional LSTM BiLSTM11 was created to overcome this weakness A BiLSTM architecture usually contains two single LSTM networks used simultaneously and independently to model input chains in two directions forward LSTM and backward LSTM The preprocessing performed on the text includes the following The comments were converted to lowercase The URLs mentions and nonalphabetic characters are removed number excess whitespace Several stopwords were removed from the comments We dont remove all stopword because having a few stopwords affect the results A few characters that dont affect the results are replaced by an empty string Using Tokenizer to convert each comment into a sequence of integers VLSP Shared Task 2019 Hate Speech Detection on Social Networks This dataset includes 25431 items in csv format the dataset was divided into two file training dataset with 20345 items and test dataset with 5086 items Each data line of training dataset is assigned 1 of 3 labels CLEAN OFFENSIVE or HATE The test dataset is not assign label The statistic summarization of the given training dataset is described in Table I Hate speech HATE contains the abusive language which often bears the purpose of insulting individuals or groups and can include hate speech derogatory and offensive language An item is identified as hate speech if it 1 targets individuals or groups on the basis of their characteristics 2 demonstrates a clear intention to incite harm or to promote hatred 3 may or may not use offensive or profane words Offensive but not hate speech OFFENSIVE is an item postscomments may contain offensive words but it does not target individuals or groups on the basis of their characteristics Neither offensive nor hate speech CLEAN is a normal item Its conversations expressing emotions normally It does not contain offensive language or hate speech In this paper we use two different word embeddings to compare and find out the best word embedding such as Word2Vec 17 and FastText 16 We used pretrained vector with large dimensions to increase the accuracy of the model Through experiments we found FastText achieved better results For this public dataset we find that the dataset is an unbalanced dataset The CLEAN label has the highest rate with 9149 and the HATE label is lowest with 349 Therefore it is difficult and challenging to find a good model for this task For problems of this type there are many models suitable to handle such as SVM BiLTSM LR GRU CNN and etc To solve this problem we implement four different models SVM LR BiLSTM and GRU to compare and find the most suitable one To evaluate the four models on this task we divide the training dataset into two parts training testing rate of 80 20 respectively The details of our models are provided below 1 Support Vector Machine SVM Support Vector Machines SVMs are a popular machine learning method for classification regression and other learning tasks 13 It is often used for twoclass classification problems For this problem it has three labels so we use the SVM to classify twice two label at a timeFirstly we classify two label 0 and 1 we achieved accuracy precision recall and F1score rates of 9600 9337 9896 and 9608 respectively on training dataset Second time we classify two label 1 and 2 we achieved accuracy precision recall and F1score rates of 8434 8738 7886 and 8290 respectively We find that this model doesnt classify well for two labels 1 and 2 Moreover when we check this 
model with the publictest it brings the result as not good as we expected with 6387 2 Logistic Regression LR Logistic regression is basically a supervised classification algorithm In a classification problem the target variableor output can take only discrete values for a given set of featuresor inputs 14 We have applied it to this problem as follows Firstly we use the TfidfVectorizer tool to convert text into feature vectors that are used as input for the model Then we used the Logistic Regression model to predict the classification results When checking it on training datasets we achieved accuracy precision recall and F1score rates of 9417 8887 5554 and 6415 respectively We also try submitting this models result on the system the result is worse we thought with 5115 3 Gated Recurrent Units GRU The Recurrent Neural Network RNN handles the variablelength sequence by having a recurrent hidden state whose activation at each time is dependent on that of the previous time 15 The GRU is a variant of RNN and it only has two inputs We have used it into this problem as follows We have used it with word embeddings Fasttext 16 First we use Tokenizer for sequences because GRU is good at processing long sequences Then we have applied this model to the problem We achieved accuracy precision recall and F1score rates of 9461 6712 5966 and 6415 respectively on the training dataset When we check this model with the public dataset it brings the result quite good with 6501 4 Bidirectional Long ShortTerm Memory BiLSTM The LSTM is a famous variant of RNN The Bidirectional Long Short Term Memory can be trained using all available input information in the past and future of a specific time frame We have applied it with word embeddings Fasttext 16 and baomoivnmodeltxt 17 As follows As well as the GRU we also used GloVe Embedding for sequences and applied this model for the problem When using word embeddings baomoivnmodeltxt 17 we achieved accuracy precision recall and F1score rates of 9326 9074 5030 and 5362 respectively on the training dataset The same with word embeddings Fasttext 16 we achieved accuracy precision recall and F1score rates of 9567 8561 6736 and 7384 respectively on the training dataset We find that when combining the BiLSTM with fasttext will bring the result better When we check it with the public dataset it brings the result good with 7143 After conducting experiments on many models we obtained the following results on publictest shown in Table TABREF16 We achieved the best result with BiLSTM ranking the 2nd of the scoreboard on the publictest set shown in Table TABREF17 However our result ranks the 6th of the scoreboard on the privatetest set In this paper we have presented our approach to address Vietnamese hate speech detection task proposed at the VLSP Shared Task 2019 We develop the system using Bidirectional Long Short Memory for classifying three different labels in this task We participate in this and evaluate the performance of our system on this dataset As a result our result is 7143 of F1score ranking the 2nd of the scoreboard on the publictest set In the future work we plan to address this problem in different ways to enhance the performance of this task We will investigate experiments both in traditional machine learning and types of deep learning for this problem In addition we also analyze experimental results on this task to choose the efficient approach such as the hybrid approach which combines machine learning and rulebased approaches to boost the result of detecting hate 
speech on Vietnamese social media text We would like to thank the VLSP Shared Task 2019 organizers for their hard work and for providing the dataset of Vietnamese Hate Speech Detection on social networks for our experiments </s>
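As a rough illustration of the kind of BiLSTM classifier the preceding paper describes, here is a minimal PyTorch sketch: pretrained word embeddings (e.g. fastText-style vectors), a single bidirectional LSTM, and a 3-way softmax over CLEAN / OFFENSIVE / HATE. The vocabulary size, hidden size, dropout rate and the max-pooling over time are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Minimal BiLSTM text classifier for three labels: CLEAN / OFFENSIVE / HATE."""
    def __init__(self, vocab_size=20000, emb_dim=300, hidden=128, num_classes=3,
                 pretrained=None):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        if pretrained is not None:                    # e.g. fastText vectors
            self.embedding.weight.data.copy_(pretrained)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(0.3)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, token_ids):
        x = self.embedding(token_ids)                 # (batch, seq, emb_dim)
        out, _ = self.bilstm(x)                       # (batch, seq, 2*hidden)
        pooled, _ = out.max(dim=1)                    # max-pool over time (one design choice)
        return self.fc(self.dropout(pooled))          # class logits

# toy usage: 4 tokenized comments of 50 token ids each
model = BiLSTMClassifier()
dummy_batch = torch.randint(1, 20000, (4, 50))
logits = model(dummy_batch)                           # (4, 3)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 2, 0]))
```

The pooling over the BiLSTM outputs could equally be the concatenation of the final forward and backward hidden states; the paper does not pin this detail down, so the max-pool here is just one reasonable choice.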
<s>[INST] Stochastic Answer Networks for Machine Reading Comprehension [/INST] Machine reading comprehension MRC is a challenging task the goal is to have machines read a text passage and then answer any question about the passage This task is an useful benchmark to demonstrate natural language understanding and also has important applications in eg conversational agents and customer service support It has been hypothesized that difficult MRC problems require some form of multistep synthesis and reasoning For instance the following example from the MRC dataset SQuAD BIBREF0 illustrates the need for synthesis of information across sentences and multiple steps of reasoning Q What collection does the VA Theator Performance galleries hold P The VA Theator Performance galleries opened in March 2009 They hold the UKs biggest national collection of material about live performance To infer the answer the underlined portion of the passage P the model needs to first perform coreference resolution so that it knows They refers VA Theator then extract the subspan in the direct object corresponding to the answer This kind of iterative process can be viewed as a form of multistep reasoning Several recent MRC models have embraced this kind of multistep strategy where predictions are generated after making multiple passes through the same text and integrating intermediate information in the process The first models employed a predetermined fixed number of steps BIBREF1 BIBREF2 BIBREF3 BIBREF4 Later shen2016reasonet proposed using reinforcement learning to dynamically determine the number of steps based on the complexity of the question Further shen2017empirical empirically showed that dynamic multistep reasoning outperforms fixed multistep reasoning which in turn outperforms singlestep reasoning on two distinct MRC datasets SQuAD and MS MARCO In this work we derive an alternative multistep reasoning neural network for MRC During training we fix the number of reasoning steps but perform stochastic dropout on the answer module final layer predictions During decoding we generate answers based on the average of predictions in all steps rather than the final step We call this a stochastic answer network SAN because the stochastic dropout is applied to the answer module albeit simple this technique significantly improves the robustness and overall accuracy of the model Intuitively this works because while the model successively refines its prediction over multiple steps each step is still trained to generate the same answer we are performing a kind of stochastic ensemble over the models successive prediction refinements Stochastic prediction dropout is illustrated in Figure 1 The machine reading comprehension MRC task as defined here involves a question Qlbrace q0 q1 qm1rbrace and a passage Plbrace p0 p1 pn1rbrace and aims to find an answer span Albrace astart aendrbrace in P We assume that the answer exists in the passage P as a contiguous text string Here m and n denote the number of tokens in Q and P respectively The learning algorithm for reading comprehension is to learn a function fQ P rightarrow A The training data is a set of the query passage and answer tuples Plbrace p0 p1 pn1rbrace 0 We now describe our model from the ground up The main contribution of this work is the answer module but in order to understand what goes into this module we will start by describing how Q and P are processed by the lower layers Note the lower layers also have some novel variations that are not used in previous work As 
shown in Figure 2 our model contains four different layers to capture different concept of representations The detailed description of our model is provided as follows Lexicon Encoding Layer The purpose of the first layer is to extract information from Q and P at the word level and normalize for lexical variants A typical technique to obtain lexicon embedding is concatenation of its word embedding with other linguistic embedding such as those derived from PartOfSpeech POS tags For word embeddings we use the pretrained 300dimensional GloVe vectors BIBREF5 for the both Q and P Following chen2017reading we use three additional types of linguistic features for each token pi in the passage P In summary each token pi in the passage is represented as a 600dimensional vector and each token qj is represented as a 300dimensional vector Due to different dimensions for the passages and questions in the next layer two different bidirectional LSTM BiLSTM BIBREF6 may be required to encode the contextual information This however introduces a large number of parameters To prevent this we employ an idea inspired by BIBREF7 use two separate twolayer positionwise FeedForward Networks FFN FFNxW2 ReLUW1 x b1 b2 to map both the passage and question lexical encodings into the same number of dimensions Note that this FFN has fewer parameters compared to a BiLSTM Thus we obtain the final lexicon embeddings for the tokens in Q as a matrix Eq in mathbb Rd times m and tokens in P as Epin mathbb Rd times n Contextual Encoding Layer Both passage and question use a shared twolayers BiLSTM as the contextual encoding layer which projects the lexicon embeddings to contextual embeddings We concatenate a pretrained 600dimensional CoVe vectors BIBREF8 trained on GermanEnglish machine translation dataset with the aforementioned lexicon embeddings as the final input of the contextual encoding layer and also with the output of the first contextual encoding layer as the input of its second encoding layer To reduce the parameter size we use a maxout layer BIBREF9 at each BiLSTM layer to shrink its dimension By a concatenation of the outputs of two BiLSTM layers we obtain Hqin mathbb R2d times m as representation of Q and Hpin mathbb R2d times n as representation of P where d is the hidden size of the BiLSTM Memory Generation Layer In the memory generation layer We construct the working memory a summary of information from both Q and P First a dotproduct attention is adopted like in BIBREF7 to measure the similarity between the tokens in Q and P Instead of using a scalar to normalize the scores as in BIBREF7 we use one layer network to transform the contextual information of both Q and P CdropoutfattentionhatHq hatHp in mathbb Rm times n Eq 8 C is an attention matrix Note that hatHq and hatHp is transformed from Hq and Hp by one layer neural network ReLUW3x respectively Next we gather all the information on passages by a simple concatenation of its contextual information Hp and its questionaware representation Hq cdot C Up concatHp HqC in mathbb R4d times n Eq 9 Typically a passage may contain hundred of tokens making it hard to learn the long dependencies within it Inspired by BIBREF10 we apply a selfattended layer to rearrange the information Up as hatUp Up dropdiagfattentionUp Up Eq 10 In other words we first obtain an n times n attention matrix with Up onto itself apply dropout then multiply this matrix with Up to obtain an updated hatUp Instead of using a penalization term as in BIBREF10 we dropout the diagonal of the 
similarity matrix forcing each token in the passage to align to other tokens rather than itself At last the working memory is generated by using another BiLSTM based on all the information gathered MBiLSTMUp hatUp Eq 11 where the semicolon mark indicates the vectormatrix concatenation operator Answer module There is a Chinese proverb that says wisdom of masses exceeds that of any individual Unlike other multistep reasoning models which only uses a single output either at the last step or some dynamically determined final step our answer module employs all the outputs of multiple step reasoning Intuitively by applying dropout it avoids a step bias problem where models places too much emphasis one particular steps predictions and forces the model to produce good predictions at every individual step Further during decoding we reuse wisdom of masses instead of individual to achieve a better result We call this method stochastic prediction dropout because dropout is being applied to the final predictive distributions Formally our answer module will compute over T memory steps and output the answer span This module is a memory network and has some similarities to other multistep reasoning networks namely it maintains a state vector one state per step At the beginning the initial state s0 is the summary of the Q s0sum j alpha j Hqj where alpha j fracexpw4 cdot Hqjsum jprime expw4 cdot Hqjprime At time step t in the range of lbrace 1 2 T1rbrace the state is defined by st GRUst1 xt Here xt is computed from the previous state st1 and memory s00 s01 and s02 Finally a bilinear function is used to find the begin and end point of answer spans at each reasoning step s03 Ptbegin softmaxstW6M Eq 12 Ptend softmaxst sum j PtjbeginMjW7M Eq 13 From a pair of begin and end points the answer string can be extracted from the passage However rather than output the results startend points from the final step which is fixed at T1 as in Memory Networks or dynamically determined as in ReasoNet we utilize all of the T outputs by averaging the scores Pbegin avgP0begin P1begin PT1begin Eq 14 Pend avgP0end P1end PT1end Eq 15 Each Ptbegin or Ptend is a multinomial distribution over lbrace 1ldots nrbrace so the average distribution is straightforward to compute During training we apply stochastic dropout to before the above averaging operation For example as illustrated in Figure 1 we randomly delete several steps predictions in Equations 14 and 15 so that Pbegin might be avgP1begin P3begin and Pend might be avgP0end P3end P4end The use of averaged predictions and dropout during training improves robustness Our stochastic prediction dropout is similar in motivation to the dropout introduced by BIBREF11 The difference is that theirs is dropout at the intermediate nodelevel whereas ours is dropout at the final layerlevel Dropout at the nodelevel prevents correlation between features Dropout at the final layer level where randomness is introduced to the averaging of predictions prevents our model from relying exclusively on a particular step to generate correct output We used a dropout rate of 04 in experiments Dataset We evaluate on the Stanford Question Answering Dataset SQuAD BIBREF0 This contains about 23K passages and 100K questions The passages come from approximately 500 Wikipedia articles and the questions and answers are obtained by crowdsourcing The crowdsourced workers are asked to read a passage a paragraph come up with questions then mark the answer span All results are on the official development set unless 
otherwise noted Two evaluation metrics are used Exact Match EM which measures the percentage of span predictions that matched any one of the ground truth answer exactly and Macroaveraged F1 score which measures the average overlap between the prediction and the ground truth answer Implementation details The spaCy tool is used to tokenize the both passages and questions and generate lemma partofspeech and named entity tags We use 2layer BiLSTM with d128 hidden units for both passage and question encoding The minibatch size is set to 32 and Adamax BIBREF12 is used as our optimizer The learning rate is set to 0002 at first and decreased by half after every 10 epochs We set the dropout rate for all the hidden units of LSTM and the answer module output layer to 04 To prevent degenerate output we ensure that at least one step in the answer module is active during training The main experimental question we would like to answer is whether the stochastic dropout and averaging in the answer module is an effective technique for multistep reasoning To do so we fixed all lower layers and compared different architectures for the answer module The main results in terms of EM and F1 are shown in Table 1 We observe that SAN achieves 76235 EM and 84056 F1 outperforming all other models Standard 1step model only achieves 75139 EM and dynamic steps via ReasoNet achieves only 75355 EM SAN also outperforms a 5step memory net with averaging which implies averaging predictions is not the only thing that led to SANs superior results indeed stochastic prediction dropout is an effective technique The Kbest oracle results is shown in Figure 3 The Kbest spans are computed by ordering the spans according the their probabilities Pbegin times Pend We limit K in the range 1 to 4 and then pick the span with the best EM or F1 as oracle SAN also outperforms the other models in terms of Kbest oracle scores Impressively these models achieve human performance at K2 for EM and K3 for F1 Finally we compare our results with other top models in Table 2 Note that all the results in Table 2 are taken from the published papers We see that SAN is very competitive in both single and ensemble settings ranked in second despite its simplicity Note that the bestperforming model BIBREF14 used a largescale language model as an extra contextual embedding which gave a significant improvement 43 dev F1 We expect significant improvements if we add this to SAN in future work We are interested in whether the proposed model is sensitive to different random initial conditions Table 3 shows the development set scores of SAN trained from initialization with different random seeds We observe that the SAN results are consistently strong regardless of the 10 different initializations For example the mean EM score is 76131 and the lowest EM score is 75922 both of which still outperform the 75355 EM of the Dynamic step ReasoNet in Table 1 We are also interested in how sensitive are the results to the number of reasoning steps which is a fixed hyperparameter Since we are using dropout a natural question is whether we can extend the number of steps to an extremely large number Table 4 shows the development set scores for T1 to T10 We observe that there is a gradual improvement as we increase T1 to T5 but after 5 steps the improvements have saturated In fact the EMF1 scores drop slightly but considering that the random initialization results in Table 3 show a standard deviation of 0142 and a spread of 0426 for EM we believe that the T10 result does not 
statistically differ from the T5 result In summary we think it is useful to perform some approximate hyperparameter tuning for the number of steps but it is not necessary to find the exact optimal value Finally we test SAN on two Adversarial SQuAD datasets AddSent and AddOneSent BIBREF22 where the passages contain autogenerated adversarial distracting sentences to fool computer systems that are developed to answer questions about the passages For example AddSent is constructed by adding sentences that look similar to the question but do not actually contradict the correct answer AddOneSent is constructed by appending a random humanapproved sentence to the passage We evaluate the single SAN model ie the one presented in Table 2 on both AddSent and AddOneSent The results in Table 5 show that SAN achieves the new stateoftheart performance and SANs superior result is mainly attributed to the multistep answer module which leads to significant improvement in F1 score over the Standard 1step answer module ie 12 on AddSent and 07 on AddOneSent For practical deployment scenarios prediction speed at test time is an important criterion Therefore one question is whether SAN can train with eg T5 steps but test with T1 steps Table 6 shows the results of a SAN trained on T5 steps but tested with different number of steps As expected the results are best when T matches during training and test however it is important to note that small numbers of steps T1 and T2 nevertheless achieve strong results For example prediction at T1 achieves 7558 which outperforms a standard 1step model 7514 EM as in Table 1 that has approximate equivalent prediction time The average training time per epoch is comparable our implementation running on a GTX Titan X is 22 minutes for 5step memory net 30 minutes for ReasoNet and 24 minutes for SAN The learning curve is shown in Figure 4 We observe that all systems improve at approximately the same rate up to 10 or 15 epochs However SAN continues to improve afterwards as other models start to saturate This observation is consistent with previous works using dropout BIBREF11 We believe that while training time per epoch is similar between SAN and other models it is recommended to train SAN for more epochs in order to achieve gains in EMF1 To see whether SAN performs well on a particular type of question we divided the development set by questions type based on their respective Whword such as who and where The score breakdown by F1 is shown in Figure 5 We observe that SAN seems to outperform other models uniformly across all types The only exception is the Why questions but there is too little data to derive strong conclusions MS MARCO BIBREF27 is a large scale realword RC dataset which contains 100100 100K queries collected from anonymized user logs from the Bing search engine The characteristic of MS MARCO is that all the questions are real user queries and passages are extracted from real web documents For each query approximate 10 passages are extracted from public web documents The answers are generated by humans The data is partitioned into a 82430 training a 10047 development and 9650 test tuples The evaluation metrics are BLEU BIBREF28 and ROUGEL BIBREF29 due to its freeform text answer style To apply the same RC model we search for a span in MS MARCOs passages that maximizes the ROUGEL score with the raw freeform answer It has an upper bound of 9345 BLEU and 9382 ROUGEL on the development set The MS MARCO dataset contains multiple passages per query Our model as shown in 
Figure 2 is developed to generate answer from a single passage Thus we need to extend it to handle multiple passages Following BIBREF13 we take two steps to generate an answer to a query Q from J passages P1 PJ First we run SAN on every Pj Q pair generating J candidate answer spans one from each passage Then we multiply the SAN score of each candidate answer span with its relevance score rPj Q assigned by a passage ranker and output the span with the maximum score as the answer In our experiments we use the passage ranker described in BIBREF30 The ranker is trained on the same MS MARCO training data and achieves 371 p1 on the development set The results in Table 7 show that SAN outperforms VNet BIBREF31 and becomes the new state of the art The recent big progress on MRC is largely due to the availability of the largescale datasets BIBREF0 BIBREF27 BIBREF32 BIBREF1 since it is possible to train large endtoend neural network models In spite of the variety of model structures and attenion types BIBREF33 BIBREF34 BIBREF35 BIBREF21 BIBREF13 BIBREF19 a typical neural network MRC model first maps the symbolic representation of the documents and questions into a neural space then search answers on top of it We categorize these models into two groups based on the difference of the answer module singlestep and multistep reasoning The key difference between the two is what strategies are applied to search the final answers in the neural space A singlestep model matches the question and document only once and produce the final answers It is simple yet efficient and can be trained using the classical backpropagation algorithm thus it is adopted by most systems BIBREF34 BIBREF21 BIBREF19 BIBREF18 BIBREF36 BIBREF37 BIBREF17 However since humans often solve question answering tasks by rereading and redigesting the document multiple times before reaching the final answers this may be based on the complexity of the questionsdocuments it is natural to devise an iterative way to find answers as multistep reasoning Pioneered by BIBREF1 BIBREF2 BIBREF3 BIBREF4 who used a predetermined fixed number of reasoning steps Shen et al shen2016reasonet shen2017empirical showed that multistep reasoning outperforms singlestep ones and dynamic multistep reasoning further outperforms the fixed multistep ones on two distinct MRC datasets SQuAD and MS MARCO But these models have to be trained using reinforcement learning methods eg policy gradient which are tricky to implement due to the instability issue Our model is different in that we fix the number of reasoning steps but perform stochastic dropout to prevent step bias Further our model can also be trained by using the backpropagation algorithm which is simple and yet efficient We introduce Stochastic Answer Networks SAN a simple yet robust model for machine reading comprehension The use of stochastic dropout in training and averaging in test at the answer module leads to robust improvements on SQuAD outperforming both fixed step memory networks and dynamic step ReasoNet We further empirically analyze the properties of SAN in detail The model achieves results competitive with the stateoftheart on the SQuAD leaderboard as well as on the Adversarial SQuAD and MS MARCO datasets Due to the strong connection between the proposed model with memory networks and ReasoNet we would like to delve into the theoretical link between these models and its training algorithms Further we also would like to explore SAN on other tasks such as text classification and natural language 
inference for its generalization in the future We thank Pengcheng He Yu Wang and Xinying Song for help to set up dockers We also thank Pranav Samir Rajpurkar for help on SQuAD evaluations and the anonymous reviewers for valuable discussions and comments </s>
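The core of the SAN answer module described above — averaging the per-step span distributions while randomly dropping whole reasoning steps during training — can be sketched in a few lines. The per-step distributions below are placeholders, and the masking and "at least one active step" handling are assumptions for illustration rather than the authors' implementation.

```python
import torch

def average_with_step_dropout(step_start, step_end, p_drop=0.4, training=True):
    """step_start, step_end: (T, n) per-step span distributions over passage positions.

    During training, whole reasoning steps are randomly dropped before averaging
    (stochastic prediction dropout); at test time all T steps are averaged.
    At least one step is always kept, mirroring the constraint in the paper.
    """
    T = step_start.size(0)
    if training:
        keep = torch.rand(T) > p_drop
        if not keep.any():                       # ensure at least one active step
            keep[torch.randint(T, (1,))] = True
    else:
        keep = torch.ones(T, dtype=torch.bool)
    p_start = step_start[keep].mean(dim=0)       # averaged P(begin)
    p_end = step_end[keep].mean(dim=0)           # averaged P(end)
    return p_start, p_end

# toy usage: T=5 reasoning steps over a passage of n=20 tokens
T, n = 5, 20
start = torch.softmax(torch.randn(T, n), dim=1)
end = torch.softmax(torch.randn(T, n), dim=1)
p_start, p_end = average_with_step_dropout(start, end, training=True)
```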
<s>[INST] Grounded Agreement Games: Emphasizing Conversational Grounding in Visual Dialogue Settings [/INST] If youre good at replying to a single request are you also likely to be good at doing dialogue Much current work seems to assume that the answer to this question is yes in that it attempts a scaling up from single pairs of utterance plus response to longer dialogues See eg the work on neural chatbots following on from BIBREF0 where the main evaluation metric is next utterance retrieval and on visual dialogue BIBREF1 which views itself as a natural extension of visual question answering BIBREF2 If you assume however that dialogue crucially is a joint project between its participants in a way that single exchanges are not youre likely to put more focus on coordination phenomena BIBREF3 but may end up with settings that combine multiple language capabilities in ways that current methods cannot yet seem to capture See for example the dialogues collected in BIBREF4 In this short paper we contribute a type of setting that introduces such coordination phenomena while still allowing for control of the complexity of the resulting interaction Figure FIGREF2 shows an example interaction with the original Visual Dialogue system BIBREF1 The competence of the system is impressive from a multimodal grounding perspective it gets right several questions aiming at different aspects of the image It is also clear however that this is a modest step beyond singleshot visual question answering BIBREF2 It seems that here the human questioner is doing all the work of keeping the dialogue alive and there is little that suggest that the answerer is keeping any state about the dialogue Later work by BIBREF6 on the visual dialogue dataset BIBREF1 indeed identified coreference in the questions as the main issue that distinguishes this setting from oneshot question answering One shortcoming of this settingthat the questioner was not provided with a good reason for why they are asking questions in the first placewas addressed in some related work In the GuessWhat setting introduced by BIBREF7 at around the same time as Visual Dialogue a questioner is asking polar questions about an image with the goal of identifying an object known only to the questioner In the ALICE variant of the visual dialogue setting the questioner is asking questions with the later goal in mind of identifying the image from a set also containing distractor images BIBREF8 These variants of the general setting provide purpose to the questioner but not to the answerer which is the target of the modelling effort and crucially it does not give the dialogue a joint purpose a shared sense of semantic ownership of the interaction which is a central feature of most genres of human interaction BIBREF3 Coming back to the visual dialogue setting it can be assumed that the crowd workers that created the original data did try to orient themselves to the usual maxims that govern conversational behaviour However being constrained by the rigid roles of questioner and answerer and with the perceptual task being so easy for them a need for dealing with miscommunication never arose for them and hence no such strategies can be learned from that data That this is missing from the resulting agents can easily been shown in cases where something goes wrong but normal repair mechanisms BIBREF9 are not available as in the example interaction we created shown in Figure FIGREF5 In Herbert Clarks BIBREF3 model of dialogue the mutual need for ensuring 
understandingsufficient to current purposesis the main structuring force in dialogue As a metaphor for this interaction management process Clark uses the notion of a secondary track on which the constant negotiation of this understanding happens This can be done through quite subtle methods such as simply producing a continuation that displays through its fit an understanding of the previous utterance as well as through specially designed markers such as feedback utterances like uhu This model has been influential in the design of spoken dialogue systems BIBREF13 BIBREF14 where it has been incorporated in the design of dialogue state update rules In the datadriven era however it seems to have become less well known and many datasets almost appear as to be designed in such a way as to limit the possibility of grounding interactions The idea behind our setting of agreement games is to make this secondary track more prominent and hence more easy to pick up from the data by making reaching mutual understanding on the answer to the game question an explicit goal Or in Clarks term the grounding criterion for answering the question is raised so as to make reaching mutual understanding on it an explicit rather than as normally implicit goal The representational challenge is that it is an abstract objectthe understanding of the discoursethat is jointly constructed purely through verbal actions An Agreement Game is a dialogue game with two regular participants mathcal P lbrace P1 P2rbrace and a disinterested third participant N for Nature N poses a question Q to the players mathcal P and provides them with information I required to answer the question possibly split up over the players If I contains visual information we call the game a Grounded Agreement Game The players can exchange messages in an unrestricted way The game ends when one of the players explicitly proposes an answer A and the other player explicity agrees with the proposal As the answer A will be based on a construal of I the agreement on A is also an agreement on that construal Optionally a reward can be given to the players after they have provided their joint answer tied to some measure of quality of A We illustrate the concept by discussing some instantiations that we have recently experimented with In the MeetUp game BIBREF17 BIBREF18 the two participants are presented with an environment through which they can separately navigate and which is represented to static photographs or real indoor scenes eg a picture of a bedroom a kitchen etc Their goal is to meet up in the same room of a type previously told to them Eg they might be told at the start of the game your goal is to meet up in a room of type kitchen As the positions of the players are not represented graphically the only way they can be sure of whether they have reached that goal is by conversing via chat messages Once they have come to the conclusion that they are in the same room they can end the game by each sending a special signal If they are indeed in such a winning constellation they will receive a bonus Unlike the Visual Dialogue setting discussed above this setting ensures informational symmetry between the participants both have access to the same type of information but not the same information as they cant see each other More importantly however the constraint that the game only ends if they both agree ensures a committment symmetry where the success of the game must be ensured by both participants The design also provides for a clear relevance place at 
which an opportunity arises for semantic negotiation namely before the final decision is made An example of this is shown in the example below The number in the parentheses indicate the time relative to the beginning of the interaction when the utterance was made B 000034 okay I think Im there if I understand utility room B 000042 It has a washer and dryer A 000046 I was wondering too This is sorta like a laundry room A 000055 This has pet bowl on the floor below a window B 000100 ok let us keep looking A 000122 And a small kids looking suit hanging on the wall And a big banner above the window B 000133 Are you saying a utility room is like a laundry room B 000200 let me find you A 000207 Google says a room equipped with appliances for washing and other domestic work A 000209 So I think so The MatchIt Game Ilinykh et al forthcoming is a yet further simplified visual game Here the goal simply is to decide whether you and your partner are both looking at the same image of the same genre as in MeetUp In that sense it is a reduction of the MeetUP game to the final stage taking out the navigation aspect As example SECREF12 shows this can similarly lead to metasemantic interaction where classifications are revised As SECREF12 shows even in cases where a decision can be reached quickly there can be an explicit mutual confirmation step before the silent decision signal is sent B 000025 white kitchen A 000025 im in a bathroom B 000028 ah B 000032 well wait B 000038 there is something that looks like a big bath B 000044 is it all white A 000054 yes its white and I see a bit of a kitchen A 000111 yes B 000111 are you sure its a bathroom lol A 000116 no its not a bathroom haha A 000024 i see stairs B 000025 I see a staircase with a bike with wicker basket at the bottom of the staircase B 000031 do you have a bike A 000039 no bike B 000046 okayit is different A 000054 yes A third setting that we have explored BIBREF19 brings conceptual negotiation more clearly into the foreground In that game the players are presented with images of birds of particular species and are tasked with coming up with a description of common properties Again the final answer has to be approved by both participants As SECREF13 shows this can lead to an explicit negotiation of conceptual content We have argued that some prominent current dialogue settings lack room for the occurence of coordination phenomena prevalent in natural dialogue We have shown a simple condition that brings out the need for coordination in an explicit way but still can easily be added to controlled and controllable dialogue settings </s>
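A toy sketch of the agreement-game protocol defined above may help make the termination condition explicit: Nature poses a question and distributes the information over the players, the players exchange messages freely, and the game ends only once one player's proposed answer is explicitly accepted by the other. All class names, fields and example messages here are invented for illustration and do not correspond to any released implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Message:
    sender: str                      # "P1" or "P2"
    text: str
    proposal: Optional[str] = None   # a proposed answer A, if any
    agrees: bool = False             # explicit agreement with the open proposal

@dataclass
class AgreementGame:
    """Bookkeeping for an agreement game: question Q, information I split over
    the players, and a joint answer that only exists after mutual agreement."""
    question: str
    info: dict                       # e.g. {"P1": ..., "P2": ...}
    transcript: List[Message] = field(default_factory=list)
    open_proposal: Optional[str] = None
    joint_answer: Optional[str] = None

    def post(self, msg: Message) -> None:
        if self.joint_answer is not None:
            raise RuntimeError("game already ended")
        self.transcript.append(msg)
        if msg.proposal is not None:
            self.open_proposal = msg.proposal          # (re)open a proposal
        elif msg.agrees and self.open_proposal is not None:
            self.joint_answer = self.open_proposal     # mutual agreement reached

# toy run of a MatchIt-style game ("are we looking at the same image?")
game = AgreementGame(question="same image?", info={"P1": "img_01.jpg", "P2": "img_07.jpg"})
game.post(Message("P1", "I see a white kitchen"))
game.post(Message("P2", "mine is a bathroom, so different", proposal="different"))
game.post(Message("P1", "agreed", agrees=True))
assert game.joint_answer == "different"
```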
<s>[INST] Natural Language Processing with Small Feed-Forward Networks [/INST] Deep and recurrent neural networks with large network capacity have become increasingly accurate for challenging language processing tasks For example machine translation models have been able to attain impressive accuracies with models that use hundreds of millions BIBREF0 BIBREF1 or billions BIBREF2 of parameters These models however may not be feasible in all computational settings In particular models running on mobile devices are often constrained in terms of memory and computation Long ShortTerm Memory LSTM models BIBREF3 have achieved good results with small memory footprints by using characterbased input representations eg the partofspeech tagging models of gillick2016brnn have only roughly 900000 parameters Latency however can still be an issue with LSTMs due to the large number of matrix multiplications they require eight per LSTM cell kimrush2016 report speeds of only 88 wordssecond when running a twolayer LSTM translation system on an Android phone Feedforward neural networks have the potential to be much faster In this paper we show that small feedforward networks can achieve results at or near the stateoftheart on a variety of natural language processing tasks with an order of magnitude speedup over an LSTMbased approach We begin by introducing the network model structure and the characterbased representations we use throughout all tasks Small FeedForward Network Models The four tasks that we address are language identification LangID partofspeech POS tagging word segmentation and preordering for translation In order to use feedforward networks for structured prediction tasks we use transition systems BIBREF4 BIBREF5 with feature embeddings as proposed by chenmanning2014EMNLP and introduce two novel transition systems for the last two tasks We focus on budgeted models and ablate four techniques one on each task for improving accuracy for a given memory budget We achieve results at or near stateoftheart with small 3 MB models on all four tasks The network architectures are designed to limit the memory and runtime of the model Figure 1 illustrates the model architecture Memory needs are dominated by the embedding matrix sizes sum g Vg Dg where Vg and Dg are the vocabulary sizes and dimensions respectively for each feature group g while runtime is strongly influenced by the hidden layer dimensions We experiment with small feedforward networks for four diverse NLP tasks language identification partofspeech tagging word segmentation and preordering for statistical machine translation Recent shared tasks on codeswitching BIBREF14 and dialects BIBREF15 have generated renewed interest in language identification We restrict our focus to single language identification across diverse languages and compare to the work of baldwin2010language on predicting the language of Wikipedia text in 66 languages For this task we obtain the input mathbf h0 by separately averaging the embeddings for each ngram length N14 as summation did not produce good results Table 1 shows that we outperform the lowmemory nearestprototype model of baldwin2010language Their nearest neighbor model is the most accurate but its memory scales linearly with the size of the training data Moreover we can apply quantization to the embedding matrix without hurting prediction accuracy it is better to use less precision for each dimension but to use more dimensions Our subsequent models all use quantization There is no noticeable variation in 
processing speed when performing dequantization onthefly at inference time Our 16dim LangID model runs at 4450 documentssecond 56 MB of text per second on the preprocessed Wikipedia dataset These techniques back the opensource Compact Language Detector v3 CLD3 that runs in Google Chrome browsers Our experimental LangID model uses the same overall architecture as CLD3 but uses a simpler feature set less involved preprocessing and covers fewer languages We apply our model as an unstructured classifier to predict a POS tag for each token independently and compare its performance to that of the bytetospan BTS model BIBREF16 BTS is a 4layer LSTM network that maps a sequence of bytes to a sequence of labeled spans such as tokens and their POS tags Both approaches limit model size by using small input vocabularies byte values in the case of BTS and hashed character ngrams and optionally cluster ids in our case It is well known that word clusters can be powerful features in linear models for a variety of tasks BIBREF17 BIBREF18 Here we show that they can also be useful in neural network models However naively introducing word cluster features drastically increases the amount of memory required as a wordtocluster mapping file with hundreds of thousands of entries can be several megabytes on its own By representing word clusters with a Bloom map BIBREF19 a keyvalue based generalization of Bloom filters we can reduce the space required by a factor of sim 15 and use 300KB to approximately represent the clusters for 250000 word types In order to compare against the monolingual setting of gillick2016brnn we train models for the same set of 13 languages from the Universal Dependency treebanks v11 BIBREF20 corpus using the standard predefined splits As shown in Table 2 our best models are 03 more accuate on average across all languages than the BTS monolingual models while using 6x fewer parameters and 36x fewer FLOPs The cluster features play an important role providing a 15 relative reduction in error over our vanilla model but also increase the overall size Halving all feature embedding dimensions except for the cluster features still gives a 12 reduction in error and trims the overall size back to 11x the vanilla model staying well under 1MB in total This halved model configuration has a throughput of 46k tokenssecond on average Two potential advantages of BTS are that it does not require tokenized input and has a more accurate multilingual version achieving 9585 accuracy From a memory perspective one multilingual BTS model will take less space than separate FF models However from a runtime perspective a pipeline of our models doing language identification word segmentation and then POS tagging would still be faster than a single instance of the deep LSTM BTS model by about 12x in our FLOPs estimate Word segmentation is critical for processing Asian languages where words are not explicitly separated by spaces Recently neural networks have significantly improved segmentation accuracy BIBREF21 BIBREF22 BIBREF23 BIBREF24 BIBREF25 We use a structured model based on the transition system in Table 3 and similar to the one proposed by zhangclark2007 We conduct the segmentation experiments on the Chinese Treebank 60 with the recommended data splits No external resources or pretrained embeddings are used Hashing was detrimental to quality in our preliminary experiments hence we do not use it for this task To learn an embedding for unknown characters we cast characters occurring only once in the training set to 
a special symbol Because we are not using hashing here we need to be careful about the size of the input vocabulary The neural network with its nonlinearity is in theory able to learn bigrams by conjoining unigrams but it has been shown that explicitly using character bigram features leads to better accuracy BIBREF21 BIBREF26 zhangzhangfu2016P161 suggests that embedding manually specified feature conjunctions further improves accuracy zhangzhangfu2016P161combo in Table 4 However such embeddings could easily lead to a model size explosion and thus are not considered in this work The results in Table 4 show that spending our memory budget on small bigram embeddings is more effective than on larger character embeddings in terms of both accuracy and model size Our model featuring bigrams runs at 110KB of text per second or 39k tokenssecond Preordering sourceside words into the targetside word order is a useful preprocessing task for statistical machine translation BIBREF27 BIBREF28 BIBREF29 BIBREF30 We propose a novel transition system for this task Table 5 so that we can repeatedly apply a small network to produce these permutations Inspired by a nonprojective parsing transition system BIBREF31 the system uses a swap action to permute spans The system is sound for permutations any derivation will end with all of the input words in a permuted order and complete all permutations are reachable use shift and swap operations to perform a bubble sort then append n1 times to form a single span For training and evaluation we use the EnglishJapanese manual word alignments from nakagawa2015efficient For preordering we experiment with either spending all of our memory budget on reordering or spending some of the memory budget on features over predicted POS tags which also requires an additional neural network to predict these tags Full feature templates are in the supplementary material As the POS tagger network uses features based on a three word window around the token another possibility is to add all of the features that would have affected the POS tag of a token to the reorderer directly Table 6 shows results with or without using the predicted POS tags in the preorderer as well as including the features used by the tagger in the reorderer directly and only training the downstream task The preorderer that includes a separate network for POS tagging and then extracts features over the predicted tags is more accurate and smaller than the model that includes all the features that contribute to a POS tag in the reorderer directly This pipeline processes 7k tokenssecond when taking pretokenized text as input with the POS tagger accounting for 23 of the computation time This paper shows that small feedforward networks are sufficient to achieve useful accuracies on a variety of tasks In resourceconstrained environments speed and memory are important metrics to optimize as well as accuracies While large and deep recurrent models are likely to be the most accurate whenever they can be afforded feedfoward networks can provide better value in terms of runtime and memory and should be considered a strong baseline We thank Kuzman Ganchev Fernando Pereira and the anonymous reviewers for their useful comments The values comprising a generic embedding matrix mathbf E in mathbb RV times D are ordinarily stored with 32bit floatingpoint precision in our implementation For quantization we first calculate a scale factor si for each embedding vector mathbf ei as sifrac1b1max j lefteijright Each weight eij is then 
quantized into an 8-bit integer as $q_{ij} = \lfloor \frac{1}{2} + \frac{e_{ij}}{s_i} \rfloor$, where the bias $b = 128$. Hence the number of bits required to store the embedding matrix is reduced by a factor of 4, in exchange for storing the $V$ additional scale values. At inference time the embeddings are dequantized on-the-fly. The product of $\mathbf{A} \in \mathbb{R}^{P \times Q}$ and $\mathbf{b} \in \mathbb{R}^{Q}$ involves $P(2Q-1)$ FLOPs, and our single ReLU hidden layer requires performing this operation once per timestep ($P = M$, $Q = H_0$). Here $H_0$ denotes the size of the embedding vector $\mathbf{h}_0$, which equals 408, 464 and 260 for our respective POS models, as ordered in Table 2. In contrast, each LSTM layer requires eight products per timestep, and the BTS model has four layers ($P = Q = 320$). The particular sequence-to-sequence representation scheme of gillick2016brnn requires at least four timesteps to produce a meaningful output: the individual input bytes, and a start, length and label of the predicted span. A single timestep is therefore a relaxed lower bound on the number of FLOPs needed for BTS inference. The word clusters we use are for the 250k most frequent words from a large unannotated corpus that was clustered into 256 classes using the distributed Exchange algorithm BIBREF36 and the procedure described in Appendix A of tackstrom2012clusters. The space required to store them in a Bloom map is calculated using the formula derived by talbot2008bloom: each entry requires $1.23(\log_2 \frac{1}{\epsilon} + H)$ bits, where $H$ is the entropy of the distribution on the set of values and $\epsilon = 2^{-E}$, with $E$ the number of error bits employed. We use 0 error bits and assume a uniform distribution for the 256 values, i.e. $H = 8$, hence we need 9.84 bits per entry, or 300KB for the 250k entries. In our language identification evaluation, the 1-/2-/3-/4-gram embedding vectors each have 6 or 16 dimensions, depending on the experimental setting. Their hashed vocabulary sizes $V_g$ are 100, 1000, 5000 and 5000, respectively. The hidden layer size is fixed at $M = 208$. We preprocess data by removing non-alphabetic characters and pieces of markup text, i.e. anything located between and including the brackets. At test time, if this results in an empty string, we skip the markup removal, and if that still results in an empty string, we process the original string. This procedure is an artefact of the Wikipedia dataset, where some documents contain only punctuation or trivial HTML code, yet we must make predictions for them to render the results directly comparable to the literature. The Small FF model in the comparison to BTS uses 2-/3-/4-grams and some byte unigrams (see feature templates in Table vii). The n-grams have embedding sizes of 16 and the byte unigrams get 4 dimensions. In our $\frac{1}{2}$-dimension setting the aforementioned dimensions are halved to 8 and 2. Cluster features get embedding vectors of size 8. The hashed feature vocabularies for n-grams are 500, 200 and 4000, respectively. The hidden layer size is fixed at $M = 320$. Feature templates used in segmentation experiments are listed in Table viii. Besides, we define a length feature to be the number of characters between the top of $\sigma$ and the front of $\beta$; this maximum feature value is clipped to 100. The length feature is used in all segmentation models and the embedding dimension is set to 6. We set the cutoff for both characters and character-bigrams to 2 in order to learn unknown character-bigram embeddings. The hidden layer size is fixed at $M = 256$. The feature templates for the preorderer look at the top four spans on the stack and the first four spans in the buffer; for each span, the feature templates look at
up to the first two words and last two words within the span The vanilla variant of the preorderer includes character ngrams word bytes and whether the span has ever participated in a swap transition The POS features are the predicted tags for the words in these positions Table ix shows the full feature templates for the preorderer </s>
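The 8-bit embedding quantization described in the appendix above is easy to sketch. The snippet below is a minimal NumPy illustration rather than the paper's implementation: the function names are invented, the bias is fixed at b = 128 as stated, and dequantization happens on the fly at lookup time.

```python
import numpy as np

def quantize_embeddings(E, b=128):
    """Per-row 8-bit quantization of an embedding matrix E (V x D).

    Each row i keeps one float32 scale s_i = max_j |E[i, j]| / (b - 1);
    weights are stored as int8 codes q_ij = floor(1/2 + E[i, j] / s_i).
    """
    scales = np.abs(E).max(axis=1) / (b - 1)          # one scale per embedding vector
    scales = np.where(scales == 0.0, 1.0, scales)     # guard against all-zero rows
    codes = np.floor(0.5 + E / scales[:, None]).astype(np.int8)
    return codes, scales.astype(np.float32)

def lookup(codes, scales, idx):
    """Dequantize a single embedding row on the fly at inference time."""
    return codes[idx].astype(np.float32) * scales[idx]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    E = rng.normal(scale=0.1, size=(1000, 16)).astype(np.float32)
    codes, scales = quantize_embeddings(E)
    print("max reconstruction error:", np.abs(lookup(codes, scales, 3) - E[3]).max())
```

Storing one float32 scale per row plus int8 codes gives roughly the four-fold size reduction described above.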
<s>[INST] Back to the Future -- Sequential Alignment of Text Representations [/INST] As time passes language usage changes For example the names Bert and Elmo would only rarely make an appearance prior to 2018 in the context of scientific writing After the publication of BERT BIBREF0 and ELMo BIBREF1 however usage has increased in frequency In the context of named entities on Twitter it is also likely that these names would be tagged as person prior to 2018 and are now more likely to refer to an artefact As such their partofspeech tags will also differ Evidently evolution of language usage affects multiple natural language processing NLP tasks and models based on data from one point in time cannot be expected to operate for an extended period of time In order to become more robust to language evolution data should be collected at multiple points in time We consider a dynamic learning paradigm where one makes predictions for data points from the current timestep given labelled data points from previous timesteps As time increments data points from the current step are labelled and new unlabelled data points are observed This setting occurs in natural language processing in for instance the prediction of paper acceptance to conferences BIBREF2 or named entity recognition from yearly data dumps of Twitter BIBREF3 Changes in language usage cause a data drift between timesteps and some way of controlling for the shift between timesteps is necessary In this paper we apply a domain adaptation technique to correct for shifts Domain adaptation is a furtive area of research within machine learning that deals with learning from training data drawn from one datagenerating distribution source domain and generalizing to test data drawn from another different datagenerating distribution target domain BIBREF4 We are interested in whether a sequence of adaptations can compensate for the data drift caused by shifts in the meaning of words or features across time Given that linguistic tokens are embedded in some vector space using neural language models we observe that in timevarying dynamic tasks the drift causes token embeddings to occupy different parts of embedding space over consecutive timesteps We want to avoid the computational expense of retraining a neural network every timestep Instead in each timestep we map linguistic tokens using the same pretrained language model a BERT network BIBREF0 and align the resulting embeddings using a second procedure called subspace alignment BIBREF5 We apply subspace alignment sequentially find the principal components in each timestep and linearly transform the components from the previous step to match the current step A classifier trained on the aligned embeddings from the previous step will be more suited to classify embeddings in the current step We show that sequential subspace alignment SSA yields substantial improvements in three challenging tasks paper acceptance prediction on the PeerRead data set BIBREF2 Named Entity Recognition on the Broad Twitter Corpus BIBREF3 and rumour stance detection on the RumourEval 2019 data set BIBREF6 These tasks are chosen to vary in terms of domains timescales and the granularity of the linguistic unitsIn addition to evaluating SSA we include two technical contributions as we extend the method both to allow for time series of unbounded length and to consider instance similarities between classes The bestperforming SSA methods proposed here are semisupervised but require only between 2 and 10 annotated data points per class 
from the test year for successful alignment Crucially the best proposed SSA models outperform baselines utilising more data including the whole data set Suppose we embed words from a named entity recognition task where artefacts should be distinguished from persons Figure FIGREF1 shows scatterplots with data collected at two different timepoints say 2017 top source domain and 2018 bottom target domain Red points are examples of artefacts embedded in this space and blue points are examples of persons We wish to classify the unknown points black from 2018 using the known points from 2018 and the known points from 2017 As can be seen the data from 2017 is not particularly relevant to classification of data from 2018 because the red and blue point clouds do not match In other words a classifier trained to discriminate red from blue in 2017 would make many mistakes when applied directly to the data from 2018 partly because words such as Bert and Elmo have changed from being person to also being artefacts To make the source data from 2017 relevant and reap the benefits of having more data we wish to align source and target data points Unsupervised alignment extracts a set of bases from each data set and transforms the source components such that they match the target components BIBREF5 Let Ccal S be the principal components of the source data Xt1 and Ccal T be the components of the target data set Xt The optimal linear transformation matrix is found by minimising the difference between the transformed source components and the target components where Vert cdot Vert F denotes the Frobenius norm Note that we leftmultiplied both terms in the norm with the same matrix Ccal Stop and that due to orthonormality of the principal components Ccal Stop Ccal S is the identity and drops out Source data Xt1 is aligned to target data by first mapping it onto its own principal components and then applying the transformation matrix Xt1 Ccal S M Target data Xt is also projected onto its target components Xt Ccal T The alignment is performed on the d largest principal components ie a subspace of the embedding Keeping d small avoids the otherwise high computational expense of eigendecomposition in highdimensional data Unsupervised alignment will only match the total structure of both data sets Therefore global shifts between domains can be accounted for but not local shifts Figure FIGREF1 is an example of a setting with local shifts ie red and blue classes are shifted differently Performing unsupervised alignment on this setting would fail Figure FIGREF2 left middle shows the source data left aligned to the target data right in an unsupervised fashion Note that although the total data sets match the classes red and blue ellipses are not matched In semisupervised alignment one performs subspace alignment per class As such at least 1 target label per class needs to be available However even then with only 1 target label per class we would only be able to find 1 principal component To allow for the estimation of more components we provisionally label all target samples using a 1nearestneighbour classifier starting from the given target labels Using pseudolabelled target samples we estimate d components Now the optimal linear transformation matrix for each class can be found with an equivalent procedure as in Equation DISPLAYFORM4 Afterwards we transform the source samples of each class Xt1k through the projection onto classspecific components Ccal Sk and the optimal transformation Xt1k Ccal S k Mk Additionally we 
centre each transformed source class on the corresponding target class Figure FIGREF2 right middle shows the source documents transformed through semisupervised alignment Now the classes match the classes of the target data Semisupervised alignment allows for aligning two time steps t1 and t2 to a joint space tprime 12 However when considering a further alignment to another time step t3 this can not trivially be mapped since the joint space tprime 12 necessarily has a lower dimensionality Observing that two independently aligned spaces tprime 12 and tprime 23 do have the same dimensionality we further learn a new alignment between the two resulting in the joint space of tprime 12 and tprime 23 namely tprime prime 123 Although this is seemingly straightforward there is no guarantee that tprime 12 and tprime 23 will be coherent with one another in the same way that two word embedding spaces trained with different algorithms might also differ in spite of having the same dimensionality This issue is partially taken care of by using semisupervised alignment which takes class labels into account when learning the deeper alignment tprime prime We further find that it is beneficial to also take the similarities between samples into account when aligning Since intermediary spaces such as tprime 12 and tprime 23 do not necessarily share the same semantic properties we add a step to the semisupervised alignment procedure Given that the initial unaligned spaces do encode similarities between instances we run the kmeans clustering algorithm k5 to give us some coursegrained indication of instance similarities in the original embedding space This cluster ID is passed to SSA resulting in an alignment which both attempts to match classes across time steps in addition to instance similarities Hence even though tprime 12 and tprime 23 are not necessarily semantically coherent an alignment to tprime prime 123 is made possible In the past year several approaches to pretraining representations on language modelling based on transformer architectures BIBREF7 have been proposed These models essentially use a multihead selfattention mechanism in order to learn representations which are able to attend directly to any part of a sequence Recent work has shown that such contextualised representations pretrained on language modelling tasks offer highly versatile representations which can be finetuned on seemingly any given task BIBREF1 BIBREF0 BIBREF8 BIBREF9 In line with the recommendations from experiments on finetuning representations BIBREF10 we use a frozen BERT to extract a consistent taskagnostic representation Using a frozen BERT with subsequent subspace alignment allows us to avoid retraining a neural network each timestep while still working in an embedding learned by a neural language model It also allows us to test the effectiveness of SSA without the confounding influence of representation updates We consider three tasks representing a broad selection of natural language understanding scenarios paper acceptance prediction based on the PeerRead data set BIBREF2 Named Entity Recognition NER based on the Broad Twitter Corpus BIBREF3 and author stance prediction based on the RumEval19 data set BIBREF6 These tasks were chosen so as to represent i different textual domains across ii differing time scales and iii operating at varying levels of linguistic granularity As we are dealing with dynamical learning the vast majority of NLP data sets can unfortunately not be used since they do not include time stamps The 
PeerRead data set contains papers from ten years of arXiv history as well as papers and reviews from major AI and NLP conferences BIBREF2 From the perspective of evaluating our method the arXiv subset of this data set offers the possibility of evaluating our method while adapting to ten years of history This is furthermore the only subset of the data annotated with both timestamps and with a relatively balanced acceptreject annotation As arXiv naturally contains both accepted and rejected papers this acceptance status has been assigned based on BIBREF11 who match arXiv submissions to bibliographic entries in DBLP and additionally defining acceptance as having been accepted to major conferences and not to workshops This results in a data set of nearly 12000 papers from which we use the raw abstract text as input to our system The first three years were filtered out due to containing very few papers We use the standard traintest splits supplied with the data set BIBREF2 show that it is possible to predict paper acceptance status at major conferences at above baseline levels Our intuition in applying SSA to this problem is that the topic of a paper is likely to bias acceptance to certain conferences across time For instance it is plausible that the likelihood of a neural paper being accepted to an NLP conference before and after 2013 differs wildly Hence we expect that our model will to some extent represent the topic of an article and that this will lend itself nicely to SSA We use the pretrained bertbaseuncased model as the base for our paper acceptance prediction model Following the approach of BIBREF0 we take the final hidden state ie the output of the transformer corresponding to the special CLS token of an input sequence to be our representation of a paper as this has aggregated information through the sequence Figure FIGREF14 This gives us a ddimensional representation of each document where d786 In all of the experiments for this task we train an SVM with an RBF kernel on these representations either with or without SSA depending on the setting We set up a series of experiments where we observe past data and evaluate on present data We compare both unsupervised and semisupervised subspace alignment with several strong baselines The baselines represent cases in which we have access to more data and consist of training our model on either all data on the same year as the evaluation year and on the previous year In our alignment settings we only observe data from the previous year and apply subspace alignment This is a different task than presented by BIBREF2 as we evaluate paper acceptance for papers in the present Hence our scores are not directly comparable to theirs One parameter which significantly influences performance is the number of labelled data points we use for learning the semisupervised subspace alignment We tuned this hyperparameter on the development set finding an increasing trend Using as few as 2 tuning points per class yielded an increase in performance in some cases Figure FIGREF16 Our results are shown in Table TABREF10 using 10 tuning samples per class With unsupervised subspace alignment we observe relatively unstable results in one exceptional case namely testing on 2010 unsupervised alignment is as helpful as semisupervised alignment Semisupervised alignment however yields consistent improvements in performance across the board It is especially promising that adapting from past data outperforms training on all available data as well as training on the actual 
indomain data This highlights the importance of controlling for data drift due to language evolution It shows that this signal can be taken advantage of to increase performance on present data with only a small amount of annotated data We further find that using several past time steps in the Unbounded condition is generally helpful as is using instance similarities in the alignment The Broad Twitter Corpus contains tweets annotated with named entities collected between the years 2009 and 2014 BIBREF3 However as only a handful of tweets are collected before 2012 we focus our analysis on the final three years of this period ie two test years The corpus includes diverse data annotated in part via crowdsourcing and in part by experts The inventory of tags in their tag scheme is relatively small including Person Location and Organisation To the best of our knowledge no one has evaluated on this corpus either in general or per year and so we cannot compare with previous work In the case of NER we expect the adaptation step of our model to capture the fact that named entities may change their meaning across time eg the example with Bert and BERT in Figure FIGREF1 This is related to work showing temporal drift of topics BIBREF12 Since casing is typically an important feature in NER we use the pretrained bertbasecased model as our base for NER For each token we extract its contextualised representation from BERT before applying SSA As BIBREF0 achieve stateoftheart results without conditioning the predicted tag sequence on surrounding tags as would be the case with a CRF for example we also opt for this simpler architecture The resulting contextualised representations are therefore passed to an MLP with a single hidden layer 200 hidden units ReLU activation before predicting NER tags We train the MLP over 5 epochs using the Adam optimiser BIBREF13 As with previous experiments we compare unsupervised and semisupervised subspace alignment with baselines corresponding to using all data data from the same year as the evaluation year and data from the previous year For each year we divide the data into 801010 splits for training development and test Results on the two test years 2013 and 2014 are shown in Table TABREF17 In the case of NER we do not observe any positive results for unsupervised subspace alignment In the case of semisupervised alignment however we find increased performance as compared to training on the previous year and compared to training on all data This shows that learning an alignment from just a few data points can help the model to generalise from past data However unlike our previous experiments results are somewhat better when given access to the entire set of training data from the test year itself in the case of NER The fact that training on only 2013 and evaluating on the same year does not work well can be explained by the fact that the amount of data available for 2013 is only 10 of that for 2012 The identical results for the unbounded extension is because aligning from a single time step renders this irrelevant The RumourEval2019 data set consists of roughly 5500 tweets collected for 8 events surrounding wellknown incidents such as the Charlie Hebdo shooting in Paris BIBREF6 Since the shared task test set is not available we split the training set into a training dev and test part based on rumours one rumour will be training data with a 9010 split for development and another rumour will be the test data with a few samples labelled For Subtask A tweets are annotated with 
stances denoting whether it is in the category Support Deny Query or Comment SDQC Each rumour only lasts a couple of days but the total data set spans years from August 2014 to November 2016 We regard each rumour as a timestep and adapt from the rumour at time t1 to the rumour at time t We note that this setting is more difficult than the previous two due to the irregular time intervals We disregard the rumour ebolaessien as it has too few samples per class For this task we use the same modelling approach as described for paper acceptance prediction This method is also suitable here since we simply require a condensed representation of a few sentences on which to base our temporal adaptation and predictions In the last iteration of the task the winning system used handcrafted features to achieve a high performance BIBREF14 Including these would complicate SSA so we opt for this simpler architecture instead We use the shorter timescale of approximately weeks rather than years as rumours can change rapidly BIBREF15 In this experiment we start with the earliest rumour and adapt to the next rumour in time As before we run the following baselines training on all available labelled data ie all previous rumours and the labelled data for the current rumour training on the labelled data from the current rumour designated as same and training on the labelled data from the previous rumour We perform both unsupervised and semisupervised alignment using data from the previous rumour We label 5 samples per class for each rumour In this data set there is a large class imbalance with a large majority of comment tweets and few support or deny tweets To address this we oversample the minority classes Afterwards a SVM with RBF is trained and we test on unlabelled tweets for the current rumour Table TABREF23 shows the performance of the baselines and the two alignment procedures As with the previous tasks semisupervised alignment generally helps except for in the charliehebdo rumour We have shown that sequential subspace alignment is useful across natural language processing tasks For the PeerRead data set we were particularly successful This might be explained by the fact that the topic of a paper is a simple feature for SSA to pick up on while being predictive of a papers acceptance chances For NER on the other hand named entities can change in less predictable ways across time proving a larger challenge for our approach For SDQC we were successful in cases where the tweets are nicely clustered by class For instance where both rumours are about terrorist attacks many of the support tweets were headlines from reputable newspaper agencies These agencies structure tweets in a way that is consistently dissimilar from comments and queries The effect of our unbounded time extension boosts results on the PeerRead data set as the data stretches across a range of years In the case of NER however this extension is excessive as only two time steps are available In the case of SDQC the lack of improvement could be due to the irregular time intervals making it hard to learn consistent mappings from rumour to rumour Adding instance similarity clustering aids alignment since considering sample similarities across classes is important over longer time scales Finally we set up the following simplified experiment to investigate the effect of alignment on SDQC data First we consider the rumour charliehebdo where we picked the following tweet px SupportFrance 10 people dead after shooting at HQ of satirical weekly newspaper 
CharlieHebdo according to witnesses URL px It has been labeled to be in support of the veracity of the rumour We will consider the scenario where we use this tweet and others involving the charliehebdo incident to predict author stance in the rumour germanwingscrash Before alignment the following 2 germanwingscrash tweets are among the nearest neighbours in the embedding space px Query USER USER if they had its likely the descent rate wouldve been steeper and the speed not reduce no px Comment USER Praying for the families and friends of those involved in crash Im so sorry for your loss The second tweet is semantically similar both are on the topic of tragedy but the other is unrelated Note that the news agency tweet differs from the comment and query tweets in that it stems from a reputable source mentions details and includes a reference After alignment the charliehebdo tweet has the following 2 nearest neighbours px Support USER 148 passengers were on board GermanWings Airbus A320 which has crashed in the southern French Alps URL px Support Report CoPilot Locked Out Of Cockpit Before Fatal Plane Crash URL Germanwings URL Now both neighbours are of the support class This example shows that semisupervised alignment maps source tweets from one class close to target tweets of the same class A necessary assumption in both unsupervised and semisupervised subspace alignment is that the data of each class is clustered in the embedding space In other words that most embedded tokens lie closer to other embedded tokens of the same class than to embedded tokens of another class If this is not the case then aligning to a few labelled samples of class k does not mean that the embedded source tokens are aligned to any other target points of class k This assumption is violated if for instance people only discuss one aspect of a rumour on day one and discuss several aspects of a rumour simultaneously on day two One would observe a single cluster of token embeddings for supporters of the rumour initially and several clusters at a later timestep There is no unique solution to aligning a single cluster to several clusters Additionally if the few samples labeled in the current timestep for semisupervised alignment are falsely labeled or their label is ambiguous eg a tweet that could equally be labeled as query or deny then it is possible that the source data is aligned to the wrong target data point cloud It is important that the few labeled tokens actually represent their classes This is a common requirement in semisupervised learning and is not specific to sequential alignment of text representations The temporal nature of data can have a significant impact in natural language processing tasks For instance BIBREF16 compare a number of approaches to diachronic word embeddings and detection of semantic shifts across time For instance such representations can be used to uncover changes of word meanings or senses of new words altogether BIBREF17 BIBREF18 BIBREF19 BIBREF20 BIBREF21 Other work has investigated changes in the usage of parts of speech across time BIBREF22 BIBREF23 investigate the changing meanings and associations of words across time in the perspective of language change By learning timeaware embeddings they are able to outperform standard word representation learning algorithms and can discover eg equivalent technologies through time BIBREF24 show that lexical features can change their polarity across time which can have a significant impact in sentiment analysis BIBREF12 show that associating 
topics with continuous distributions of timestamps yields substantial improvements in terms of topic prediction and interpretation of trends Temporal effects in NLP have also been studied in the context of scientific journals for instance in the context of emerging themes and viewpoints BIBREF25 BIBREF26 and in terms of topic modelling on news corpora across time BIBREF27 Finally in the context of rumour stance classification BIBREF28 show that temporal information as a feature in addition to textual content offers an improvement in results While this previous work has highlighted the extent to which language change across time is relevant for NLP we present a concrete approach to taking advantage of this change Nonetheless these results could inspire more specialized forms of sequential adaptation for specific tasks Unsupervised subspace alignment has been used in computer vision to adapt between various types of representations of objects such as highdefinition photos online retail images and illustrations BIBREF5 BIBREF29 Alignment is not restricted to linear transformations but can be made nonlinear through kernelisation BIBREF30 An extension to semisupervised alignment has been done for images BIBREF31 but not in the context of classification of text embeddings or domain adaptation on a sequential basis In this paper we introduced sequential subspace alignment SSA for natural language processing NLP which allows for improved generalisation from past to present data Experimental evidence shows that this method is useful across diverse NLP tasks in various temporal settings ranging from weeks to years and for wordlevel and documentlevel representations The bestperforming SSA method aligning subspaces in a semisupervised way outperforms simply training on all data with no alignment </s>
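The unsupervised variant of the subspace alignment used above can be sketched in a few lines of NumPy. This is an illustrative sketch rather than the authors' code: it assumes the frozen-BERT features are already extracted, uses the standard closed-form aligner M = C_S^T C_T from the subspace-alignment literature, and leaves out the per-class semi-supervised and clustering extensions.

```python
import numpy as np

def top_components(X, d):
    """Top-d principal directions (columns) of a centered data matrix X."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:d].T                                    # shape (D, d)

def subspace_align(X_src, X_tgt, d=50):
    """Unsupervised subspace alignment of source features to target features.

    Both sets are centered and projected onto their own top-d principal
    components; the source subspace is then mapped to the target subspace with
    the closed-form aligner M = C_S^T C_T, so aligned source = X_S C_S M.
    """
    Xs = X_src - X_src.mean(axis=0)
    Xt = X_tgt - X_tgt.mean(axis=0)
    C_s, C_t = top_components(Xs, d), top_components(Xt, d)
    M = C_s.T @ C_t                                    # d x d transformation matrix
    return Xs @ C_s @ M, Xt @ C_t                      # aligned source, projected target

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_prev = rng.normal(size=(200, 768))               # e.g. frozen-BERT features, year t-1
    X_curr = rng.normal(size=(150, 768)) + 0.5         # drifted features, year t
    src_aligned, tgt_proj = subspace_align(X_prev, X_curr, d=20)
    print(src_aligned.shape, tgt_proj.shape)           # (200, 20) (150, 20)
```

The semi-supervised variant described above repeats this per class after pseudo-labelling the target samples with a 1-nearest-neighbour classifier and then centres each transformed source class on its target class.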
<s>[INST] Analyzing Language Learned by an Active Question Answering Agent [/INST] BIBREF0 propose a reinforcement learning framework for question answering called active question answering ActiveQA that aims to improve answering by systematically perturbing input questions cf BIBREF1 Figure 1 depicts the generic agentenvironment framework The agent AQA interacts with the environment E in order to answer a question q0 The environment includes a question answering system QA and emits observations and rewards A state st at time t is the sequence of observations and previous actions generated starting from q0 stx0u0x1ldots ut1xt where xi includes the question asked qi the corresponding answer returned by the QA system ai and possibly additional information such as features or auxiliary tasks The agent includes an action scoring component U which produced and action ut by deciding whether to submit a new question to the environment or to return a final answer Formally utin mathcal Qcup mathcal A where st0 is the set of all possible questions and st1 is the set of all possible answers The agent relies on a question reformulation system QR that provides candidate follow up questions and on an answer ranking system AR which scores the answers contained in st2 Each answer returned is assigned a reward The objective is to maximize the expected reward over a set of questions BIBREF0 present a simplified version of this system with three core components a question reformulator an offtheshelf black box QA system and a candidate answer selection model The question reformulator is trained with policy gradient BIBREF2 to optimize the F1 score of the answers returned by the QA system to the question reformulations in place of the original question The reformulator is implemented as a sequencetosequence model of the kind used for machine translation BIBREF3 BIBREF4 When generating question reformulations the actionspace is equal to the size of the vocabulary typically 16k sentence pieces Due to this large number of actions we warm start the reformulation policy with a monolingual sequencetosequence model that performs generic paraphrasing This model is trained using the zeroshot translation technique BIBREF5 on a large multilingual parallel corpus BIBREF6 followed by regular supervised learning on a smaller monolingual corpus of questions BIBREF7 The reformulation and selection models form a trainable agent that seeks the best answers from the QA system The reformulator proposes N versions qi of the input question q0 and passes them to the environment which provides N corresponding answers ai The selection model scores each triple q0qiai and returns the topscoring candidate Crucially the agent may only query the environment with natural language questions Thus ActiveQA involves a machinemachine communication process inspired by the humanmachine communication that takes place when users interact with digital services during information seeking tasks For example while searching for information on a search engine users tend to adopt a keywordlike queryese style of questioning The AQA agent proves effective at reformulating questions on SearchQA BIBREF8 a large dataset of complex questions from the Jeopardy game For this task BiDAF is chosen for the environment BIBREF9 a deep network built for QA which has produced stateoftheart results Compared to a QA system that forms the environment using only the original questions AQA outperforms this baseline by a wide margin 114 absolute F1 thereby reducing the gap 
between machine BiDAF and human performance by 66 Here we perform a qualitative analysis of this communication process to better understand what kind of language the agent has learned We find that while optimizing its reformulations to adapt to the language of the QA system AQA diverges from well structured language in favour of less fluent but more effective classic information retrieval IR query operations These include term reweighting tfidf expansion and morphological simplificationstemming We hypothesize that the explanation of this behaviour is that current machine comprehension tasks primarily require ranking of short textual snippets thus incentivizing relevance more than deep language understanding We analyze input questions and reformulations on the 12k example development partition of the SearchQA dataset Our goal is to gain insights on how the agents language evolves during training via policy gradient It is important to note that in the SearchQA dataset the original Jeopardy clues have been preprocessed by lowercasing and stop word removal The resulting preprocessed clues that form the sources inputs for the sequencetosequence reformulation model resemble more keywordbased search queries than grammatical questions For example the clue Gandhi was deeply influenced by this count who wrote War and Peace is simplified to gandhi deeply influenced count wrote war peace Figure 2 summarizes statistics of the questions and rewrites which may shed some light on how the language changes The preprocessed SearchQA questions contain 96 words on average They contain few repeated terms computed as the mean term frequency TF per question The average is 103 but for most of the queries TF is 10 ie no repetitions We also compute the median document frequency DF per query where a document is the context from which the answer is selected DF gives a measure of how informative the question terms are We first consider the top hypothesis generated by the pretrained NMT reformulation system before reinforcement learning BaseNMT This system is trained with full supervision using a large multilingual and a small monolingual dataset The BaseNMT rewrites differ greatly from their sources They are shorter 63 words on average and have even fewer repeated terms 101 Interestingly these reformulations are mostly syntactically wellformed questions For example the clue above becomes Who influenced count wrote war BaseNMT improves structural language quality by properly reinserting dropped function words and whphrases We also verified the increased fluency by using a large language model and found that the BaseNMT rewrites are 50 more likely than the original questions The bottom right hand plot in Figure 2 summarizes the language model distributions LM WordLogP The plot shows the average pertoken language model negative log probabilities a lower score indicates greater fluency Although the distributions overlap to a great extent due to the large variance across questions the differences in means are significant While more fluent the BaseNMT rewrites involve rarer terms as indicated by the decrease in DF This is probably due to a domain mismatch between SearchQA and the NMT training corpus We next consider the top hypothesis generated by the AQA question reformulator AQAQR after the policy gradient training The AQAQR rewrites are those whose corresponding answers are evaluated as AQA Top Hyp in BIBREF0 Note these single rewrites alone outperform the original SearchQA queries by a small margin 2 on test We analyze 
the top hypothesis instead of the final output of the full AQA agent to avoid confounding effects from the answer selection step These rewrites look different from both the BaseNMT and the SearchQA ones For the example above AQAQRs top hypothesis is What is name gandhi gandhi influence wrote peace peace Surprisingly 998 start with the prefix What is name The second most frequent is What country is 81 times followed by What is is 70 and What state 14 This is puzzling as it happens only for 9 BaseNMT rewrites and never in the original SearchQA questions We speculate it might be related to the fact that virtually all answers involve names of named entities Micronesia or generic concepts pizza AQAQRs rewrites are visibly less fluent than both the SearchQA and the BaseMT counterparts In terms of language model probability they are less likely than both SearchQA and BaseNMT However they have more repeated terms 12 average TF are significantly longer 119 than the BaseNMT initialization and contain more informative context terms lower DF than SearchQA questions Additionally AQAQRs reformulations contain morphological variants in 125 of cases The number of questions that contain multiple tokens with the same stem doubles from SearchQA to AQAQR Singular forms are preferred over plurals Morphological simplification is useful because it increases the chance that a word variant in the question matches the context Recently BIBREF10 trained chatbots that negotiate via language utterances in order to complete a task They report that the agents language diverges from human language if there is no incentive for fluency in the reward function Our findings seem related The fact that the questions reformulated by AQA do not resemble natural language is not due to the keywordlike SearchQA input questions because BaseNMT is capable of producing fluent questions from the same input AQA learns to reweight terms by focusing on informative lower DF terms while increasing term frequency TF via duplication At the same time it learns to modify surface forms in ways akin to stemming and morphological analysis Some of the techniques seem to adapt also to the specific properties of current deep QA architectures such as characterbased modelling and attention Sometimes AQA learns to generate semantically nonsensical novel surface term variants eg it might transform the adjective dense to densey The only justification for this is that such forms can be still exploited by the characterbased BiDAF question encoder Finally repetitions can directly increase the chances of alignment in the attention components We hypothesize that there is no incentive for the model to use human language due to the nature of the task AQA learns to ask BiDAF questions by optimizing a language that increases the likelihood of BiDAF extracting the right answer BIBREF11 argue that reading comprehension systems are not capable of significant language understanding and fail easily in adversarial settings We suspect that current machine comprehension tasks involve mostly simple pattern matching and relevance modelling As a consequence deep QA systems behave as sophisticated ranking systems trained to sort snippets of text from the context As such they resemble document retrieval systems which incentivizes the rediscovery of IR techniques that have been successful for decades BIBREF12 </s>
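For concreteness, the per-question statistics used in this analysis (mean term frequency and median document frequency over the answer contexts) can be computed roughly as follows. This is an illustrative sketch only: it assumes tokens are already split, treats each answer context as one document, and the function names are invented.

```python
from collections import Counter

def tf_df_stats(questions, contexts):
    """Per-question mean term frequency and median document frequency.

    questions: list of token lists (original questions or reformulations)
    contexts:  list of token lists, one per answer context, used for DF
    """
    df = Counter()
    for ctx in contexts:
        df.update(set(ctx))                            # each context counts a term once
    stats = []
    for q in questions:
        counts = Counter(q)
        mean_tf = sum(counts.values()) / len(counts) if counts else 0.0
        dfs = sorted(df[t] for t in counts)
        median_df = dfs[len(dfs) // 2] if dfs else 0
        stats.append({"mean_tf": mean_tf, "median_df": median_df, "length": len(q)})
    return stats

if __name__ == "__main__":
    qs = [["gandhi", "deeply", "influenced", "count", "wrote", "war", "peace"],
          ["what", "is", "name", "gandhi", "gandhi", "influence", "wrote", "peace", "peace"]]
    ctxs = [["gandhi", "was", "influenced", "by", "tolstoy", "who", "wrote", "war", "and", "peace"]]
    for s in tf_df_stats(qs, ctxs):
        print(s)
```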
<s>[INST] Deep Semi-Supervised Learning with Linguistically Motivated Sequence Labeling Task Hierarchies [/INST] It is natural to think of NLP tasks existing in a hierarchy with each task building upon the previous tasks For example Part of Speech POS is known to be an extremely strong feature for Noun Phrase Chunking and downstream tasks such as greedy Language Modeling LM can make use of information about the syntactic and semantic structure recovered from junior tasks in making predictions Conversely information about downstream tasks should also provide information that aids generalisation for junior downstream tasks a form of semisupervised learning Arguably there is a twoway relationship between each pair of tasks Following work such as sogaard2016deep that exploits such hierarchies in a fully supervised setting we represent this hierarchical relationship within the structure of a multitask Recurrent Neural Network RNN where junior tasks in the hierarchy are supervised on inner layers and the parameters are jointly optimised during training Joint optimisation within a hierarchical network acts as a form of regularisation in two ways first it forces the network to learn general representations within the parameters of the shared hidden layers BIBREF0 second there is a penalty on the supervised junior layers for forming a representation and making predictions that are inconsistent with senior tasks Intuitively we can see how this can be beneficial when humans receive new information from one task that is inconsistent with with our internal representation of a junior task we update both representations to maintain a coherent view of the world By incorporating an unsupervised auxiliary task eg plank2016multilingual as the most senior layer we can use this structure for semisupervised learning the error on the unsupervised tasks penalises junior tasks when their representations and predictions are not consistent It is the aim of this paper to demonstrate that organising a network in such a way can improve performance To that end although we do not achieve state of the art results we see a small but consistent performance improvement against a baseline A diagram of our model can be seen in Figure 1 Our Contributions When we speak and understand language we are arguably performing many different linguistic tasks at once At the top level we might be trying to formulate the best possible sequence of words given all of the contextual and prior information but this requires us to do lowerlevel tasks like understanding the syntactic and semantic roles of the words we choose in a specific context This paper seeks to examine the POS tagging Chunking and Language Modeling hierarchy and demonstrate that by developing an algorithm that both exploits this structure and optimises all three jointly we can improve performance In the original introductory paper to Noun Phrase Chunking abney1991parsing Chunking is motivated by describing a threephase process first you read the words and assign a Part of Speech tag you then use a Chunker to group these words together into chunks depending on the context and the Parts of Speech and finally you build a parse tree on top of the chunks The parallels between this linguistic description of parsing and our architecture are clear first we build a prediction for POS we then use this prediction to assist in parsing by Chunk which we then use for greedy Language Modeling In this hierarchy we consider Language Modeling as auxiliary designed to improve performance on 
POS and Chunking and so therefore results are not presented for this task In our model we represent linguistically motivated hierarchies in a multitask BiDirectional Recurrent Neural Network where junior tasks in the hierarchy are supervised at lower layersThis architecture builds upon sogaard2016deep but is adapted in two ways first we add an unsupervised sequence labeling task Language Modeling second we add a lowdimensional embedding layer between tasks in the hierarchy to learn dense representations of label tags In addition to sogaard2016deep Work such as mirowskivlachos2015ACLIJCNLP in which incorporating syntactic dependencies improves performance demonstrates the benefits of incorporating junior tasks in prediction Our neural network has one hidden layer after which each successive task is supervised on the next layer In addition we add skip connections from the hidden layer to the senior supervised layers to allow layers to ignore information from junior tasks A diagram of our network can be seen in Figure 1 Our model has 3 sources of error signals one for each task Since each task is categorical we use the discrete cross entropy to calculate the loss for each task
$H(p, q) = - \sum_{i}^{n_{labels}} p(label_i) \log q(label_i)$
Where nlabels is the number of labels in the task qlabeli is the probability of label i under the predicted distribution and plabeli is the probability of label i in the true distribution in this case a onehot vector During training with fully supervised data POS Chunk and Language Modeling we optimise the mean cross entropy
$Loss(x, y) = \frac{1}{n} \sum_{i}^{n} H(y, f_{task_i}(x))$
Where ftaskix is the predicted distribution on task number i from our model When labels are missing we drop the associated cross entropy terms from the loss and omit the cross entropy calculation from the forward pass Our network is a BiDirectional Recurrent Neural Network BiRNN schuster1997bidirectional with Gated Recurrent Units GRUs cho2014properties chung2014empirical In a BiDirectional RNN we run lefttoright through the sentence and then we run righttoleft This gives us two hidden states at time step t one from the lefttoright pass and one from the righttoleft pass These are then combined to provide a probability distribution for the tag token conditioned on all of the other words in the sentence During training we alternate batches of data with POS and Chunk and Language Model labels with batches of just Language Modeling according to some probability 0 gamma 1 We train our model using the ADAM kingma2014adam optimiser for 100 epochs where one epoch corresponds to one pass through the labelled data We train in batch sizes of 32times 32 We present our experiments on two data sets CoNLL 2000 Chunking data set tjong2000introduction which is derived from the Penn Tree Bank newspaper text marcus1993building and the Genia biomedical corpus kim2003genia derived from biomedical article abstracts These two data sets were chosen since they perform differently under the same classifiers BIBREF1 The unlabelled data for semisupervised learning for newspaper text is the Penn Tree Bank and for biomedical text it a custom data set of Pubmed abstracts We compare the results of our model to a baseline multitask architecture inspired by yang2016multi In our baseline model there are no explicit connections between tasks the only shared parameters are in the hidden layer We also present results for our hierarchical model where there is no training on unlabelled data but there is the LM and confirm previous results that arranging tasks in a hierarchy improves performance Results for both models can be seen for POS in Table 2 and for Chunk in Table 1 Experiments showing the effects of our semisupervised learning regime on models initialised both with and without pretrained word embeddings can be seen in Tables 3 and 4 In models without pretrained word embeddings we see a significant improvement associated with the semisupervised regime However we observe that for models with pretrained word embeddings the positive impact of semisupervised learning is less significant This is likely due to the fact some of the regularities learned using the language model are already contained within the embedding In fact the training schedule of SENNA is similar to that of neural language modelling collobert2011natural Two other points are worthy of mention in the experiments with 100 of the training data First the impact of semisupervised learning on biomedical data is significantly less than on newspaper data This is likely due to the smaller overlap between vocabularies in the training set and vocabularies in the test set Second the benefits for POS are smaller than they are for Chunking this is likely due to the POS weights being more heavily regularised by receiving gradients from both the Chunking and Language Modeling loss Finally we run experiments with only a fraction of the training data to see whether our semisupervised approach makes our models more robust Tables 3 and 4 Here we find variable but consistent improvement in the performance of our tasks even at 1 of the original training data Our model structure 
includes an embedding layer between each task This layer allows us to learn lowdimensional vector representations of labels and expose regularities in a way similar to eg mikolov2013distributed We demonstrate this in Figure 2 where we present a TSNE visualisation of our label embeddings for Chunking and observe clusters along the diagonal In this paper we have demonstrated two things a way to use hierarchical neural networks to conduct semisupervised learning and the associated performance improvements and a way to learn lowdimensional embeddings of labels Future work would investigate how to address Catastrophic Forgetting BIBREF2 the problem in Neural Networks of forgetting previous tasks when training on a new task which leads to the requirement for the mix parameter gamma in our algorithm and prevents such models such as ours from scaling to larger supervised task hierarchies where the training data may be various and disjoint </s>
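A minimal PyTorch-style sketch of the loss described above: the mean cross-entropy over the task heads, with a task's term simply dropped when its labels are missing, as in the semi-supervised batches. The dict-based interface and names are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def multitask_loss(logits_per_task, labels_per_task):
    """Mean cross-entropy over the POS, Chunk and LM heads.

    logits_per_task: task name -> (batch, time, n_labels) scores
    labels_per_task: task name -> (batch, time) gold ids, or None when that
    task is unsupervised for this batch, in which case its cross-entropy term
    is dropped from the average.
    """
    terms = []
    for task, logits in logits_per_task.items():
        labels = labels_per_task.get(task)
        if labels is None:
            continue                                   # missing supervision: skip the term
        terms.append(F.cross_entropy(logits.flatten(0, 1), labels.flatten()))
    return torch.stack(terms).mean()

if __name__ == "__main__":
    B, T = 4, 7
    logits = {"pos": torch.randn(B, T, 45), "chunk": torch.randn(B, T, 23),
              "lm": torch.randn(B, T, 10000)}
    labels = {"pos": torch.randint(0, 45, (B, T)),
              "chunk": torch.randint(0, 23, (B, T)),
              "lm": torch.randint(0, 10000, (B, T))}   # set "pos"/"chunk" to None for unlabeled batches
    print(multitask_loss(logits, labels))
```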
<s>[INST] Context-Aware Learning for Neural Machine Translation [/INST] Despite its rapid adoption by academia and industry and its recent success BIBREF0 neural machine translation has been found largely incapable of exploiting additional context other than the current source sentence This incapability stems from the fact that largercontext machine translation systems tend to ignore additional context such as previous sentences and associated images Much of recent efforts have gone into building a novel network architecture that can better exploit additional context however without much success BIBREF1 BIBREF2 BIBREF3 In this paper we approach the problem of largercontext neural machine translation from the perspective of learning instead We propose to explicitly encourage the model to exploit additional context by assigning a higher logprobability to a translation paired with a correct context than to that paired with an incorrect one We design this regularization term to be applied at token sentence and batch levels to cope with the fact that the benefit from additional context may differ from one level to another Our experiments on documentlevel translation using a modified transformer BIBREF4 reveal that the model trained using the proposed learning algorithm is indeed sensitive to the context contrarily to some previous works BIBREF1 We also see a small improvement in terms of overall quality measured in BLEU These two observations together suggest that the proposed approach is a promising direction toward building an effective largercontext neural translation model A largercontext neural machine translation system extends upon the conventional neural machine translation system by incorporating the context C beyond a source sentence X when translating into a sentence Y in the target language In the case of multimodal machine translation this additional context is an image which the source sentence X describes In the case of documentlevel machine translation the additional context C may include other sentences in a document in which the source sentence X appears Such a largercontext neural machine translation system consists of an encoder fC that encodes the additional context C into a set of vector representations that are combined with those extracted from the source sentence X by the original encoder fX These vectors are then used by the decoder X0 to compute the conditional distribution over the target sequences X1 in the autoregressive paradigm ie X2 where theta is a collection of all the parameters in the neural translation model fX and g are often implemented as neural networks such as recurrent networks with attention BIBREF5 convolutional networks BIBREF6 and selfattention BIBREF7 Training is often done by maximizing the loglikelihood given a set of training triplets mathcal Dtrmathcal Xtrmathcal Ytrmathcal Ctrleftlbrace X1 Y1 C1 ldots XN YN CNrightrbrace The loglikelihood is defined as mathcal Ltheta mathcal D
$= \frac{1}{N} \sum_{n=1}^{N} \sum_{t=1}^{T_n} \log p(y^n_t | y^n_{<t}, X^n, C^n)$ (Eq 1) Once training is done it is a standard practice to use beam search to find a translation that approximately maximizes
$\sum_{t=1}^{T} \log p(y_t | y_{<t}, X, C)$
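Both the training objective in Eq 1 and the decoding objective above are sums of per-token log-probabilities. The sketch below evaluates Eq 1 over a toy dataset; the model object exposing token_log_probs(X, Y, C) is a hypothetical interface used only for illustration and is not tied to any particular toolkit.

```python
import math

def larger_context_log_likelihood(model, data):
    """Eq 1: mean over (X, Y, C) triplets of the summed token log-probabilities."""
    total = 0.0
    for X, Y, C in data:
        total += sum(model.token_log_probs(X, Y, C))   # teacher-forced log p(y_t | y_<t, X, C)
    return total / len(data)

class UniformStub:
    """Toy stand-in model that assigns every target token probability 1/V."""
    def __init__(self, vocab_size=8):
        self.V = vocab_size
    def token_log_probs(self, X, Y, C):
        return [math.log(1.0 / self.V) for _ in Y]

if __name__ == "__main__":
    data = [(["a", "b"], ["x", "y", "z"], ["prev", "sentence"])]
    print(larger_context_log_likelihood(UniformStub(), data))
```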
Existing approaches to largercontext neural machine translation have mostly focused on either modifying the input or the network architecture BIBREF8 concatenate the previous source sentence to the current source sentence which was followed by BIBREF9 who also concatenate the previous target sentence BIBREF2 explore various concatenation strategies when the additional context is an image Other groups have proposed various modifications to the existing neural translation systems BIBREF10 BIBREF11 BIBREF4 BIBREF12 BIBREF13 BIBREF14 BIBREF15 in the case of documentlevel translation while using usual maximum likelihood learning BIBREF16 on the other hand introduces a discriminator that forces the network to improve signaltonoise ratio in the additional context In parallel there have been many proposals on novel network architectures for multimodal translation BIBREF17 BIBREF18 BIBREF19 BIBREF20 In personalized translation BIBREF21 bias the output distribution according to the context All these previous efforts are clearly distinguished from our work in that our approach focuses entirely on a learning algorithm and is agnostic to the underlying network architecture In this paper we focus on learning rather than a network architecture Instead of coming up with a new architecture that facilitates largercontext translation our goal is to come up with a learning algorithm that can be used with any underlying largercontext neural machine translation system To do so we first notice that by the law of total probability ptheta ytyt X
$= \sum_C p_\theta(y_t | y_{<t}, X, C) \, p(C|X)$
$= \mathbb{E}_{C \sim C|X} \left[ p_\theta(y_t | y_{<t}, X, C) \right]$ (Eq 3) As such over the entire distribution of contexts C given a source X the additional context is overall neutral When the context C is used there are two cases First the context may be useful In this case the model can assign a better probability to a correct target token $y_t$ when the context was provided than when it was not $p_\theta(y_t | y_{<t}, X, C) > p_\theta(y_t | y_{<t}, X)$ On the other hand the additional context can certainly be used harmfully $p_\theta(y_t | y_{<t}, X, C) < p_\theta(y_t | y_{<t}, X)$ Although these neutral useful and harmful behaviours are defined at the token level we can easily extend them to various levels by defining the following score functions
(token) $s_{\text{tok}}(y_t | \cdot) = \log p_\theta(y_t | \cdot)$
(sent) $s_{\text{sent}}(Y | \cdot) = \sum_{t=1}^{T} \log p_\theta(y_t | y_{<t}, \cdot)$
(data) $s_{\text{data}}(\mathcal{Y} | \cdot) = \sum_{Y \in \mathcal{Y}} s_{\text{sent}}(Y | \cdot)$
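In code, these three scores are simple aggregations of per-token log-probabilities. The sketch below also computes the with-context minus without-context differences, whose sign distinguishes the useful and harmful cases above; it is an illustrative sketch with invented names, assuming the per-token log-probabilities have already been computed.

```python
def scores(token_logps_per_sent):
    """Token-, sentence- and data-level scores from per-token log-probabilities.

    token_logps_per_sent: list of lists, one inner list of log p(y_t | y_<t, ...)
    values per target sentence. Mirrors s_tok, s_sent and s_data above.
    """
    s_tok = [lp for sent in token_logps_per_sent for lp in sent]
    s_sent = [sum(sent) for sent in token_logps_per_sent]
    s_data = sum(s_sent)
    return s_tok, s_sent, s_data

def context_gain(with_ctx, without_ctx):
    """Score differences with vs. without the context at every level;
    positive values correspond to the 'useful' case described above."""
    tok_c, sent_c, data_c = scores(with_ctx)
    tok_n, sent_n, data_n = scores(without_ctx)
    tok_gain = [a - b for a, b in zip(tok_c, tok_n)]
    sent_gain = [a - b for a, b in zip(sent_c, sent_n)]
    return tok_gain, sent_gain, data_c - data_n

if __name__ == "__main__":
    with_ctx = [[-0.2, -0.1, -0.3], [-0.5, -0.4]]      # toy log-probabilities
    without_ctx = [[-0.4, -0.1, -0.6], [-0.5, -0.7]]
    print(context_gain(with_ctx, without_ctx))
```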
With these scores defined at three different levels we propose to regularize learning to encourage a neural translation system to prefer using the context in a useful way Our regularization term works at all three levelstokens sentences and the entire data and is based on a margin ranking loss BIBREF22 mathcal Rtheta mathcal D
$= \alpha_d \left[ \Big( \sum_{n=1}^{N} T_n \Big) \delta_d - s_{\text{data}}(\mathcal{Y} | \mathcal{X}, \mathcal{C}) + s_{\text{data}}(\mathcal{Y} | \mathcal{X}) \right]_+$
$+ \alpha_s \sum_{n=1}^{N} \left[ T_n \delta_s - s_{\text{sent}}(Y^n | X^n, C^n) + s_{\text{sent}}(Y^n | X^n) \right]_+$
$+ \alpha_\tau \sum_{n=1}^{N} \sum_{t=1}^{T_n} \left[ \delta_\tau - s_{\text{tok}}(y^n_t | y^n_{<t}, X^n, C^n) + s_{\text{tok}}(y^n_t | y^n_{<t}, X^n) \right]_+$ (Eq 5)
where $\alpha_d$ $\alpha_s$ and $\alpha_\tau$ are the regularization strengths at the data sentence and token level $\delta_d$ $\delta_s$ and $\delta_\tau$ are corresponding margin values The proposed regularization term explicitly encourages the usefulness of the additional context at all the levels We use the margin ranking loss to only lightly bias the model to use the context in a useful way but not necessarily force it to fully rely on the context as it is expected that most of the necessary information is already contained in the source X and that the additional context C only provides a little complementary information It is not trivial to compute the score when the context was missing based on Eq 3 as it requires (1) the access to $p(C|X)$ and (2) the intractable marginalization over all possible C In this paper we explore the simplest strategy of approximating $p(C|X)$ with the data distribution of sentences $p_{\text{data}}(C)$ We assume that the context C is independently distributed from the source X ie $p(C|X) = p(C)$ and that the context C follows the data distribution This allows us to approximate the expectation by uniformly selecting M training contexts at random
$s(\cdot | \cdot) = \log p(\cdot | \cdot) \approx \log \frac{1}{M} \sum_{m=1}^{M} p(\cdot | \cdot, C_m)$
where $C_m$ is the $m$-th sample A better estimation of $p(C|X)$ is certainly possible One such approach would be to use a largercontext recurrent language model by BIBREF23 Another possible approach is to use an offtheshelf retrieval engine to build a nonparametric sampler We leave the investigation of these alternatives to the future The conditions for neutral useful and harmful context also serve as bases on which we can build an intrinsic evaluation metric of a largercontext neural machine translation system We propose this metric by observing that for a welltrained largercontext translation system
$\Delta(\mathcal{D}, \theta) = s(\mathcal{Y} | \mathcal{X}, \mathcal{C}; \theta) - s(\mathcal{Y} | \mathcal{X}; \theta) > 0$
while it would be 0 for a largercontext model that completely ignores the additional context We compute this metric over the validation set using the samplebased approximation scheme from above Alternatively we may compute the difference in BLEU Delta BLEUmathcal Dtheta over the validation or test data These metrics are complementary to others that evaluate specific discourse phenomena on specially designed test sets BIBREF9 In Table 1 we present the translation quality in BLEU of the four variants We make a number of observations First the use of previous sentence c does not improve over the baseline ab when the largercontext model was trained only to maximize the loglikelihood 1 We furthermore see that the translation quality of the largercontext model only marginally degrades even when the incorrectly paired previous sentence was given instead Delta BLEUmathcal Dtesttheta 040 implying that this model largely ignores the previous sentence Second we observe that the largercontext model improves upon the baselines trained either without any additional context a or with purely random context b when it was trained with the proposed regularization term d The evaluation metric Delta BLEUmathcal Dtesttheta is also significantly larger than 0 suggesting the effectiveness of the proposed regularization term in encouraging the model to focus on the additional context In Fig 1 we contrast the translation qualities measured in BLEU between having the correctly paired LC and incorrectly paired LCRand previous sentences The sentences in the validation set were sorted according to the difference stextsentYXC stextsentYX and we report the cumulative BLEU scores The gap is large for those sentences that were deemed by the largercontext model to benefit from the additional context This match between the score difference which uses the reference translation and the actual translation quality further confirms the validity of the proposed approach We proposed a novel regularization term for encouraging a largercontext machine translation model to focus more on the additional context using a multilevel pairwise ranking loss The proposed learning approach is generally applicable to any network architecture Our empirical evaluation demonstrates that a largercontext translation model trained by the proposed approach indeed becomes more sensitive to the additional context and outperforms a contextless baseline We believe this work is an encouraging first step toward developing a better contextaware learning algorithm for largercontext machine translation We identify three future directions 1 a better context distribution pCX 2 efficient evaluation of the contextless scores and 3 evaluation using other tasks such as multimodal translation SJ thanks NSERC KC thanks support by AdeptMind eBay TenCent NVIDIA and CIFAR This work was partly supported by Samsung Advanced Institute of Technology Next Generation Deep Learning from pattern recognition to AI and Samsung Electronics Improving Deep Learning using Latent Structure A shared 6layer transformer encoder is used to independently encode an additional context C and a source sentence X
$$c = \text{TransformerEnc}_6(C),$$
$$x = \text{TransformerEnc}_6(X).$$
Using $x$ as queries $q$, a multi-head attention mechanism attends to $c$ as key-values $kv$. The input and output are merged through a gate. The final source representation is obtained through a feed-forward module FF, as used in typical Transformer layers:
$$\hat{x}_c = \text{Attn}(q = x,\; kv = c),$$
$$g = \text{Linear}([x; \hat{x}_c]),$$
$$x_c = \text{FF}\big(g \cdot \text{Dropout}(\hat{x}_c) + (1 - g) \cdot x\big).$$
We use a standard 6-layer Transformer decoder, which attends to $x_c$:
$$p_\theta(y_t \mid y_{<t}, X, C) = \text{TransformerDec}_6(y_{<t}, x_c).$$
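For concreteness, a PyTorch-style sketch of this encoder, merge, and decoder pipeline is shown below. It assumes the standard nn.MultiheadAttention and nn.TransformerEncoder modules, adds a sigmoid so the gate lies in [0, 1], and uses illustrative hyperparameters and random embeddings, so it should be read as a sketch of the described architecture rather than the authors' exact implementation.

import torch
import torch.nn as nn

class ContextGate(nn.Module):
    """Merge the encoded source x with the encoded context c, as described above."""
    def __init__(self, d_model=512, n_heads=8, dropout=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout,
                                          batch_first=True)
        self.gate = nn.Linear(2 * d_model, d_model)
        self.drop = nn.Dropout(dropout)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                nn.Linear(4 * d_model, d_model))

    def forward(self, x, c):
        # x: (B, T_x, d) encoded source; c: (B, T_c, d) encoded context
        x_hat, _ = self.attn(query=x, key=c, value=c)             # attend to the context
        g = torch.sigmoid(self.gate(torch.cat([x, x_hat], -1)))   # gate (sigmoid assumed)
        merged = g * self.drop(x_hat) + (1.0 - g) * x             # gated merge
        return self.ff(merged)                                    # x_c, fed to the decoder

# Shared 6-layer encoder applied to both the source X and the context C
enc_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(enc_layer, num_layers=6)
src_emb, ctx_emb = torch.randn(2, 20, 512), torch.randn(2, 35, 512)  # placeholder embeddings
x, c = encoder(src_emb), encoder(ctx_emb)
x_c = ContextGate()(x, c)   # a standard 6-layer Transformer decoder then attends to x_c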
</s>
<s>[INST] Recurrent Deep Stacking Networks for Speech Recognition [/INST] Ever since the introduction of Deep Neural Networks (DNNs) to Automatic Speech Recognition (ASR) tasks BIBREF0, researchers have been trying to use additional inputs beyond the raw input features. We extracted features that are more representative using the first- and second-order derivatives of the raw input features, and we utilized features in multiple neighboring frames to make use of the context information. Efforts have also been continuously made in designing and modifying more powerful models. We designed Recurrent Neural Networks (RNNs) BIBREF1 for context-sensitive applications, Convolutional Neural Networks (CNNs) BIBREF2 for image pattern classification, and many other variants of conventional DNNs. In addition, we reintroduced Long Short-Term Memory (LSTM) BIBREF3, making our DNNs more capable of incorporating large amounts of data and making accurate predictions. In the area of Robust ASR, although it is always helpful to incorporate more data, we still lack a model as well-designed as the CNN in Computer Vision (CV). Many methods were proposed on both the front-end BIBREF4 and the back-end; the models in this paper belong to the back-end methods.

Inspired by recent progress in Natural Language Processing BIBREF5, we proposed the Recurrent Deep Stacking Network (RDSN) and successfully applied it to Speech Enhancement tasks. RDSN utilizes the phoneme information in previous frames as additional inputs to the raw features. From another perspective, this framework transforms the Acoustic Model into a hybrid model consisting of an Acoustic Model and a simple N-gram Language Model at the phoneme level. In the next section we will explain the framework of RDSN and tricks to compress the outputs; then we will show the experimental results and make a conclusion.

As indicated in its name, a Recurrent Deep Stacking Network stacks and concatenates the outputs of previous frames into the input features of the current frame. If we view acoustic models in ASR systems as functions projecting input features to probability density outputs, we can see the differences between conventional systems and RDSN more clearly. Denote the input features at frame $t$ as $x_t$ and the output at frame $t$ as $y_t$. We can see that RDSN tries to model

$$P(y_t \mid x_{t-k}, \ldots, x_{t-1}, x_t, x_{t+1}, \ldots, x_{t+k}, y_{t-1}, \ldots, y_{t-k}) \qquad \text{(Eq. 2)}$$

while conventional DNNs try to model

$$P(y_t \mid x_{t-k}, \ldots, x_{t-1}, x_t, x_{t+1}, \ldots, x_{t+k}) \qquad \text{(Eq. 3)}$$

Note that if we want the RDSN to be causal, we can simplify it to

$$P(y_t \mid x_t, x_{t-1}, \ldots, x_{t-k}, y_{t-1}, \ldots, y_{t-k}) \qquad \text{(Eq. 4)}$$

where $k$ in the above formulas represents the number of recurrent frames. Figure 1 shows the framework of RDSN. Adding $y_{t-1}, \ldots, y_{t-k}$ as additional inputs, we transform the pure acoustic model into a hybrid model consisting of an acoustic model and a phoneme-level N-gram model representing the relation between the current phone and previous phones. The phoneme-level N-gram (or, as in the formulas above, k-gram) model provides additional information at the phoneme level, trying to make the output of the current frame more accurate and robust with respect to noises and reverberations.

Since the output dimensions of acoustic models are usually in the thousands, adding multiple recurrent outputs would significantly increase the size of the model. We use a compression method based on the correspondence between the DNN outputs and monophone states. For each output dimension, we find the corresponding monophone and add its value to those of the other output dimensions that have the same corresponding monophone. This process compresses the dimension from thousands to about forty. Compressing the output dimensions enables us to add multiple recurrent outputs and keep the input vector dense.
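As an illustration of how such compressed recurrent inputs might be assembled, the NumPy sketch below sums the senone posteriors that map to the same monophone and concatenates them with the spliced acoustic features. The senone-to-monophone mapping, the default dimensions, and the simple boundary clamping are assumptions made for the example, not details taken from the paper.

import numpy as np

def compress_outputs(posteriors, senone_to_mono, num_mono=40):
    """Sum the DNN output dimensions that share a monophone: (D,) -> (num_mono,)."""
    compressed = np.zeros(num_mono)
    np.add.at(compressed, senone_to_mono, posteriors)  # group-wise accumulation
    return compressed

def rdsn_input(feats, prev_outputs, senone_to_mono, t, left=9, right=9):
    """Build the RDSN input for frame t: spliced acoustic features plus the
    compressed outputs of the `left` previous frames (boundaries are clamped)."""
    clamp = lambda i: min(max(i, 0), len(feats) - 1)
    spliced = np.concatenate([feats[clamp(t + i)] for i in range(-left, right + 1)])
    recurrent = np.concatenate([compress_outputs(prev_outputs[clamp(t - j)],
                                                 senone_to_mono)
                                for j in range(1, left + 1)])
    return np.concatenate([spliced, recurrent])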
Originating from a similar idea as RDSN, a Bi-Pass Stacking Network (BPSN) takes as input both conventional features and the outputs of previous frames. However, the way a Bi-Pass Stacking Network generates the representations of previous frames is through a two-pass scheme similar to Deep Stacking Networks (DSN) BIBREF6. During the first pass, BPSN sets all the recurrent inputs to zeros and concatenates the zero vector with the extracted features. After getting the outputs, we compress them and use the compressed outputs as the additional inputs to the second pass. The second pass takes as input both the compressed outputs and the extracted features. The difference between BPSN and DSN is that, instead of stacking representations of different levels all from the current frame, BPSN utilizes the information from previous frames as well. Note that we can naturally extend the Bi-Pass Stacking Network to networks with more than two passes: to add a third pass, we can take as additional inputs the outputs of the second pass. Stacking the outputs of the previous passes, we can use as many passes as we want.

We conducted experiments on the CHiME-4 dataset using all of the single-channel utterances. The total length of the dataset is around 100 hours. The training set is a simple mixture of utterances with different background noise types and has 8738 utterances; the development set consists of 3280 utterances and the test set of 2640. For GMM-HMM model training and decoding, we used the recipe in Kaldi. In our preliminary experiments, the baseline was a conventional DNN with 6 hidden layers, exponential Rectified Linear Units (ReLUs), and dropout. Each hidden layer has 1024 nodes. The input features extracted from the utterances were 40-dimensional MFCC features. We concatenated 9 features from previous frames and 9 from the following frames, forming a 760-dimensional input vector. The output has 3161 dimensions. For both RDSN and BPSN, we took 9 previous outputs as additional inputs, forming a 1138-dimensional vector. The other network settings were all kept the same as the baseline. We used the 15th epoch of the baseline model as the initial model for RDSN and BPSN. Some preliminary results are shown in Figure 2. We can see from Figure 2 that, after a short adjustment, the cross entropies of RDSN quickly decreased to values substantially lower than those of the baseline system. The adjustment may be due to the fact that in the first epoch the weights corresponding to the additional inputs were all random.

In this paper, we proposed a Recurrent Deep Stacking Network (RDSN) based speech recognition system and an efficient substitute for RDSN, the Bi-Pass Stacking Network (BPSN). These two models convert the pure acoustic model into a hybrid structure consisting of both an acoustic model and a phoneme-level N-gram model. Note that both RDSN and BPSN can be extended to other types of neural networks, such as LSTM RNNs. We tested the performance of our models on the CHiME-4 dataset and obtained good results. This performance improvement is likely to be consistent across all types of acoustic models, because it provides additional phoneme-level information on top of the acoustic model. Future work includes using a larger RDSN to compete with the state-of-the-art models on the CHiME challenges and designing recurrent deep stacking networks from LSTM RNNs. </s>