text (string, lengths 4–222k) · label (int64, 0–4)
Fluent information flow is important in any information-intensive area of decision making, but critical in healthcare. Clinicians are responsible for decisions that can have life-and-death consequences for their patients. This flow is defined as links, channels, contact, or communication to a pertinent person or people in the organisation (Glaser et al., 1987). In Australian healthcare, failures in this flow are associated with over one-tenth of preventable adverse events (ACS, 2008; ACS, 2012). Failures in the flow become tangible in clinical handover, that is, when a clinician transfers professional responsibility and accountability, for example, at shift change (AMA, 2006). Even when verbal handover is accurate and comprehensive, anything from two-thirds to all of this information is lost after three to five shifts if no notes are taken or they are taken by hand (Pothier et al., 2005; Matic et al., 2011).

There is a proposal to use a semi-automated approach of speech to text (STT) and information extraction (IE) for taking the handover notes (Suominen et al., 2013). First, an STT (a.k.a. speech recognition) engine converts verbal information into written, free-form text. Then, an IE system fills out a handover form by automatically identifying relevant text snippets for each slot of the form. Finally, this pre-filled form is given to a clinician to proof and sign off.

The semi-automated approach poses an STT challenge. First, the correctness of STT is challenged by background noise, other people's voices, and other characteristics of clinical practice that are far from a typical setting in a peaceful office. Second, STT errors multiply when cascaded with IE. Third, the correctness of cascaded STT and IE needs to be evaluated carefully and held to a high standard, because of the severe implications that errors may have for clinical decision-making. In summary, the original voice (i.e., the information) needs to be heard through the big noise of the clinical setting and STT errors.

Motivated by this challenge, we provide an analysis of STT errors in this paper and discuss the feasibility of phonetic similarity for their correction. Phonetic similarity (PS, a.k.a. phonetic distance) addresses perceptual confusion between speech sounds and is used to improve STT (Mermelstein, 1976). To illustrate phonetically similar words, PS measures can be seen as the rites of righting writing that is right.

The rest of the paper is organised as follows: In Section 2, we provide background for clinical STT and IE. In Section 3, we describe our simulated handover data, STT methods, PS measures, and analysis methods. In Section 4, we present the results of the error analysis and discuss the feasibility of phonetic similarity for error correction. In Section 5, final conclusions and directions for future work are given.
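As an illustration of the kind of phonetic-similarity measure discussed above, the sketch below maps a suspicious STT output word to the closest entry in a small clinical vocabulary using a simplified Soundex code. This is a toy illustration under our own assumptions (the vocabulary, the example words, and the choice of Soundex are ours), not the PS measures evaluated in the paper.

```python
# A minimal sketch (not the paper's method): correct a misrecognised STT word by
# mapping it to the phonetically closest entry in a small clinical vocabulary,
# using a simplified Soundex code as the phonetic-similarity (PS) measure.
import difflib

SOUNDEX_MAP = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
               **dict.fromkeys("dt", "3"), "l": "4", **dict.fromkeys("mn", "5"), "r": "6"}

def soundex(word: str) -> str:
    """Simplified Soundex: first letter plus collapsed digit codes, padded to 4 chars."""
    word = word.lower()
    out, prev = [], SOUNDEX_MAP.get(word[0], "")
    for ch in word[1:]:
        code = SOUNDEX_MAP.get(ch, "")
        if code and code != prev:
            out.append(code)
        prev = code
    return (word[0].upper() + "".join(out) + "000")[:4]

def correct(word: str, vocabulary: list[str]) -> str:
    """Return the phonetically closest vocabulary word, or the word itself."""
    same_code = [v for v in vocabulary if soundex(v) == soundex(word)]
    candidates = same_code or vocabulary
    best = difflib.get_close_matches(word, candidates, n=1, cutoff=0.0)
    return best[0] if best else word

# Hypothetical handover vocabulary and a misrecognised STT token.
vocab = ["cannula", "catheter", "penicillin", "observations"]
print(soundex("canola"), soundex("cannula"))   # C540 C540 -> phonetically similar
print(correct("canola", vocab))                # cannula
```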
0
Previous research has shown that formal ontologies can serve not only as a means to provide a uniform and flexible approach to integrating and describing heterogeneous data sources, but also to support the final user in querying them, thus improving the usability of the integrated system. To support wide access to these data sources, it is crucial to develop efficient and user-friendly ways to query them (Wache et al., 2001).

In this paper, we present a Natural Language (NL) interface for an ontology-based query tool, called Quelo, which allows the end user to formulate a query without any knowledge either of the formal languages used to specify ontologies or of the content of the ontology being used. Following the conceptual authoring approach described in (Tennant et al., 1983; Hallett et al., 2007), this interface masks the composition of a formal query as the composition of an English text describing the equivalent information needs using natural language generation techniques. The natural language generation system that we propose for Quelo's NL interface departs from similar work (Hallett et al., 2007; Franconi et al., 2010a; Franconi et al., 2011b; Franconi et al., 2010b; Franconi et al., 2011a) in that it makes use of standard grammar-based surface realisation techniques. Our contribution is twofold. First, we introduce a chart-based surface realisation algorithm which supports the kind of incremental processing required by ontology-driven query formulation. Crucially, this algorithm avoids confusing the end user by preserving a consistent ordering of the query elements throughout the incremental query formulation process. Second, we show that grammar-based surface realisation supports the generation of fluent, natural-sounding queries better than previous template-based approaches.

The paper is structured as follows. Section 2 discusses related work and situates our approach. Section 3 describes the task being addressed, namely ontology-driven query formulation; it introduces the input being handled, the constraints under which generation operates, and the operations the user may perform to build her query. In Section 4, we present the generation algorithm used to support the verbalisation of possible queries. Section 5 reports on an evaluation of the system with respect to fluency, clarity, coverage and incrementality. Section 6 concludes with pointers for further research.
0
People use analogy heavily in written explanations. Instructional texts, for example, use analogy to convey new concepts and systems of related ideas to learners. Any learning by reading system must ultimately include the capability of understanding such analogies. Here we combine Gentner's (1983) structure-mapping theory with ideas from dialogue act theory (Traum, 2000) to describe a catalog of analogical dialogue acts (ADAs) which capture the functional roles that discourse elements play in instructional analogies. We outline criteria for identifying ADAs in text and describe what operations they suggest for discourse processing. We provide evidence that this model captures important aspects of understanding instructional analogies via a simulation that uses knowledge gleaned from reading instructional analogies to answer questions.We start by reviewing the relevant aspects of structure-mapping theory and dialogue act theory. Then we describe our catalog of analogical dialogue acts, based on a theoretical analysis of the roles structure-mapping operations can play in language understanding. A prototype implementation of these ideas is described next, followed by an experiment illustrating that these ideas can be used to understand analogies in text, based on answering questions. We close with a discussion of related and future work.
0
Bilingual lexicons are an important resource for various natural language processing applications such as computer-assisted translation or cross-language information retrieval. Although work based on parallel corpora has shown very good results, this type of corpus remains difficult to collect (Fung and Yee, 1998), especially when dealing with specialised corpora or rare or less commonly used language pairs. (A parallel corpus is a set of texts accompanied by their translations into one or more languages (Bowker and Pearson, 2002).) The exploitation of comparable corpora marked a turning point in the task of bilingual lexicon extraction, and it has attracted constant interest since the mid-1990s thanks to the abundance and availability of such corpora (Rapp, 1995; Fung, 1995; Rapp, 1999; Déjean et al., 2002; Gaussier et al., 2004; Laroche and Langlais, 2010). As the rise of the Web has considerably eased the collection of large quantities of multilingual data, comparable corpora have naturally established themselves as an alternative to parallel corpora. They have given rise to several lines of work whose common denominator is the hypothesis that words which are translations of each other are likely to appear in the same contexts (Rapp, 1999). This hypothesis follows directly from the often-quoted proposition of Firth (1957): "You shall know a word by the company it keeps." Rapp (1995) and Fung (1995) were the first to introduce comparable corpora. They relied on the idea of characterising the context of words, unlike work based on parallel corpora, which relied on positional information. In 1998, the direct method was introduced; it has been taken up in many later works, notably those of (Rapp, 1999). In this method, the translation of a word involves several steps. The word is first characterised by a vector representing its context. This vector is then translated into the target language with the help of a dictionary, also called a transfer lexicon or pivot lexicon. Finally, this vector is compared with all the context vectors of the target-language words, and the n closest are extracted as candidate translations. Subsequently, part of the literature focused on adapting and improving this method for different types of corpora (general-language or specialised corpora), and for different languages and different types of terms (single-word terms, complex terms, collocations, etc.) (Déjean and Gaussier, 2002). New methods have also been proposed, such as the cross-language similarity approach (Déjean and Gaussier, 2002) and the use of Canonical Correlation Analysis (CCA) (Haghighi et al., 2008). Recently, Li and Gaussier (2010) and Li et al. (2011) addressed the inverse problem, which consists in improving the comparability of comparable corpora in order to increase the effectiveness of bilingual lexicon extraction methods.

Most work using comparable corpora shares context as its common denominator: context is at the heart of bilingual lexical extraction. The main question to ask is then the following: given any word, how do we choose the words that best characterise its context?
According to the state of the art, the context of a given word is usually represented by the words that form its environment, that is, the words surrounding it. These words are extracted either with a contextual window (Rapp, 1999; Déjean and Gaussier, 2002) or through syntactic dependency relations (Gamallo, 2007). One of the problems underlying contexts extracted with contextual windows is the choice of the window size. It is usually set empirically, and although different studies have shown a tendency to choose small windows when characterising frequent words and large windows when characterising infrequent words (Prochasson and Morin, 2009), this remains imprecise since there is still no method regarded as optimal for choosing the size of the contextual window. As for syntactic dependency relations, their effectiveness is very sensitive to corpus size, and although this representation is more interesting from a semantic point of view, it reaches its limits when dealing with small corpora. A proposal that naturally comes to mind is to use these two representations jointly in order to benefit from their respective advantages. A first approach exploiting both representations, proposed by Andrade et al. (2011), combines four statistical models and compares lexical dependencies to identify candidate translations. In this article, we propose another way of combining the two contextual representations, starting from the intuition that this combination would smooth the context by taking into account two complementary sources of information: (i) the global information conveyed by the contextual-window representation and (ii) the finer-grained semantic information provided by syntactic dependency relations. The objective is to improve the contextual representation and the performance of bilingual lexicon extraction from comparable corpora.

In the remainder of this article, we present in Section 2 the two main contextual representation strategies. Section 3 then describes our two context-combination approaches. Section 4 focuses on the evaluation of the implemented methods. We end with a discussion in Section 5 and a conclusion in Section 6.
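To make the direct method summarized above concrete, here is a minimal sketch of its three steps: build a context vector with a fixed window, translate it through a seed bilingual lexicon, and rank candidate translations by similarity. The windowing, the cosine measure, and the data structures are illustrative assumptions rather than the exact configuration used in the works cited above.

```python
# A minimal sketch of the "direct method" for bilingual lexicon extraction from
# comparable corpora: context vector -> projection via a seed lexicon -> cosine ranking.
from collections import Counter
from math import sqrt

def context_vector(word, corpus_sentences, window=3):
    """Co-occurrence counts of `word` within a +/- window of tokens."""
    vec = Counter()
    for sent in corpus_sentences:
        for i, tok in enumerate(sent):
            if tok == word:
                for ctx in sent[max(0, i - window): i] + sent[i + 1: i + 1 + window]:
                    vec[ctx] += 1
    return vec

def translate_vector(vec, seed_lexicon):
    """Project a source-language vector into the target language via the seed lexicon."""
    out = Counter()
    for src_word, count in vec.items():
        for tgt_word in seed_lexicon.get(src_word, []):
            out[tgt_word] += count
    return out

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def rank_candidates(src_word, src_corpus, tgt_corpus, tgt_vocab, seed_lexicon, n=5):
    """Return the n target words whose context vectors are closest to the projected vector."""
    projected = translate_vector(context_vector(src_word, src_corpus), seed_lexicon)
    scores = {t: cosine(projected, context_vector(t, tgt_corpus)) for t in tgt_vocab}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:n]
```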
0
Visual icons play a crucial role in conveying an extra layer of information in social media. SemEval 2018 Task 2 (Multilingual Emoji Prediction) asked researchers to predict, given a tweet in English or Spanish, its most likely associated emoji (Barbieri et al., 2017, 2018); it is organised into two optional subtasks (subtask 1 and subtask 2), in English and Spanish respectively.

For subtask 1, we adopt a combination model to predict emojis, which consists of traditional Natural Language Processing (NLP) methods and deep learning methods. The results returned by the classifier with traditional NLP features, by the neural network model, and by the combination model are voted to obtain the final result. For subtask 2, we only use the deep learning model.
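A minimal sketch of the voting step mentioned above is given below, assuming each component system returns one emoji label per tweet; the tie-breaking rule and the label names are our own assumptions, not necessarily the authors' choices.

```python
# A minimal sketch of combining three systems by majority vote; ties are broken by a
# fixed priority order (an assumption, not necessarily the authors' rule).
from collections import Counter

def vote(predictions, priority):
    """Majority vote over per-system labels; ties resolved by `priority` order."""
    counts = Counter(predictions)
    best = max(counts.values())
    tied = [label for label, c in counts.items() if c == best]
    return min(tied, key=priority.index)

system_outputs = ["red_heart", "fire", "red_heart"]          # hypothetical labels
print(vote(system_outputs, priority=["red_heart", "fire"]))  # -> red_heart
```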
0
The task of question answering (QA) in Natural Language Processing typically involves producing an answer for a given question using a context that contains evidence to support the answer. The latest advances in pre-trained language models have resulted in performance close to (and sometimes exceeding) human performance when fine-tuned on several QA benchmarks (Brown et al., 2020; Bao et al., 2020; Raffel et al., 2020). However, to achieve this result, these models need to be fine-tuned on tens of thousands of examples. In a more realistic and practical scenario, where only a handful of annotated training examples are available, their performance degrades significantly. For instance, Ram et al. (2021) show that, when only 16 training examples are available, RoBERTa-base (Liu et al., 2019) and SpanBERT-base (Joshi et al., 2020) obtain F1 scores of 7.7 and 18.2 respectively on SQuAD (Rajpurkar et al., 2016). This is far lower than the F1 scores of 90.3 and 92.0 obtained when using the full training set of more than 100,000 examples.

Through experimental analysis, we observe that this degradation is largely attributable to the disparities between the fine-tuning and pre-training frameworks (a combination of the input-output design and the training objective). To address this, we propose a fine-tuning framework (referred to as FewshotQA hereafter) that is directly aligned with the pre-training framework, in terms of both the input-output design and the training objective. Specifically, we construct the input as a concatenation of the question, a mask token and the context (in that order) and fine-tune a text-to-text pre-trained model using the same objective used during its pre-training to recover the answer. These text-to-text pre-trained models were originally trained to recover missing spans of text in a given input sequence. Since our proposed fine-tuning setup is nearly identical to the pre-training setup, the model can make the best use of the pre-training "knowledge" for the fine-tuning task of question answering.

The effectiveness of our FewshotQA system is shown by its strong results (an absolute average gain of 34.2 F1 points) on multiple QA benchmarks in a few-shot setting. We show that the gains extend further when larger models are used. We also test FewshotQA on a multilingual benchmark by replacing the pre-trained model with its multilingual counterpart and observe significant gains in comparison to a strong XLM-Roberta baseline (an absolute gain of 40 F1 points when there are only 16 training examples).

2 Few-shot fine-tuning framework design

Our proposed few-shot fine-tuning framework design involves a different choice of input-output design and training objective than the current standard for QA fine-tuning frameworks. We provide a motivation for this design by comparison with the existing frameworks. Figure 1 illustrates this in detail, contrasting the masked input-output sequences of BERT*, BART and T5; the pre-training framework is also pictured for comparison. Note that we focus on bi-directional masked language models (MLMs) instead of auto-regressive language models (such as GPT-2 (Radford et al., 2019)), as MLMs are typically deemed superior for QA tasks (Lewis et al., 2020). Figure 1a illustrates the comparison between pre-training setups for three types of models.
Firstly, there are BERT-style encoder-only models (referred to as BERT*) that are pre-trained with the standard masked language modeling objective (also called a denoising objective) of predicting the masked tokens in an input sequence I. The masked tokens here typically correspond to a single word or a sub-word. Then, BART (Lewis et al., 2020) uses a corrupted-input reconstruction objective to recover the original input.
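To make the FewshotQA input-output design described above concrete, the sketch below builds the source and target strings for a text-to-text model. The mask token string and the exact target format depend on the chosen pre-trained model's denoising objective, so both are flagged as assumptions.

```python
# A minimal sketch of the FewshotQA-style example construction: the input is
# question + mask token + context, and the model is asked to recover the answer
# for the masked span.
def build_example(question: str, context: str, answer: str, mask_token: str = "<mask>"):
    # Input: question, then the mask token standing in for the unknown answer, then context.
    source = f"{question} {mask_token} {context}"
    # Target: the answer to be recovered for the masked span (the exact target format
    # depends on the chosen model's denoising objective; this is an assumption).
    target = answer
    return source, target

q = "What sport is he playing?"
c = "A tennis player is about to hit the ball."
src, tgt = build_example(q, c, "tennis")
print(src)   # What sport is he playing? <mask> A tennis player is about to hit the ball.
print(tgt)   # tennis
```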
0
This paper presents a hybrid approach to deep semantic machine translation. For that purpose, however, the linguistic phenomena that constitute deep semantics have to be defined. A list of such phenomena has been considered in (Hajič, 2011) and (Bos, 2013), among others. They include, but are not limited to, the following: Semantic Roles (words vs. predicates), Lexical Semantics (Word Sense Disambiguation (WSD)), Multiword Expressions (MWE), Logical Form (LF), Metonymy, Named Entities (NE), Co-reference (pronominal, bridging anaphora), Verb Phrase Ellipsis, Collective/Distributive NPs, Scope (Negation, Quantifiers), Presuppositions, Tense and Aspect, Illocutionary Force, Textual Entailment, Discourse Structure/Rhetorical Relations, neo-Davidsonian Events, Background Knowledge, Information Structure, etc. All the mentioned phenomena represent various levels of granularity and different linguistic dimensions.

In our deep Machine Translation (MT) system we decided to exploit the following components in the transfer phase: Lexical Semantics (WSD), Multiword Expressions (MWE), Named Entities (NE) and Logical Form (LF). For the incorporation of Lexical Semantics through the exploitation of WordNet and a valency dictionary, a knowledge-based approach to WSD has been adopted. Concerning the LF, we rely on Minimal Recursion Semantics (MRS) in its two variants: the full one (MRS) and the more underspecified one (Robust MRS (RMRS)). The MWEs and NEs are parts of the lexicons. We should note that there are also other appropriate LF frameworks, briefly mentioned below.

One of the MRS-related semantic formalisms is the Abstract Meaning Representation (AMR), which aims at achieving whole-sentence deep semantics instead of addressing various isolated holders of semantic information (such as NER, co-references, temporal anchors, etc.). AMR also builds on the available syntactic trees, thus contributing to the efforts on sembanking. It is English-dependent and it makes extensive use of PropBank framesets (Kingsbury and Palmer, 2002; Palmer et al., 2005). Its concepts are either English words or special keywords. AMR uses approximately 100 relations, including frame arguments, general semantic relations, relations for quantities and date-entities, etc.

The Groningen Meaning Bank (GMB) integrates various phenomena in one formalism. It has a linguistically motivated, theoretically solid (CCG/DRT) background.

In this paper, NLP strategies are presented for hybrid deep machine translation from English to Bulgarian. By hybrid MT we understand the usage of the automatic Moses system together with a rule-based component at the transfer phase.

The paper is structured as follows: in Section 2 the components of the hybrid MT architecture are presented. Section 3 discusses the deep semantic processing. Section 4 reports on the current experiments and results. Section 5 concludes the paper.
0
Endeavors to better understand transformer-based masked language models (MLMs), such as BERT, have been ever growing since their introduction in 2017 (cf. Rogers et al. (2020) for an overview). While the BERTology movement has enhanced our knowledge of the reasons behind BERT's performance in various ways, plenty still remains unanswered. Less well studied and more challenging are linguistic phenomena where, besides contextual information, the identification of an antecedent is needed, such as relative clauses (RCs). Prior work has, for example, analyzed BERT's comprehension of function words, showing that relativizers and prepositions are quite challenging for BERT; similarly, RCs have been found to be difficult for BERT in the CoLA acceptability tasks. In this paper, we focus on RCs in American English to further enhance our understanding of the grammatical and semantic knowledge captured by pre-trained MLMs, evaluating three models: BERT, RoBERTa, and ALBERT. For our analysis, we train probing classifiers, consider each model's performance on diagnostic cases, and test predictions in a masked language modeling task on selected semantic and grammatical constraints of RCs.

RCs are clausal post-modifiers specifying a preceding noun phrase (the antecedent) and are introduced by a relativizer (e.g., which). Extensive corpus research (Biber et al., 1999) found that the overall most common relativizers are that, which, and who. The relativizer occupies the subject or object position in a sentence (see examples (1-a) and (1-b)). In subject RCs, the relativizer is obligatory (Huddleston and Pullum, 2002, 1055), while in object position omission is licensed (e.g., zero in example (1-b)).

(1) a. Children who eat vegetables are likely to be healthy. (subject relativizer, relativizer is obligatory)
b. This is the dress [that/which/zero] I brought yesterday. (object relativizer, omission possible)

Relativizer choice depends on an interplay of different factors. Among these factors, the animacy constraint (Quirk, 1957) is near-categorical: for animate head nouns the relativizer who (see Example 1) is strongly prioritized (especially over which) (D'Arcy and Tagliamonte, 2010).

Our aims are (1) to better understand whether sentence representations of pre-trained MLMs capture grammaticality in the context of RCs, (2) to test the generalization abilities and weaknesses of probing classifiers with complex diagnostic cases, and (3) to test the prediction of antecedents and relativizers in a masked task, also considering linguistic constraints. From a linguistic perspective, we ask whether MLMs correctly predict (a) grammatically plausible relativizers given certain types of antecedents (animate, inanimate) and, vice versa, grammatically plausible antecedents given certain relativizers (who vs. which/that), and (b) semantically plausible antecedents given certain relativizers, considering the degree of specificity of predicted antecedents in comparison to target antecedents (e.g., boys as a more specific option than children in Example (1)). Moreover, we are interested in how these findings agree with probing results, and we investigate model-specific behavior, evaluating and comparing the recent pre-trained MLMs BERT, RoBERTa, and ALBERT.
To our knowledge, this is the first attempt to compare and analyze the performance of different transformer-based MLMs in such detail, investigating grammatical and semantic knowledge beyond probing. Our main contributions are the following: (1) the creation of a naturalistic dataset for probing, (2) a detailed comparison of three recent pre-trained MLMs, and (3) a fine-grained linguistic analysis of grammatical and semantic knowledge. Overall, we find that all three MLMs show good performance on the probing task. Further evaluation, however, reveals model-specific issues with wrong agreement (where RoBERTa is strongest) and with the distance between antecedent and relativizer and between relativizer and RC verb (on which BERT and ALBERT are better). Considering linguistic knowledge, all models perform better on grammatical than on semantic knowledge. Of the relativizers, which is the hardest to predict. Considering model-specific differences, BERT outperforms the others in predicting the actual targets, while RoBERTa best captures grammatical and semantic knowledge. ALBERT performs worst overall.
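As an illustration of the masked language modeling task used in this analysis, the sketch below queries a pre-trained MLM for the relativizer of an animate antecedent via the Hugging Face fill-mask pipeline. The specific model name and sentence are illustrative; the paper's evaluation covers BERT, RoBERTa, and ALBERT with controlled diagnostic sets.

```python
# A minimal sketch (not the authors' exact setup) of probing relativizer choice with a
# masked language modeling task, using the Hugging Face `transformers` fill-mask pipeline.
# Requires `pip install transformers torch`; downloads pre-trained weights on first run.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Animate antecedent: a grammatically aware model should prefer "who" over "which".
for pred in fill("Children [MASK] eat vegetables are likely to be healthy.", top_k=5):
    print(f"{pred['token_str']:>10}  {pred['score']:.3f}")
```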
0
Recent years have seen an increased interest as well as rapid progress in semantic parsing and surface realization based on graph-structured semantic representations, e.g. Abstract Meaning Representation (AMR; Banarescu et al., 2013), Elementary Dependency Structures (EDS; Oepen and Lønning, 2006) and Dependency-based Minimal Recursion Semantics (DMRS; Copestake, 2009). Still underexploited is a formal framework for manipulating graphs that parallels automata, transducers or formal grammars for strings and trees. Two such formalisms have recently been proposed and applied to NLP. One is graph grammar, e.g. Hyperedge Replacement Grammar (HRG; Ehrig et al., 1999). The other is DAG automata, originally studied by Kamimura and Slutzki (1982) and extended by Chiang et al. (2018). In this paper, we study DAG transducers in depth, with the goal of building accurate, efficient yet robust natural language generation (NLG) systems.

The meaning representation studied in this work is what we call type-logical semantic graphs, i.e. semantic graphs grounded in type-logical semantics (Carpenter, 1997), one dominant theoretical framework for modeling natural language semantics. In this framework, adjuncts, such as adjectival and adverbial phrases, are analyzed as (higher-order) functors whose function is to consume complex arguments (Kratzer and Heim, 1998). In the same spirit, generalized quantifiers, prepositions and function words in many languages other than English are also analyzed as higher-order functions. Accordingly, all these linguistic elements are treated as roots in type-logical semantic graphs, such as EDS and DMRS. This makes the topological structure quite flat rather than hierarchical, which is an essential distinction between natural language semantics and syntax.

To the best of our knowledge, the only existing DAG transducer for NLG is the one proposed by Quernheim and Knight (2012). Quernheim and Knight introduced a DAG-to-tree transducer that can be applied to AMR-to-text generation. This transducer is designed to handle hierarchical structures with limited reentrancies, and it is unsuitable for meaning graphs derived from type-logical semantics. Furthermore, Quernheim and Knight did not describe how to acquire graph recognition and transduction rules from linguistic data, and reported no results of practical generation. It is still unknown to what extent a DAG transducer suits realistic NLG.

The design of string and tree transducers (Comon et al., 1997) focuses not only on the logic of the computation for a new data structure, but also on the corresponding control flow. This is very similar to the imperative programming paradigm: implementing algorithms with exact details in explicit steps. This design makes it very difficult to transform a type-logical semantic graph into a string, due to the fact that their internal structures are highly diverse. We borrow ideas from declarative programming, another programming paradigm, which describes what a program must accomplish rather than how to accomplish it. We propose a novel DAG transducer to perform graph-to-program transformation (§3). The input of our transducer is a semantic graph, while the output is a program licensed by a declarative programming language rather than a linguistic structure. By executing such a program, we can easily get a surface string. This idea can be extended to other types of linguistic structures, e.g.
syntactic trees or semantic representations of another language.

We conduct experiments on the richly detailed semantic annotations licensed by the English Resource Grammar (ERG; Flickinger, 2000). We introduce a principled method to derive transduction rules from DeepBank (Flickinger et al., 2012). Furthermore, we introduce a fine-to-coarse strategy to ensure that at least one sentence is generated for any input graph. Taking EDS graphs, a variable-free ERS format, as input, our NLG system achieves a BLEU-4 score of 68.07. On average, it produces more than 5 sentences per second on an x86_64 GNU/Linux platform with two Intel Xeon E5-2620 CPUs. Since the data for the experiments is newswire data, i.e. WSJ sentences from the PTB (Marcus et al., 1993), the input graphs are quite large on average. The remarkable accuracy, efficiency and robustness demonstrate the feasibility of applying a DAG transducer to NLG, as well as the effectiveness of our transducer design.
0
Recently, many kinds of natural language processing systems, such as machine translation systems, have been developed and put into practical use, but ambiguity resolution in translation and meaning interpretation is still the primary issue in such systems. These systems have conventionally adopted a rule-based disambiguation method, using linguistic restrictions described logically in dictionaries and grammars to select the suitable equivalent translation and meaning. Generally speaking, it is impossible to provide all the restrictions systematically in advance. Furthermore, such machine translation systems have suffered from an inability to select the most suitable equivalent translation if the input expression meets two or more restrictions, and have difficulty in accepting any input expression that meets no restrictions.

In order to overcome these difficulties, the following methods have been proposed in recent years. Still, each of them has inherent problems and is insufficient for ambiguity resolution. For example, either an example-based translation method or a statistics-based translation method needs a large-scale database of translation examples, and it is difficult to collect an adequate amount of a bilingual corpus.

In this paper, we propose a new method to select the suitable equivalent translation using statistical data extracted independently from source and target language texts [Muraki 91]. The statistical data used here are linguistic statistics representing the dependency degree of pairs of expressions in each text, especially statistics for co-occurrence, i.e., how frequently the expressions co-occur in the same sentence, the same paragraph or the same chapter of each text. The dependency relation in the source language is reflected in the translated text through a bilingual dictionary by selecting the equivalent translation which maximizes both statistics for co-occurrence in the source and target language text. Moreover, the method also provides the means to compute the linguistic statistics on pairs of meaning expressions. We call this method for equivalent translation and meaning selection DMAX Criteria (Double Maximize Criteria based on Dual Corpora).

First, we comment on the characteristics and the limits of the conventional methods of ambiguity resolution in translation and meaning interpretation in the second section. Next, we describe the details of the DMAX Criteria for equivalent translation selection in the third section. Finally, we explain the means to compute the linguistic statistics on pairs of meaning expressions.
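The following sketch illustrates the double-maximization idea behind the DMAX Criteria with toy co-occurrence tables: the equivalent translation of an ambiguous word is chosen so that co-occurrence strength is high in both the source-side and target-side statistics. The scoring function (a simple product) and the example data are our own assumptions, not the paper's implementation.

```python
# A minimal sketch, not the paper's DMAX implementation: pick the equivalent translation
# of an ambiguous word by jointly maximizing co-occurrence strength with the (already
# translated) neighbouring word in both the source-side and target-side statistics.
# The co-occurrence tables and the bilingual dictionary below are toy, hypothetical data.

src_cooc = {("bank", "river"): 12, ("bank", "money"): 30}          # source-text counts
tgt_cooc = {("rive", "fleuve"): 25, ("banque", "fleuve"): 1,       # target-text counts
            ("rive", "argent"): 0, ("banque", "argent"): 40}
bilingual = {"bank": ["rive", "banque"], "river": ["fleuve"], "money": ["argent"]}

def select_translation(word, context_word):
    """Return the candidate translation with the highest joint co-occurrence score."""
    best, best_score = None, float("-inf")
    for cand in bilingual[word]:
        for ctx_tr in bilingual[context_word]:
            score = src_cooc.get((word, context_word), 0) * tgt_cooc.get((cand, ctx_tr), 0)
            if score > best_score:
                best, best_score = cand, score
    return best

print(select_translation("bank", "river"))   # -> rive
print(select_translation("bank", "money"))   # -> banque
```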
0
Topic adaptation is a technique for adapting language models based on small contexts of information that may not necessarily reflect an entire domain or genre. In scenarios such as lecture translation, it is advantageous to perform language model adaptation on the fly to reflect topical changes in a discourse. In these scenarios, general-purpose domain adaptation techniques fail to capture the nuances of discourse; while domain adaptation works well in modeling newspapers and government texts, which contain a limited number of subtopics, the genres of lectures and speech may cover a virtually unbounded number of topics that change over time. Instead of general-purpose adaptation, adaptation should be performed on smaller windows of context.

Most domain adaptation techniques require the re-estimation of an entire language model to leverage out-of-domain corpora in the construction of robust models. While efficient algorithms exist for domain adaptation, they are in practice intended to adapt language models globally over a new translation task. Topic adaptation, on the other hand, intends to adapt language models as relevant contextual information becomes available. For a speech, the relevant contextual information may come in sub-minute intervals. Well-established and efficient techniques such as Minimum Discrimination Information (MDI) adaptation [1, 2] are unable to perform topic adaptation in real-time scenarios for large-order n-gram language models. In practice, new contextual information is likely to be available before techniques such as MDI have finished LM adaptation from earlier contexts. Thus, spoken language translation systems are typically unable to use state-of-the-art techniques for the purpose of topic adaptation.

In this paper, we seek to apply MDI adaptation techniques in real-time translation scenarios by avoiding the computation of the normalization term that requires all n-grams to be re-estimated. Instead, we only wish to adapt n-grams that appear within an adaptation context. Dubbed "Lazy MDI", our technique uses the same unigram ratios as MDI, but avoids normalization by applying smoothing transformations based on a sigmoid function that is added as a new feature to the conventional log-linear model of phrase-based statistical machine translation (SMT). We observe that Lazy MDI performs comparably to classic MDI in topic adaptation for SMT, but possesses the desired scalability features for real-time adaptation of large-order n-gram LMs.

This paper is organized as follows: In Section 2, we discuss relevant previous work. In Section 3, we review MDI adaptation. In Section 4, we describe Lazy MDI adaptation for machine translation and review how unigram statistics of adaptation texts can be derived using bilingual topic modeling. In Section 5, we report adaptation experiments on TED talks from IWSLT 2010 and 2012, followed by our conclusions and suggestions for future work in Section 6.
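A minimal sketch of the Lazy MDI feature described above is given below: each n-gram is scored with a sigmoid-smoothed combination of unigram adaptation ratios, avoiding re-normalization of the whole language model. The exact transformation and the toy distributions are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch of a Lazy-MDI-style feature: sigmoid of the summed log unigram
# ratios between the adaptation-context distribution and the background distribution.
import math

def lazy_mdi_feature(ngram, p_adapt, p_background, gamma=1.0):
    """Sigmoid-smoothed unigram adaptation score for one n-gram."""
    score = 0.0
    for w in ngram:
        ratio = p_adapt.get(w, 1e-9) / p_background.get(w, 1e-9)
        score += math.log(ratio)
    return 1.0 / (1.0 + math.exp(-gamma * score))

# Hypothetical unigram distributions estimated from the adaptation context and the
# background corpus.
p_adapt = {"neural": 0.01, "network": 0.008, "the": 0.05}
p_background = {"neural": 0.0005, "network": 0.0008, "the": 0.05}
print(lazy_mdi_feature(("neural", "network"), p_adapt, p_background))  # close to 1.0
print(lazy_mdi_feature(("the",), p_adapt, p_background))               # 0.5 (no shift)
```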
0
Scientific publications play an important role in the dissemination of advances, and they are often reviewed and accepted by professionals in the domain before publication to maintain quality. In order to avoid unfairness due to identity, affiliation, and nationality biases, peer review systems have been studied extensively (Yankauer, 1991; Blank, 1991; Lee et al., 2013), including analysis of the opinions of venue editors (Brown, 2007; Baggs et al., 2008) and evaluation of review systems (Yankauer, 1991; Tomkins et al., 2017). It is widely believed that a possible solution for avoiding biases is to keep the author identity hidden from the reviewers, called double-blind review, as opposed to only hiding the identity of the reviewers, as in single-blind review (Lee et al., 2013). Since some personal information (e.g., author, affiliation and nationality) could implicitly affect the review results (Lee et al., 2013), double-blind review requires that it be kept anonymous, but this is not foolproof. For example, experienced reviewers may identify some of the authors of a submitted manuscript from the context. In addition, the citation list in the submitted manuscript can be useful in identifying them (Brown, 2007), but it is indispensable, as it plays an important role in the reviewing process by referring readers to related work and emphasizing how the manuscript differs from the cited work.

To investigate blindness in double-blind review systems, Hill and Provost (2003) and Payer et al. (2015) train a classifier to predict the authors and analyze the results. However, they focus primarily on the utility of self-citations in the submitted manuscripts as a key to identification (Mahoney et al., 1978; Yankauer, 1991; Hill and Provost, 2003; Payer et al., 2015), and do not take the author's citation history beyond self-citations into account. The experimental design in these studies is also limited: they use relatively small datasets, include papers only from a specific domain (e.g., physics (Hill and Provost, 2003), computer science (Payer et al., 2015) or natural language processing (Caragea et al., 2019)), and pre-select the set of papers and authors for evaluation (Payer et al., 2015; Caragea et al., 2019). Furthermore, they focus on author identification, whereas knowing the affiliation and the nationality also introduces biases into the reviewing process (Lee et al., 2013).

In this paper, we use the task of author identity, affiliation, and nationality prediction to analyze the extent to which citation patterns matter, evaluate our approach on large-scale datasets in many domains, and provide detailed insights into the ways in which identity is leaked. We make the following contributions: 1. We propose approaches to identify the aspects of citation patterns that enable us to guess authors, affiliations, and nationalities accurately. To the best of our knowledge, this is the first study to do so. Though related studies mainly suggest that authors avoid self-citations to increase the anonymity of submitted papers, we show that overlap between the citations in the paper and the author's previous citations is an incredibly strong signal, even stronger than self-citations in some settings. 2.
Our empirical study is performed on (i) a real-world large-scale dataset covering various fields of study (computer science, engineering, mathematics, and social science), (ii) different relations between papers and authors, and (iii) two identification situations: "guess-at-least-one" and "cold start". For the former, we identify authors, affiliations and nationalities of the affiliations with 40.3%, 47.9% and 86.0% accuracy respectively, from the top-10 guesses. For the latter, we focus on papers whose authors are not "guessable", and find that the nationalities are still identifiable. 3. We perform further analysis on the results to answer some common questions about blind-review systems: "Which authors are most identifiable in a paper?", "Are prominent affiliations easier to identify?", and "Are double-blind reviewed papers more anonymized than single-blind ones?". One of the interesting findings is that 93.8% of test papers written by a prominent company can be identified within the top-10 guesses. The dataset used in this work is publicly available, and the complete source code for processing the data and running the experiments is also available.
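The sketch below illustrates the citation-overlap signal discussed above: candidate authors of an anonymous submission are ranked by the overlap between the submission's reference list and each candidate's citation history. The Jaccard scoring and the toy data are illustrative assumptions, not the paper's exact model.

```python
# A minimal sketch of ranking candidate authors by citation-history overlap with an
# anonymous submission's reference list.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_authors(submission_refs, author_histories, top_k=10):
    """author_histories: {author_id: iterable of paper ids cited in past work}."""
    scores = {a: jaccard(submission_refs, refs) for a, refs in author_histories.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

# Toy example: author A's past citations overlap most with the submission.
histories = {"author_A": ["p1", "p2", "p3", "p7"], "author_B": ["p4", "p5"],
             "author_C": ["p2", "p6"]}
print(rank_authors(["p1", "p2", "p7", "p9"], histories, top_k=3))
```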
0
Human communication, in real-life situations, is multimodal (Kress, 2010): to convey and understand a message uttered in natural language, people build on what is present in the multimodal context surrounding them. As such, speakers do not need to "repeat" something that is already provided by the environment; similarly, listeners leverage information from various modalities, such as vision, to interpret the linguistic message. Integrating information from multiple modalities is indeed crucial for attention and perception (Partan and Marler, 1999), since combined information from concurrent modalities can give rise to different messages (McGurk and MacDonald, 1976).

The argument that language and vision convey different, possibly complementary aspects of meaning has largely been made to motivate the need for multimodal semantic representations of words (Baroni, 2016; Beinborn et al., 2018). However, computational approaches to language and vision typically do not fully explore this complementarity. To illustrate, given an image (e.g., the one depicted in Figure 1), popular tasks involve describing it in natural language, e.g., "A tennis player about to hit the ball" (Image Captioning; see Bernardi et al., 2016); answering questions that are grounded in it, e.g., Q: "What sport is he playing?", A: "Tennis" (Visual Question Answering; see Antol et al., 2015); or having a dialogue about its entities, e.g., Q: "Is the person holding a racket?", A: "Yes." (visually-grounded dialogue; see De Vries et al., 2017; Das et al., 2017). While all these tasks challenge models to perform visual grounding, i.e., an effective alignment of language and vision, none of them requires a genuine combination of complementary information provided by the two modalities. All the information is fully available in the visual scene, and language is used to describe or retrieve it.

In this work, we propose a novel benchmark, Be Different to Be Better (in short, BD2BB), where the different, complementary information provided by the two modalities should push models to develop a better, richer multimodal representation. As illustrated in Figure 1, models are asked to choose, among a set of candidate actions, the one that a person who sees the visual context depicted by the image would take based on a certain intention (i.e., their goal, attitude or feeling). Crucially, the resulting multimodal input (the sum of the image and the intention) will be richer than that conveyed by either modality in isolation; in fact, the two modalities convey complementary or non-redundant information (Partan and Marler, 1999).

To illustrate, a model that only relies on the (non-grounded) linguistic information conveyed by the intention, i.e., "If I have tons of energy", might consider as equally plausible any actions that have to do with playing a sport, e.g., "I will play baseball with the men" or "I will play a game of tennis with the man".

[Figure 1: One real sample of our proposed task. Given an image depicting, e.g., a tennis player during a match and the intention "If I have tons of energy", the task involves choosing, from a list of 5 candidate actions, the target action that unequivocally applies to the combined multimodal input: "I will play a game of tennis with the man". The task is challenging: a model exploiting a language or vision bias could fall into the trap of decoy actions containing words highlighted in blue or orange, respectively. Therefore, selecting the target action requires models to perform a genuine integration of the two modalities, whose information is complementary. Best viewed in color.]

Similarly, a model that only relies on the visual information conveyed by the image (a tennis player during a match) might consider as equally plausible any actions that have to do with 'tennis' and/or 'player', e.g., "I will applaud my favourite tennis player of all time" or "I will play a game of tennis with the man". In contrast, a model that genuinely combines information conveyed by both modalities should be able to select the target action, namely the only one that is both consistent with the intention and grounded in the image, i.e., "I will play a game of tennis with the man". Moreover, similarly to real-life communicative scenarios, in our approach different language inputs modulate the same visual context differently, and this gives rise to various multimodal messages. To illustrate, if the image in Figure 1 is paired with the intention "If I am tired of watching", the target action "I will play a game of tennis with the man" is no longer valid. Indeed, the target action in this context is "I will leave the tennis court" (see Figure 3).

Our work has the following key contributions:

• We introduce a novel multimodal benchmark: the set of ∼10K image, intention, action datapoints collected via crowdsourcing and enriched with meta-annotation, and the multiple-choice task, BD2BB, which requires proper integration of language and vision and is specifically aimed at testing SoA pretrained multimodal models. The benchmark, together with the code and trained models, is available at: https://sites.google.com/view/bd2bb
• We test various models (including the SoA multimodal, transformer-based LXMERT; Tan and Bansal, 2019) and show that, while BD2BB is a relatively easy task for humans (∼80% acc.), the best systems struggle to achieve a similar performance (∼60% acc.).
• We extensively analyze the results and show the advantage of exploiting multimodal pretrained representations. This confirms they are effective, but not enough to solve the task.
0
Transliteration is the transformation of a piece of text from one language's writing system into another. Since the transformation is mostly explained by local substitutions, deletions, and insertions, we treat word transliteration as a sequence labeling problem (Ganesh et al., 2008; Reddy and Waxmonsky, 2009), using linear-chain conditional random fields as our model (Lafferty et al., 2001; Sha and Pereira, 2003). We tailor this model to the transliteration task in several ways.

First, for the Arabic-English task, each Arabic input is paired with multiple valid English transliteration outputs, any of which is judged to be correct. To effectively exploit these multiple references during learning, we use a training objective in which the model may favor some correct transliterations over others. Computationally efficient inference is achieved by encoding the references in a lattice.

Second, inference for our first-order sequence labeling model requires a runtime that is quadratic in the number of labels. Since our labels are character n-grams in the target language, we must cope with thousands of labels. To make the most of each inference call during training, we apply a mini-batch training algorithm which converges quickly.

Finally, we wish to consider some global features that would render exact inference intractable. We therefore use a reranking model (Collins, 2000).

We demonstrate the performance benefits of these modifications on the Arabic-English transliteration task, using the open-source library cdec (Dyer et al., 2010) for learning and prediction.
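The sketch below illustrates the sequence-labeling view of transliteration described above: each source character receives a label that is a target-language character n-gram (possibly empty), and the output is the concatenation of the labels. The Arabic example and its labeling are hypothetical; the paper learns such labelings with a linear-chain CRF rather than a lookup.

```python
# A minimal sketch of transliteration as per-character sequence labeling: the label of
# each source character is a target-language character n-gram, and decoding is simply
# concatenation of the predicted labels.
def decode(source_chars, labels):
    """Concatenate per-character labels into an output string."""
    assert len(source_chars) == len(labels)
    return "".join(labels)

# Hypothetical labeling for the name كريم ("Kareem"): one label per Arabic character.
source = ["ك", "ر", "ي", "م"]
labels = ["ka", "r", "ee", "m"]      # labels are character n-grams in English
print(decode(source, labels))        # -> kareem
```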
0
Since OWL (Web Ontology Language) was adopted as a standard in 2004, researchers have sought ways of mediating between the (decidedly cumbersome) raw code and the human users who aspire to view or edit it. Among the solutions that have been proposed are more readable coding formats such as Manchester OWL Syntax (Horridge et al., 2006) and graphical interfaces such as Protégé (Knublauch et al., 2004); more speculatively, several research groups have explored ways of mapping between OWL and controlled English, with the aim of presenting ontologies (both for viewing and editing) in natural language (Schwitter and Tilbrook, 2004; Sun and Mellish, 2006; Kaljurand and Fuchs, 2007; Hart et al., 2008). In this paper we uncover and test some assumptions on which this latter approach is based.

Historically, ontology verbalisation evolved from a more general tradition (predating OWL and the Semantic Web) that aimed to support knowledge formation by automatic interpretation of texts authored in Controlled Natural Languages (Fuchs and Schwitter, 1995). The idea is to establish a mapping from a formal language to a natural subset of English, so that any sentence conforming to the Controlled Natural Language (CNL) can be assigned a single interpretation in the formal language, and conversely, any well-formed statement in the formal language can be realised in the CNL. With the advent of OWL, some of these CNLs were rapidly adapted to the new opportunity: part of Attempto Controlled English (ACE) was mapped to OWL (Kaljurand and Fuchs, 2007), and Processable English (PENG) evolved into Sydney OWL Syntax (SOS) (Cregan et al., 2007). In addition, new CNLs were developed specifically for editing OWL ontologies, such as Rabbit (Hart et al., 2008) and Controlled Language for Ontology Editing (CLOnE) (Funk et al., 2007).

In detail, these CNLs display some variations: thus an inclusion relationship between the classes Admiral and Sailor would be expressed by the pattern 'Admirals are a type of sailor' in CLOnE, 'Every admiral is a kind of sailor' in Rabbit, and 'Every admiral is a sailor' in ACE and SOS. However, at the level of general strategy, all the CNLs rely on the same set of assumptions concerning the mapping from natural to formal language; for convenience we will refer to these assumptions as the consensus model. In brief, the consensus model assumes that when an ontology is verbalised in natural language, axioms are expressed by sentences, and atomic terms are expressed by entries from the lexicon. Such a model may fail in two ways: (1) an ontology might contain axioms that cannot be described transparently by a sentence (for instance, because they contain complex Boolean expressions that lead to structural ambiguity); (2) it might contain atomic terms for which no suitable lexical entry can be found. In the remainder of this paper we first describe the consensus model in more detail, then show that although

Logic          OWL
C ⊓ D          IntersectionOf(C D)
∃P.C           SomeValuesFrom(P C)
C ⊑ D          SubClassOf(C D)
a ∈ C          ClassAssertion(C a)
[a, b] ∈ P     PropertyAssertion(P a b)
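The sketch below illustrates the consensus model described above: each axiom is verbalised by a sentence pattern and each atomic term by a lexicon entry. The patterns follow the ACE/SOS style quoted in the text ('Every admiral is a sailor'); the lexicon, the axiom encoding, and the second pattern are illustrative assumptions.

```python
# A minimal sketch of the "consensus model": one sentence pattern per axiom type,
# one lexicon entry per atomic term. Not any particular CNL's actual grammar.
lexicon = {"Admiral": "admiral", "Sailor": "sailor"}

def verbalise(axiom):
    kind, *args = axiom
    if kind == "SubClassOf":
        sub, sup = (lexicon.get(a, a.lower()) for a in args)
        return f"Every {sub} is a {sup}."
    if kind == "ClassAssertion":
        cls, individual = args
        return f"{individual} is a {lexicon.get(cls, cls.lower())}."
    raise ValueError(f"No sentence pattern for {kind}")

print(verbalise(("SubClassOf", "Admiral", "Sailor")))    # Every admiral is a sailor.
print(verbalise(("ClassAssertion", "Sailor", "Nelson"))) # Nelson is a sailor.
```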
0
Automatic paraphrase identification (PI) is the task of detecting whether two texts convey the same meaning. For example, the following two sentences from the Microsoft Research Paraphrase Corpus (MSRP) (Dolan et al., 2004):

S1a: Although it's unclear whether Sobig was to blame, The New York Times also asked employees at its headquarters yesterday to shut down their computers because of "system difficulties."
S1b: The New York Times asked employees at its headquarters to shut down their computers yesterday because of "computing system difficulties."

are paraphrases, while these other two are not:

S2a: Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases, agreed.
S2b: "We have been somewhat lucky," said Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases.

Most previous work on automatic PI, e.g., (Madnani et al., 2012; Socher et al., 2011), is based on a direct comparison between the two texts, exploiting different similarity scores within a machine learning framework. However, these methods consider sentences as monolithic units and can thus be misled by ancillary information that does not modify the main meaning expressed in the text.

For example, the additional text fragment (ATF) "Although it's unclear whether Sobig was to blame" from S1a expresses ancillary information, which does not add much to the message of S1b, so the sentences are considered paraphrases. In contrast, S2b contains the ATF "We have been somewhat lucky", whose meaning is not linked to any constituent of S2a. Since this text expresses relevant information, the two sentences are not considered paraphrases.

In this paper, we study and design models for extracting ATFs from a sentence with respect to another one and classifying whether their meaning is ancillary or important. For this purpose, we built a corpus of sentence pairs using MSRP, where at least one pair member always contains ATFs. We use SVMs with tree kernels applied to syntactic representations (Severyn and Moschitti, 2012) of ATFs for learning automatic ancillary text classifiers (ATCs).

The results obtained on MSRP show (i) a promising accuracy of our ATC and (ii) that the output of the ATC can be used as a feature for improving the state-of-the-art PI model.
0
The concept of "translating" an error sentence into a correct one was first researched by Brockett et al. (2006) . They proposed a statistical machine translation (SMT) system with noisy channel model to correct automatically erroneous sentences for learners of English as a Second Language (ESL).It seems that a statistical machine translation toolkit has become increasingly popular for grammatical error correction. In the CoNLL-2014 shared task on English grammatical error correction (Ng et al., 2014) , four teams of 13 participants each used a phrase-based SMT system. Grammatical error correction using a phrasebased SMT system can be improved by tuning using evaluation metrics such as F 0.5 (Kunchukuttan et al., 2014; Wang et al., 2014) or even a combination of different tuning algo-rithms (Junczys-Dowmunt and Grundkiewicz, 2014) . In addition, SMT can be merged with other methods. For example, the language modelbased and rule-based methods can be integrated into a single sophisticated but effective system (Felice et al., 2014) .For Chinese, SMT has also been used to correct spelling errors (Chiu et al., 2013) . Furthermore, as is shown in NLP-TEA-1, an SMT system can be applied to Chinese grammatical error correction if we can employ a large-scale learner corpus (Zhao et al., 2014) .In this study, we extend our previous system (Zhao et al., 2014) to the NLP-TEA-2 shared task on Chinese grammatical error diagnosis, which is based on SMT. The main contribution of this study is as follows: We investigate the hierarchical phrasebased model (Chiang et al., 2005) and determine that it yields higher recall and thus F score than does the phrase-based model, but is less accurate. We increase our Chinese learner corpus by web scraping (Yu et al., 2012; Cheng et al., 2014) and show that the greater the size of the learner corpus, the better the performance. We perform minimum error-rate training (Och, 2003) using several evaluation metrics and demonstrate that tuning improves the final F score.
0
Large-scale neural language models have made great strides on a series of language generation tasks such as machine translation (Bahdanau et al., 2014; Vaswani et al., 2017; Raffel et al.), text summarization (See et al., 2017; Lewis et al., 2019; Zhang et al., 2019a), conversational dialog generation (Serban et al., 2015; Lowe et al., 2017; Roller et al., 2020; Zhang et al., 2020), etc. However, despite the successes achieved by these models on several conditional generation tasks, they continue to suffer from degenerate behaviors such as repetition, a lack of diversity, dullness, and incoherence, especially in open-ended generation settings such as text completion and dialog modeling (Holtzman et al., 2019). This degenerate behavior is often attributed to a mismatch between the maximum likelihood training and the generation procedure (Welleck et al., 2019; Choi et al., 2020; Li et al., 2016). (Source code to reproduce our experiments is available at https://github.com/kushalarora/quantifying_exposure_bias.)

Maximum likelihood training, also referred to as teacher forcing (Williams and Zipser, 1989), factorizes the language model as a linear chain and maximizes the log-likelihood of this factorized language model on a training corpus. During this maximum likelihood training, the model learns a distribution over next tokens conditioned on contexts from the ground-truth training data.

A concern with MLE-based training is that the ground-truth contexts from the training corpus are not available during generation. Rather, the conditioning contexts during this phase comprise tokens previously generated by the model itself. The distribution of these contexts seen during the generation phase might be very different from the ones encountered during the training phase. This mismatch is referred to as exposure bias (Ranzato et al., 2016).

A side effect of exposure bias is that an error at any step during generation might have a cascading effect, as the next context will incorporate this erroneous prediction, deviating away from the ground-truth context distribution and leading to more errors. These errors result in sequences that degenerate over the sequence length, yielding incoherent text, a lack of vocabulary diversity, detachment from the source sequence resulting in hallucination, and/or word- and phrase-level repetition.

There is an active debate in the language generation community on the impact of exposure bias in language generation. Authors have both validated (Xu et al., 2019; Zhang et al., 2019b) and questioned (He et al., 2019) the impact of exposure bias on language generation. Several approaches have been proposed to mitigate exposure bias (Ranzato et al., 2016; Shen et al., 2016; Bahdanau et al., 2017; Leblond et al., 2018; Welleck et al., 2019), but these have neither formalized exposure bias clearly nor provided any empirical evidence that these methods mitigate its effect.
Finally, previous works have linked exposure bias to out-of-domain (Wang and Sennrich, 2020) and out-of-distribution (Schmidt, 2019) generalization, and to hallucinations (Wang and Sennrich, 2020), but these claims remain weak in the absence of a clear and principled formalization of the exposure bias issue. In this paper, we attempt to clarify this confusion by formalizing exposure bias in terms of the accumulation of errors and analyzing its impact on generation quality. We do so by providing a theoretically grounded understanding of the exposure bias issue, analyzing it from an imitation learning perspective. We use this perspective to show that behavior cloning, an imitation learning algorithm, is equivalent to teacher forcing under a particular choice of loss function. We then exploit this equivalence to borrow the bound on error accumulation caused by behavior cloning and use it to quantify exposure bias and analyze error accumulation in language generation. Finally, we use this quantifiable definition of exposure bias to demonstrate that models trained using teacher forcing do suffer from an accumulation of errors. We also show, both analytically and empirically, why perplexity fails to capture this error accumulation, and how lower exposure bias correlates with better generation quality.
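A toy, self-contained sketch may help illustrate the context-distribution mismatch described above. This is not the paper's estimator: it tracks how the marginal context distribution drifts apart when contexts are produced by a "true" bigram process (as in teacher forcing) versus by a slightly mis-estimated model of it (as in free-running generation). The two transition tables are invented for illustration.

```python
# Illustrative only: contexts produced at generation time drift away from the
# contexts seen during training. We propagate the marginal context distribution
# under the true process and under the model, and report their total-variation gap.
true_next = {"A": {"A": 0.9, "B": 0.1}, "B": {"A": 0.2, "B": 0.8}}
model_next = {"A": {"A": 0.8, "B": 0.2}, "B": {"A": 0.3, "B": 0.7}}

def step(dist, kernel):
    out = {"A": 0.0, "B": 0.0}
    for ctx, p in dist.items():
        for nxt, q in kernel[ctx].items():
            out[nxt] += p * q
    return out

data_ctx = {"A": 1.0, "B": 0.0}   # contexts the model was trained on
model_ctx = {"A": 1.0, "B": 0.0}  # contexts the model produces at generation time
for t in range(1, 21):
    data_ctx = step(data_ctx, true_next)
    model_ctx = step(model_ctx, model_next)
    tv = 0.5 * sum(abs(data_ctx[s] - model_ctx[s]) for s in data_ctx)
    if t in (1, 5, 10, 20):
        print(f"t={t:2d}  TV(data contexts, model contexts) = {tv:.3f}")
```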
0
Recent language models (LMs) such as BERT and its successors are remarkable at memorizing knowledge seen frequently during training; however, performance degrades over the long tail of rare facts. Given the importance of factual knowledge for tasks such as question answering, search, and personal assistants (Bernstein et al., 2012; Poerner et al., 2020; Orr et al., 2020), there has been significant interest in injecting these base LMs with factual knowledge about entities (Zhang et al., 2019; Peters et al., 2019, inter alia). In this work, we propose a simple and effective approach for enhancing LMs with knowledge, called metadata shaping. Existing methods to capture entity knowledge more reliably typically use the following steps: first, annotating natural language text with entity metadata, and next, modifying the base LM to learn from the tagged data. Entity metadata is obtained by linking substrings of text to entries in a knowledge base such as Wikidata, which stores entity IDs, types, descriptions, and relations. Model modifications include introducing continuous vector representations for entities or auxiliary objectives (Zhang et al., 2019; Peters et al., 2019; Yamada et al., 2020; Xiong et al., 2020; Joshi et al., 2020a; Su et al., 2021). Other methods combine multiple learned modules, which are each specialized to handle fine-grained reasoning patterns or subsets of the data distribution (Chen et al., 2019; Wang et al., 2021). These knowledge-aware LMs have led to impressive gains compared to base LMs on entity-rich tasks. That said, the new architectures are often designed by human experts, costly to pretrain and optimize, and require additional training as new entities appear. Further, these LMs may not use the collected entity metadata effectively: Wikidata alone holds over ∼100M unique entities, yet many of these entities fall under similar categories, e.g., "politician" entities. Intuitively, if unseen entities encountered during inference share metadata with entities observed during training, an LM trained with this information may be able to better reason about the new entities using patterns learned from similar seen entities. However, the knowledge-aware LMs learn from individual entity occurrences rather than learning these shared reasoning patterns. Implicitly learning entity similarities for 100M entities may be challenging, since 89% of the Wikidata entities do not appear at all in Wikipedia, a popular source of unstructured training data for the LMs. [Figure 1: Metadata shaping inserts metadata (e.g., entity types and descriptions) strings into train and test examples. The FewRel benchmark involves identifying the relation between a subject and object string. The subject and object shown are unseen in the FewRel training data, and the tuned base LM reflects low attention weights on those words; a base LM trained with shaped data reflects high attention weights on useful metadata words such as "politician". Weights are shown for words which are not stop-words, punctuation, or special tokens.] We thus ask: to what extent can we match the quality of knowledge-aware LM architectures using the base LM itself? We find that applying some simple modifications to the data at train and test time, a method we call metadata shaping, is surprisingly effective. Given unstructured text, there are several readily available tools for generating entity metadata at scale (e.g., Manning et al. (2014); Honnibal et al.
(2020)), and knowledge bases contain entity metadata including type tags (e.g., Barack Obama is a "politician") and descriptions (e.g., Barack Obama "enjoys playing basketball"). Our method entails explicitly inserting retrieved entity metadata into examples, as in Figure 1, and inputting the resulting shaped examples to the LM. Our contributions are: Simple and Effective Method We propose metadata shaping and demonstrate its effectiveness on standard benchmarks that are used to evaluate knowledge-aware LMs. Metadata shaping, with simply an off-the-shelf base LM, exceeds the base LM trained on unshaped data by an average of 4.3 F1 points and is competitive with state-of-the-art methods, which do modify the LM. Metadata shaping thus enables re-using well-studied and optimized base LMs. We show that metadata shaping improves tail performance: the observed gain from shaping is on average 4.4x larger for the slice of examples containing tail entities than for the slice containing popular entities. Metadata establish "subpopulations", groups of entities sharing similar properties, in the entity distribution (Zhu et al., 2014; Cui et al., 2019; Feldman, 2020). For example, on the FewRel benchmark (Han et al., 2018), "Daniel Dugléry" (a French politician) appears 0 times, but "politician" entities in general appear > 700 times in the task training data. Intuitively, performance on a rare entity should improve if the LM has the explicit information that it is similar to other entities observed during training. Explainability Existing knowledge-aware LMs use metadata (Peters et al., 2019; Alt et al., 2020), but do not explain when and why different metadata help. Inspired by classic feature selection techniques (Guyon and Elisseeff, 2003), we conceptually explain the effect of different metadata on generalization error. We hope this work motivates further research on addressing the tail challenge through the data.
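Because the core idea is a data transformation rather than an architecture change, a minimal sketch of the shaping step is easy to give. The metadata dictionary, the [SEP]-style separator, and the formatting below are illustrative assumptions, not the paper's exact pipeline; in practice the metadata would come from an entity linker and a knowledge base.

```python
# A minimal sketch of metadata shaping: retrieved entity types and descriptions
# are inserted into the raw example as plain strings before it is fed to an
# off-the-shelf LM. METADATA here is a hypothetical stand-in for a knowledge base.
METADATA = {
    "Daniel Dugléry": {"types": ["politician"],
                       "description": "French politician"},
}

def shape_example(text, entities, metadata=METADATA):
    """Append type tags and descriptions for each linked entity to the input text."""
    extras = []
    for ent in entities:
        info = metadata.get(ent)
        if info is None:
            continue
        extras.append(f"{ent} ({', '.join(info['types'])}): {info['description']}")
    return text if not extras else text + " [SEP] " + " [SEP] ".join(extras)

print(shape_example("Daniel Dugléry was elected mayor in 2001.", ["Daniel Dugléry"]))
```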
0
Incremental processing formalisms have increasing importance due to the growing ubiquity of spoken dialogue systems that require understanding and generation in real time using rich, robust semantics. Dialogue systems benefit from incremental processing in terms of shorter response time to the user's requests, since the dialogue system can start interpreting and serving the request (e.g. by consulting databases, doing reference resolution, backchannelling or starting to generate an answer (Aist et al., 2007; Schuler et al., 2009; Skantze and Schlangen, 2009)) before the request is fully stated. Another use of formalisms that support strict incrementality is psycholinguistic modelling: as there is a substantial amount of evidence that human sentence processing is highly incremental, computational models of human sentence processing should be incremental to the same degree. Such models can then be used to calculate measures of human sentence processing difficulty, such as surprisal, which have been demonstrated to correspond to reading times (e.g., Levy, 2008). Two strictly incremental versions of tree-adjoining grammar (TAG; Joshi et al., 1975) which have been proposed in recent years are DV-TAG (Mazzei et al., 2007) and PLTAG (Demberg-Winterfors, 2010). Incremental syntax is, however, only of limited interest without a corresponding mechanism for calculating the incremental semantic interpretation. And for that semantic model to be practically useful in psycholinguistic modelling or NLP applications such as speech recognition or dialogue systems, we believe that the semantic representation should ideally be simple, flat and usefully underspecified, so that it can later be used in a context of compositional distributional semantics. We propose a framework in which semantic expressions are built synchronously with the syntactic tree. Simple rules are used to integrate an elementary tree's semantic expression with the semantic expression of the prefix tree at each stage. The semantic contribution of the new elementary tree is thereby added to the semantic output expression in a manner that closely reflects the order in which the semantic material has arrived. The necessary semantic annotation of elementary trees can be obtained from subcategorization frame information (PropBank, FrameNet). We use a Neo-Davidsonian event-based semantics with minimal recursion. Integrating incremental syntactic analysis with a framework of incremental semantic interpretation will allow one to model processing phenomena such as the decreased processing difficulty of (1-b) (after Steedman, 2000) in comparison to (1-a), by downranking the main verb analysis of sent when the subject (like flowers) is unlikely to fill the sender role. (1) a. The doctor sent for the patient arrived. b. The flowers sent for the patient arrived. Incrementally generating the semantic interpretation requires the underspecification of the output semantics given the syntax, such as underspecifying the number of arguments of a verb or (to a greater extent than for non-incremental derivations, as we will discuss below) the scope of quantifiers. This paper sets forth the initial proposal for this semantic formalism in terms of underlying desiderata, principles, and basic use cases. It provides one example derivation, and it outlines a way of dealing with the question of scope ambiguities, an issue which affects a number of aspects of the theoretical plausibility of a semantic formalism.
0
Statistical machine translation (SMT) systems are heavily dependent on parallel data. SMT doesn't work well when fewer than several million lines of bitext are available (Kolachina et al., 2012). When the available bitext is small, statistical models perform poorly due to the sparse word and phrase counts that define their parameters. Figure 1 gives a learning curve that shows this effect. [Figure 1: Performance on the Spanish to English translation task increases with increasing amounts of parallel data. Performance is measured with BLEU and drops drastically as the amount of bitext approaches zero. These results use the Europarl corpus and the Moses phrase-based SMT framework, but the trend shown is typical.] As the amount of bitext approaches zero, performance drops drastically. In this thesis, we seek to modify the SMT model to reduce its dependence on parallel data and, thus, enable it to apply to new language pairs. Specifically, we plan to address the following challenges that arise when using SMT systems in low-resource conditions: • Translating unknown words. In the context of SMT, unknown words (or out-of-vocabulary, OOV) are defined as having never appeared in the source side of the training parallel corpus. When the training corpus is small, the percentage of words which are unknown can be high. • Inducing phrase translations. In high-resource conditions, a word-aligned bitext is used to extract a list of phrase pairs or translation rules which are used to translate new sentences. With more parallel data, this list is increasingly comprehensive. Using multi-word phrases instead of individual words as the basic translation unit has been shown to increase translation performance (Koehn et al., 2003). However, when the parallel corpus is small, so is the number of phrase pairs that can be extracted. • Estimating translation probabilities. In the standard SMT pipeline, translation probabilities are estimated using relative frequency counts over the training bitext. However, when the bitext counts are sparse, probability estimates are unreliable. My thesis focuses on translating into English. We assume access to a small amount of parallel data, which is realistic, especially considering the recent success of crowdsourcing translations (Zaidan and Callison-Burch, 2011; Ambati, 2011; Post et al., 2012). Additionally, we assume access to larger monolingual corpora. Table 1 lists the 22 languages for which we plan to perform translation experiments, along with the total amount of monolingual data that we will use for each. We use web-crawled, time-stamped news articles and Wikipedia for each language. We have extracted the Wikipedia pages which are inter-lingually linked to English pages.
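To make the sparsity problem in the third challenge concrete, here is a hedged toy sketch of relative-frequency translation probability estimation over a hypothetical word-aligned bitext (the word pairs are invented, and this is not the thesis's system). With only a handful of aligned pairs, most estimates collapse to 0 or 1 and unseen source words get no translation at all.

```python
from collections import Counter, defaultdict

# (source word, English word) pairs read off word alignments of a tiny toy bitext
aligned_pairs = [
    ("casa", "house"), ("casa", "house"), ("casa", "home"),
    ("perro", "dog"), ("gato", "cat"),
]

counts = defaultdict(Counter)
for src, tgt in aligned_pairs:
    counts[src][tgt] += 1

def p_translate(tgt, src):
    """Relative-frequency estimate p(tgt | src) over the aligned pairs."""
    total = sum(counts[src].values())
    return counts[src][tgt] / total if total else 0.0

print(p_translate("house", "casa"))  # 2/3
print(p_translate("dog", "perro"))   # 1.0 -- degenerate estimate from one observation
print(p_translate("cat", "tigre"))   # 0.0 -- "tigre" is out of vocabulary
```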
0
Semantic relations between entities are essential for many NLP applications such as question answering, textual inference and information extraction (Ravichandran and Hovy, 2002; Szpektor et al., 2004). Therefore, it is important to build a comprehensive knowledge base consisting of instances of semantic relations (e.g., authorOf), such as authorOf ⟨Franz Kafka, The Metamorphosis⟩. To recognize these instances in a corpus, we need to obtain patterns (e.g., "X write Y") that signal instances of the semantic relations. For a long time, much research has targeted the extraction of instances and patterns of specific relations (Riloff, 1996; Pantel and Pennacchiotti, 2006; De Saeger et al., 2009). In recent years, in order to acquire a wider range of knowledge, Open Information Extraction (Open IE) has received much attention (Banko et al., 2007). Open IE identifies relational patterns and instances automatically, without predefined target relations (Banko et al., 2007; Wu and Weld, 2010; Fader et al., 2011; Mausam et al., 2012); in other words, Open IE acquires knowledge to handle open domains. In the Open IE paradigm, it is necessary to enumerate semantic relations in open domains and to learn mappings between surface patterns and semantic relations. This task is called unsupervised relation extraction (Hasegawa et al., 2004; Shinyama and Sekine, 2006; Rosenfeld and Feldman, 2007). A common approach to unsupervised relation extraction builds clusters of patterns that represent the same relation (Hasegawa et al., 2004; Shinyama and Sekine, 2006; Yao et al., 2011; Min et al., 2012; Rosenfeld and Feldman, 2007; Nakashole et al., 2012). In brief, each cluster includes patterns corresponding to a semantic relation. For example, consider three patterns, "X write Y", "X is author of Y" and "X is located in Y". When we group these patterns into clusters representing the same relation, the patterns "X write Y" and "X is author of Y" form a cluster representing the relation authorOf, and the pattern "X is located in Y" forms a cluster for locatedIn. In order to obtain these clusters, we need to know the similarity between patterns. The better we model the similarity of patterns, the better a clustering result corresponds to semantic relations. Thus, the similarity computation between patterns is crucial for unsupervised relation extraction. We face two major challenges in computing the similarity of patterns. First, it is not clear how to represent the semantic meaning of a relational pattern. Previous studies define a feature space for patterns and express the meaning of patterns using, for example, the co-occurrence statistics between a pattern and an entity pair, such as co-occurrence frequency and pointwise mutual information (PMI) (Lin and Pantel, 2001). Some studies employed vector representations of a fixed dimension, e.g., Principal Component Analysis (PCA) (Collins et al., 2002) and Latent Dirichlet Allocation (LDA) (Yao et al., 2011; Riedel et al., 2013). However, previous work did not compare the effectiveness of these representations when applied to a collection of large-scale unstructured texts. Second, we need to design a method that is scalable to large data. In Open IE, we utilize a large amount of data in order to improve the quality of unsupervised relation extraction. For this reason, we cannot use a complex and inefficient algorithm that consumes computation time and memory storage.
In this paper, we explore methods for computing pattern similarity of good quality that are scalable to huge data, for example, with several billion sentences. In order to achieve this goal, we utilize approximate frequency counting and dimension reduction. Our contributions are threefold.• We build a system for unsupervised relation extraction that is practical and scalable to large data.• Even though the proposed system introduces approximations, we demonstrate that the system exhibits the performance comparable to the one without approximations.• Comparing several representations of pattern vectors, we discuss a reasonable design for representing the meaning of a pattern.
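As a small illustration of one of the representations discussed above, the sketch below builds positive-PMI vectors over entity-pair co-occurrences for each pattern and compares patterns by cosine similarity. The counts are invented, and the code omits the approximate counting and dimension reduction that the full system would need at the scale of billions of sentences.

```python
import math
from collections import Counter

# pattern -> Counter of (X, Y) entity-pair co-occurrence counts (toy values)
cooc = {
    "X write Y":         Counter({("Kafka", "The Metamorphosis"): 5, ("Orwell", "1984"): 3}),
    "X is author of Y":  Counter({("Kafka", "The Metamorphosis"): 4, ("Orwell", "1984"): 2}),
    "X is located in Y": Counter({("Prague", "Czechia"): 6}),
}

total = sum(sum(c.values()) for c in cooc.values())
pat_tot = {p: sum(c.values()) for p, c in cooc.items()}
pair_tot = Counter()
for c in cooc.values():
    pair_tot.update(c)

def pmi_vector(pattern):
    """Sparse positive-PMI vector of a pattern over the entity pairs it occurs with."""
    vec = {}
    for pair, n in cooc[pattern].items():
        pmi = math.log((n * total) / (pat_tot[pattern] * pair_tot[pair]))
        vec[pair] = max(pmi, 0.0)
    return vec

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = lambda w: math.sqrt(sum(x * x for x in w.values()))
    return dot / (norm(u) * norm(v)) if u and v else 0.0

print(cosine(pmi_vector("X write Y"), pmi_vector("X is author of Y")))   # high
print(cosine(pmi_vector("X write Y"), pmi_vector("X is located in Y")))  # 0.0
```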
0
One of the key advantages of word embeddings for natural language processing is that they enable generalization to words that are unseen in labeled training data, by embedding lexical features from large unlabeled datasets into a relatively low-dimensional Euclidean space. These low-dimensional embeddings are typically trained to capture distributional similarity, so that information can be shared among words that tend to appear in similar contexts. However, it is not possible to enumerate the entire vocabulary of any language, and even large unlabeled datasets will miss terms that appear in later applications. The issue of how to handle these out-of-vocabulary (OOV) words poses challenges for embedding-based methods. These challenges are particularly acute when working with lowresource languages, where even unlabeled data may be difficult to obtain at scale. A typical solution is to abandon hope, by assigning a single OOV embedding to all terms that do not appear in the unlabeled data.We approach this challenge from a quasigenerative perspective. Knowing nothing of a word except for its embedding and its written form, we attempt to learn the former from the latter. We train a recurrent neural network (RNN) on the character level with the embedding as the target, and use it later to predict vectors for OOV words in any downstream task. We call this model the MIMICK-RNN, for its ability to read a word's spelling and mimick its distributional embedding.Through nearest-neighbor analysis, we show that vectors learned via this method capture both word-shape features and lexical features. As a result, we obtain reasonable near-neighbors for OOV abbreviations, names, novel compounds, and orthographic errors. Quantitative evaluation on the Stanford RareWord dataset (Luong et al., 2013) provides more evidence that these character-based embeddings capture word similarity for rare and unseen words.As an extrinsic evaluation, we conduct experiments on joint prediction of part-of-speech tags and morphosyntactic attributes for a diverse set of 23 languages, as provided in the Universal Dependencies dataset (De Marneffe et al., 2014) . Our model shows significant improvement across the board against a single UNK-embedding backoff method, and obtains competitive results against a supervised character-embedding model, which is trained end-to-end on the target task. In low-resource settings, our approach is particularly effective, and is complementary to supervised character embeddings trained from labeled data. The MIMICK-RNN therefore provides a useful new tool for tagging tasks in settings where there is limited labeled data. Models and code are available at www.github.com/ yuvalpinter/mimick .
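A rough PyTorch sketch of the idea described above may be useful; it is not the released implementation, and the dimensions, alphabet, and single training example are illustrative. A character-level LSTM reads a word's spelling and is trained with a mean-squared-error loss to regress onto the word's pre-trained embedding, so at test time it can produce vectors for OOV words from their written form alone.

```python
import torch
import torch.nn as nn

class CharToVec(nn.Module):
    """Read a word character by character and predict its embedding vector."""
    def __init__(self, n_chars, char_dim=32, hidden=64, emb_dim=100):
        super().__init__()
        self.chars = nn.Embedding(n_chars, char_dim)
        self.lstm = nn.LSTM(char_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, emb_dim)

    def forward(self, char_ids):            # char_ids: (batch, word_length)
        h, _ = self.lstm(self.chars(char_ids))
        return self.out(h[:, -1, :])        # states at the final timestep

alphabet = {c: i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz")}
model = CharToVec(n_chars=len(alphabet))
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

word = torch.tensor([[alphabet[c] for c in "mimick"]])
target = torch.randn(1, 100)                # stands in for a pre-trained embedding
loss = loss_fn(model(word), target)
loss.backward()
opt.step()
print(loss.item())
```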
0
Essay writing is a common task evaluated in schools and universities. In this task, students are typically given a prompt or essay topic to write about. Essay writing is included in high-stakes assessments, such as Test of English as a Foreign Language (TOEFL) and Graduate Record Examination (GRE). Manually grading all essays takes a lot of time and effort for the graders. This is what Automated Essay Scoring (AES) systems are trying to alleviate.Automated Essay Scoring uses computer software to automatically evaluate an essay written in an educational setting by giving it a score. Work related to essay scoring can be traced back to 1966 when Ellis Page created a computer grading software called Project Essay Grade (PEG). Research on AES has continued through the years.The recent Automated Student Assessment Prize (ASAP) Competition 1 sponsored by the Hewlett Foundation in 2012 has renewed interest on this topic. The agreement between the scores assigned by state-of-the-art AES systems and the scores assigned by human raters has been shown to be relatively high. See Shermis and Burstein (2013) for a recent overview of AES.AES is usually treated as a supervised machine learning problem, either as a classification, regression, or rank preference task. Using this approach, a training set in the form of human graded essays is needed. However, human graded essays are not readily available. This is perhaps why research in this area was mostly done by commercial organizations. After the ASAP competition, research interest in this area has been rekindled because of the released dataset.Most of the recent AES related work is promptspecific. That is, an AES system is trained using essays from a specific prompt and tested against essays from the same prompt. These AES systems will not work as well when tested against a different prompt. Furthermore, generating the training data each time a new prompt is introduced will be costly and time consuming.In this paper, we propose domain adaptation as a solution to this problem. Instead of hiring people to grade new essays each time a new prompt is introduced, domain adaptation can be used to adapt the old prompt-specific system to suit the new prompt. This way, a smaller number of training essays from the new prompt is needed. In this paper, we propose a novel domain adaptation technique based on Bayesian linear ridge regression.The rest of this paper is organized as follows. In Section 2, we give an overview of related work on AES and domain adaptation. Section 3 describes the AES task and the features used. Section 4 presents our novel domain adaptation algorithm.Section 5 describes our data, experimental setup, and evaluation metric. Section 6 presents and discusses the results. We conclude in Section 7.
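Since the proposed domain adaptation technique (Section 4) builds on Bayesian linear ridge regression, a hedged baseline sketch may help make the underlying regression setup concrete. The snippet below is only a prompt-specific baseline with toy essays, toy scores, and simple n-gram features; it is not the paper's adaptation algorithm or feature set.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import BayesianRidge

essays = ["The author argues that technology improves education ...",
          "Computers in school distract students because ..."]
scores = [4.0, 2.0]   # hypothetical human-assigned scores

vec = TfidfVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(essays).toarray()     # BayesianRidge expects a dense matrix

model = BayesianRidge()
model.fit(X, scores)

new_essay = ["Technology helps teachers give faster feedback ..."]
print(model.predict(vec.transform(new_essay).toarray()))
```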
0
Native Language Identification (NLI), in which an author's first language is derived by analyzing texts written in his or her second language, is often treated as a text classification problem. NLI has proven useful in various applications, including in language-learning settings. As it is well established that a speaker's first language informs mistakes made in a second language, a system that can identify a learner's first language is better equipped to provide learner-specific feedback and identify likely problem areas. The Treebank of Learner English (TLE) is the first publicly available syntactic treebank for English as a Second Language (Berzak et al., 2016). One particularly interesting feature of the TLE is its incorporation of an annotation scheme for a consistent syntactic representation of grammatical errors. This annotation system has the potential to be useful for native language identification, as the ability to parse ungrammatical and atypical dependency relations could improve the informativeness of dependency-based features in such a classification task. Assessing this potential has been accomplished by training a parser on the original treebank and using it to extract dependency relations in a learner English corpus. Those dependency relations were then used as features in a machine learning classification task. The success of this classification was then assessed by comparing the results to a classification on features extracted by a parser trained on the error-corrected version of the treebank, based on the assumption that the original version of the treebank will more accurately handle grammatical errors in learner texts. This is a novel approach in that other similar experiments have used dependency parsers trained on grammatical treebanks to extract dependency relations. We found that using the original version of the corpus gave slightly better results on native language classification than using the error-corrected version. However, when we investigated parsing results, the original version gave much lower results on parsing both original and error-corrected texts. This seems to suggest that there is useful information in the types of errors made by this parser.
0
This paper presents the current status of development and the main motivations of an open-source shallow-transfer machine translation (MT) engine for the Romance languages of Spain (the main ones being Spanish (es), Catalan (ca) and Galician (gl)), as part of a larger government-funded project which will also include MT engines for non-Romance languages such as Basque (eu) and which involves four universities and three linguistic technology enterprises: TALP (Universitat Politècnica de Catalunya), SLI (Universidade de Vigo), Transducens (Universitat d'Alacant), IXA (Euskal Herriko Unibertsitatea), imaxin|software (Santiago de Compostela), Elhuyar Fundazioa (Usurbil), and Eleka Ingeniaritza Linguistikoa (Usurbil, coordinator). (Most scholars consider Galician and Portuguese (pt) the same language; however, the official orthography of Galician is very different from the ones used for European and Brazilian Portuguese. Therefore, while grammatical resources will be rather reusable, lexical resources will not easily be.) The shallow-transfer architecture will also be suitable for other pairs of closely related languages which are not Romance, for example Czech-Slovak, Danish-Swedish, etc. The multilingual nature of Spain is recognized, to a varying extent, in laws and regulations corresponding to the various levels of government (the Constitution of Spain and the Statutes of Autonomy granted to Aragon, the Balearic Islands, Catalonia and Valencia (ca), Galicia (gl), and Navarre and the Basque Country (eu)). On the one hand, demand by many citizens in these territories makes private companies increasingly interested in generating information (documentation for products and services, customer support, etc.) in languages different from Spanish. On the other hand, the various levels of government (national, autonomic, provincial, municipal) must respect, in the mentioned territories, the linguistic rights recognized to their citizens and promote the use of such languages. Machine translation is a key technology to meet these goals and demands. Existing MT programs for the es-ca and es-gl pairs (there are no programs for the es-eu pair) are mostly commercial or use proprietary technologies, which makes them very hard to adapt to new usages, and they use different technologies across language pairs, which makes it very difficult to integrate them in a single multilingual content management system. The MT architecture proposed here uses finite-state transducers for lexical processing, hidden Markov models for part-of-speech tagging, and finite-state based chunking for structural transfer, and is largely based upon that of systems already developed by the Transducens group, such as interNOSTRUM (Spanish-Catalan, Canals-Marote et al. 2001) and Traductor Universia (Spanish-Portuguese, Garrido-Alenda et al. 2003); these systems are publicly accessible through the net and used on a daily basis by thousands of users. One of the main novelties of this architecture is that it will be released under an open-source license (there will be two different licenses: one for the machine translation engine and tools, and another one for the linguistic data), together with pilot linguistic data derived from other open-source projects such as Freeling (Carreras et al. 2004) or created specially for this purpose, and will be distributed free of charge. This means that anyone having the necessary computational and linguistic
skills will be able to adapt or enhance it to produce a new MT system, even for other pairs of related languages. The whole system will be released at the beginning of 2006. We expect that the introduction of a unified open-source MT architecture will ease some of the mentioned problems (having different technologies for different pairs, closed-source architectures being hard to adapt to new uses, etc.). It will also help shift the current business model from a licence-centred one to a services-centred one, and favour the interchange of existing linguistic data through the use of the XML-based formats defined in this project. It has to be mentioned that this is the first time that the government of Spain funds a large project of this kind, although the adoption of open-source software by administrations in Spain is not new. The following sections give an overview of the architecture (sec. 2), the formats defined for the encoding of linguistic data (sec. 3), and the compilers used to convert these data into an executable form (sec. 4); finally, we give some concluding remarks (sec. 5).
0
Compounds are extremely common in Icelandic, accounting for over 88% of all words in the Database of Icelandic Morphology (DIM) (Bjarnadóttir, 2017; Bjarnadóttir et al., 2019). As compounding is so productive, new compounds frequently occur as out-of-vocabulary (OOV) words, which may adversely affect the performance of NLP tools. Furthermore, Icelandic is a morphologically rich language with a complex inflectional system: there are 16 inflectional categories (i.e., word forms with unique part-of-speech (PoS) tags) for nouns, 120 for adjectives, and 122 for verbs, excluding impersonal constructions. The average number of inflectional forms per headword in DIM is 21.7. Included in this average are all uninflected words as well as inflectional variants, i.e., dual word forms with the same PoS tag. Compounds are formed by combining two words, which may be compounds themselves. The former is known as the modifier and the latter as the head, assuming binary branching (Bjarnadóttir, 2005). Theoretically, there is no limit to how many constituents a compound can be composed of, although very long words such as uppáhaldseldhúsinnréttingaverslunin 'the favorite kitchen furniture store' (containing 7 constituent parts) are rare. The constituent structure of a compound word can be represented by a full binary tree, as shown in Figure 1. Compound splitting, or decompounding, is the process of breaking compound words into their constituent parts. This can significantly reduce the number of OOV words for languages where compounding is productive. Compound splitting has been shown to be effective for a variety of tasks, such as machine translation (Brown, 2002; Koehn and Knight, 2003), speech recognition (Adda-Decker and Adda, 2000) and information retrieval (Braschler et al., 2003). In this paper, we present a character-based bidirectional long short-term memory (BiLSTM) model for splitting Icelandic compound words, and evaluate its performance for varying amounts of training data. Our model is trained on a dataset of 2.9 million unique word forms and their constituent structures from DIM. The model learns how to split compound words into two parts and can be used to derive the constituent structure of any word form. The model outperforms other previously published methods when evaluated on a corpus of manually split word forms. Our method has been integrated into Kvistur, an Icelandic compound word analyzer. Finally, preliminary experiments show that our model performs very well when evaluated on a closely related language, Faroese.
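A simplified PyTorch sketch of a character-level BiLSTM boundary tagger follows; the dimensions, alphabet, and single training example are illustrative, and this is not the Kvistur model itself. Each character position is labelled 1 if a constituent boundary begins immediately after it, 0 otherwise.

```python
import torch
import torch.nn as nn

class SplitTagger(nn.Module):
    """Per-character binary tagger: does a constituent boundary follow this character?"""
    def __init__(self, n_chars, char_dim=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(n_chars, char_dim)
        self.lstm = nn.LSTM(char_dim, hidden, batch_first=True, bidirectional=True)
        self.clf = nn.Linear(2 * hidden, 2)

    def forward(self, chars):               # chars: (batch, word_len)
        h, _ = self.lstm(self.emb(chars))
        return self.clf(h)                  # (batch, word_len, 2) boundary logits

alphabet = "abcdefghijklmnopqrstuvwxyzáéíóúýðþæö"
idx = {c: i for i, c in enumerate(alphabet)}
word = "eldhúsborð"                          # eldhús + borð ('kitchen table')
x = torch.tensor([[idx[c] for c in word]])
y = torch.tensor([[0, 0, 0, 0, 0, 1, 0, 0, 0, 0]])  # boundary after "eldhús"

model = SplitTagger(len(idx))
loss = nn.CrossEntropyLoss()(model(x).view(-1, 2), y.view(-1))
loss.backward()
print(loss.item())
```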
0
For Asian languages such as Japanese and Chinese that do not contain explicitly marked word boundaries, word segmentation is an important first step for many subsequent language processing tasks, such as POS tagging, parsing, semantic role labeling, and various applications. Previous studies for POS tagging and syntax parsing on these languages sometimes assume that gold standard word segmentation information is provided, which is not the real scenario. In a fully automatic system, a pipeline approach is often adopted, where raw sentences are first segmented into word sequences, then POS tagging and parsing are performed. This kind of approach suffers from error propagation. For example, word segmentation errors will result in tagging and parsing errors. Additionally, early modules cannot use information from subsequent modules. Intuitively a joint model that performs the three tasks together should help the system make the best decisions.In this paper, we propose a unified model for joint Chinese word segmentation, POS tagging, and parsing. Three sub-models are independently trained using the state-of-the-art methods. We do not use the joint inference algorithm for training because of the high complexity caused by the large amount of parameters. We use linear chain Conditional Random Fields (CRFs) (Lafferty et al., 2001) to train the word segmentation model and POS tagging model, and averaged perceptron (Collins, 2002) to learn the parsing model. During decoding, parameters of each sub-model are scaled to represent its importance in the joint model. Our decoding algorithm is an extension of CYK parsing. Initially, weights of all possible words together with their POS tags are calculated. When searching the parse tree, the word and POS tagging features are dynamically generated and the transition information of POS tagging is considered in the span merge operation.Experiments are conducted on Chinese Tree Bank (CTB) 5 dataset, which is widely used for Chinese word segmentation, POS tagging and parsing. We compare our proposed joint model with the pipeline system, both built using the state-of-the-art submodels. We also propose an evaluation metric to calculate the bracket scores for parsing in the face of word segmentation errors. Our experimental results show that the joint model significantly outperforms the pipeline method based on the state-of-the-art sub-models.
0
Traditional accounts of ambiguity have generally assumed that each use of a linguistic expression has a unique intended interpretation in context, and attempted to develop a model to determine it (Nakov and Hearst, 2005; Brill and Resnik, 1994) . However, disambiguation is not always appropriate or even desirable (Poesio and Artstein, 2008) . Ambiguous text may be interpreted differently by different readers, with no consensus about which reading is the intended one. Attempting to assign a preferred interpretation may therefore be inappropriate. Misunderstandings among readers do occur and may have undesir-able consequences. In requirements engineering processes, for example, this results in costly implementation errors (Boyd et al., 2005) .Nonetheless, most text does not lead to significant misinterpretation. Our research aims to establish a model that estimates how likely an ambiguity is to lead to misunderstandings. Our previous work on nocuous ambiguity (Chantree et al., 2006; Willis et al., 2008) cast ambiguity not as a property of a text, but as a property of text in relation to a set of stakeholders. We drew on human judgments -interpretations held by a group of readers of a text -to establish criteria for judging the presence of nocuous ambiguity. An ambiguity is innocuous if it is read in the same way by different people, and nocuous otherwise. The model was tested on co-ordination ambiguity only.In this paper, we implement, refine and extend the model. We investigate two typical ambiguity types arising from coordination and anaphora. We extend the previous work (Willis et al., 2008) with additional heuristics, and refine the concept of ambiguity threshold. We experiment with alternative machine learning algorithms to find optimal ways of combining the output of the heuristics. Yang et al. (2010a) describes a complete implementation in a prototype tool running on full text. Here we present our experimental results, to illustrate and evaluate the extended methodology.The rest of the paper is structured as follows. Section 2 introduces the methodology for automatic detection of nocuous ambiguity. Sections 3 and 4 provide details on how the model is applied to coordination and anaphora ambiguity. Experimental setup and results are reported in Section 5, and discussed in Section 6. Section 7 reports on related work. Conclusions and future work are found in Section 8.
0
Documents often appear within a network structure: social media mentions, retweets, and follower relationships; Web pages by hyperlinks; scientific papers by citations. Network structure interacts with the topics in the text, in that documents linked in a network are more likely to have similar topic distributions. For instance, a citation link between two papers suggests that they are about a similar field, and a mentioning link between two social media users often indicates common interests. Conversely, documents' similar topic distributions can suggest links between them. For example, topic model (Blei et al., 2003, LDA) and block detection papers (Holland et al., 1983) are relevant to our research, so we cite them. Similarly, if a social media user A finds another user B with shared interests, then A is more likely to follow B.Our approach is part of a natural progression of network modeling in which models integrate more information in more sophisticated ways. Some past methods only consider the network itself (Kim and Leskovec, 2012; Liben-Nowell and Kleinberg, 2007) , which loses the rich information in text. In other cases, methods take both links and text into account (Chaturvedi et al., 2012) , but they are modeled separately, not jointly, limiting the model's ability to capture interactions between the two. The relational topic model (Chang and Blei, 2010, RTM) goes further, jointly modeling topics and links, but it considers only pairwise document relationships, failing to capture network structure at the level of groups or blocks of documents.We propose a new joint model that makes fuller use of the rich link structure within a document network. Specifically, our model embeds the weighted stochastic block model (Aicher et al., 2014, WSBM) to identify blocks in which documents are densely connected. WSBM basically categorizes each item in a network probabilistically as belonging to one of L blocks, by reviewing its connections with each block. Our model can be viewed as a principled probabilistic extension of Yang et al. (2015) , who identify blocks in a document network deterministically as strongly connected components (SCC). Like them, we assign a distinct Dirichlet prior to each block to capture its topical commonalities. Jointly, a linear regression model with a discriminative, max-margin objective function (Zhu et al., 2012; Zhu et al., 2014) is trained to reconstruct the links, taking into account the features of documents' topic and word distributions (Nguyen et al., 2013) , block assignments, and inter-block link rates.We validate our approach on a scientific paper abstract dataset and collection of webpages, with citation links and hyperlinks respectively, to predict links among previously unseen documents and from those new documents to training documents. Embedding the WSBM in a network/topic model leads to substantial improvements in link prediction over previous models; it also improves block detection and topic interpretability. The key advantage in embedding WSBM is its flexibility and robustness in the face of noisy links. Our results also lend additional support for using maxmargin learning for a "downstream" supervised topic model (McAuliffe and Blei, 2008) , and that predictions from lexical as well as topic features improves performance (Nguyen et al., 2013) .The rest of this paper is organized as follows. Section 2 introduces two previous link-modeling methods, WSBM and RTM. 
Section 3 presents our methods to incorporate block priors in topic modeling and include various features in link prediction, as well as the aggregated discriminative topic model whose posterior inference is introduced in Section 4. In Section 5 we show how our model can improve link prediction and (often) improve topic coherence.
0
Natural language understanding (NLU) refers to the ability of a system to 'comprehend' the meaning (semantics) and the structure (syntax) of human language to enable interaction with a system or device. Cross-lingual natural language understanding (XNLU) alludes to a system that is able to handle multiple languages simultaneously (Artetxe and Schwenk, 2019; Hu et al., 2020). We focus on task-oriented XNLU, which comprises two correlated objectives: i) Intent Classification, which identifies the type of user command, e.g. 'edit_reminder', 'send_message' or 'play_music', and ii) Entity/Slot Recognition, which identifies relevant entities in the utterance including their types, such as dates, messages, music tracks, locations, etc. In a modular dialogue system, this information is used by the dialogue manager to decide how to respond to the user (Casanueva et al., 2017). For neural XNLU systems, the limited availability of annotated data is a significant barrier to scaling dialogue systems to more users (Razumovskaia et al., 2021). Therefore, we can use cross-lingual methods to zero-shot transfer the knowledge learnt in a high-resource language such as English to the target language of choice (Artetxe et al., 2020). To this end, we introduce a variety of alignment methods for zero-shot cross-lingual transfer, most notably CrossAligner. Our methods leverage unlabelled parallel data and can be easily integrated on top of a pretrained language model, referred to as an XLM, such as XLM-RoBERTa (Conneau et al., 2020). Our methods help the XLM align its cross-lingual representations while optimising the primary XNLU tasks, which are learned only in the source language and transferred zero-shot to the target language. Finally, we also investigate the effectiveness of simple and weighted combinations of multiple alignment losses, which leads to further model improvements and insights. Our contributions are summarised as follows: • We introduce CrossAligner, a cross-lingual transfer method that achieves SOTA performance on three benchmark XNLU datasets. • We introduce Translate-Intent, a simple and effective baseline, which outperforms its commonly used counterpart 'Translate-Train'. • We introduce Contrastive Alignment, an auxiliary loss that leverages contrastive learning at a much smaller scale than past work (a generic sketch of this kind of loss is given below). • We introduce weighted combinations of the above losses to further improve SOTA scores. • Qualitative analysis aims to guide future research by examining the remaining errors.
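The sketch below is a generic InfoNCE-style contrastive objective over parallel sentence pairs, written in PyTorch; it is meant only to illustrate the kind of auxiliary alignment loss referred to in the third contribution, and is not the exact CrossAligner or Contrastive Alignment formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment(src_vecs, tgt_vecs, temperature=0.1):
    """src_vecs, tgt_vecs: (batch, dim) representations of parallel sentence pairs.
    Each source should be closer to its own translation than to others in the batch."""
    src = F.normalize(src_vecs, dim=-1)
    tgt = F.normalize(tgt_vecs, dim=-1)
    logits = src @ tgt.t() / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(src.size(0))           # i-th source matches i-th target
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

# Toy usage with random "encoder outputs" standing in for XLM sentence vectors.
loss = contrastive_alignment(torch.randn(8, 768), torch.randn(8, 768))
print(loss.item())
```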
0
In the past decade, new forms of communication, such as microblogging and text messaging, have emerged and become ubiquitous. These short messages are often used to share opinions and sentiments. The Sentiment Analysis in Twitter task promotes research that will lead to a better understanding of how sentiment is conveyed in tweets and texts. In this paper, we describe our contribution to task 2 of SemEval 2013 (Wilson et al., 2013). For the Contextual Polarity Disambiguation subtask, covered in Section 2, we use a system that combines a lexicon-based approach to sentiment detection with two types of supervised learning methods, one used for polarity shift identification and one for tweet segment classification in the absence of lexicon words. The third section presents the Message Polarity Classification subtask. We focus here on the influence of domain information on sentiment classification by detecting words that change their polarity across domains.
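As a toy illustration of the lexicon-based component with a simple polarity shift, the sketch below sums word polarities and flips the sign of a word that follows a negator. The lexicon, the negator list, and the shift rule are invented for illustration and are not the system's actual resources.

```python
LEXICON = {"good": 1, "great": 2, "love": 2, "bad": -1, "awful": -2, "hate": -2}
NEGATORS = {"not", "never", "no"}

def polarity(tweet):
    """Sum lexicon polarities, flipping the word right after a negator."""
    score, flip = 0, False
    for tok in tweet.lower().split():
        if tok in NEGATORS:
            flip = True
            continue
        if tok in LEXICON:
            score += -LEXICON[tok] if flip else LEXICON[tok]
        flip = False
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(polarity("I do not love this phone"))   # negative
```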
0
For speakers of a language whose nouns have no gender (such as modern English), making the leap to a language that does (such as German) does not come easily. With no or few rules or heuristics to guide him, the language learner will try to draw on the "obvious" parallel between grammatical and natural gender, and will be immediately baffled to learn that girl - Mädchen - is neuter in German. Furthermore, one may refer to the same object using words with different gender: car can be called (das) Auto (neuter) or (der) Wagen (masculine). Imagine that after hard work, the speaker has mastered gender in German, and now wishes to proceed with a Romance language, for example Italian or Spanish. He is now confronted with the task of relearning to assign gender in these new languages, made more complex by the fact that gender does not match across languages: e.g. sun - feminine in German (die Sonne), but masculine in Spanish (el sol), Italian (il sole) and French (le soleil); moon - masculine in German (der Mond), but feminine in Spanish (la luna), Italian (la luna) and French (la lune). Gender doesn't even match within a single language family: travel - masculine in Spanish (el viaje) and Italian (il viaggio), but feminine in Portuguese (a viagem). Grammatical gender groups nouns in a language into distinct classes. There are languages whose nouns are grouped into more or fewer than three classes. English, for example, has none, and makes no distinction based on gender, although Old English did have three genders and some traces remain (e.g. blonde, blond). Linguists assume several sources for gender: (i) a first set of nouns which have natural gender and which have associated matching grammatical gender; (ii) nouns that resemble (somehow) the nouns in the first set, and acquire their grammatical gender through this resemblance. Italian and Romanian, for example, have strong and reliable phonological correlates (Vigliocco et al., 2004b, for Italian; Doca, 2000, for Romanian). In Romanian, the majority of feminine nouns end in -ă or -e. Some rules exist for German as well (Schumann, 2006); for example, nouns ending in -tät, -ung, -e, -enz, -ur, -keit, -in tend to be feminine. Also, when specific morphological processes apply, there are rules that dictate the gender of the newly formed word. This process explains why Frau (woman) is feminine in German, while Fräulein (little woman, miss) is neuter - Fräulein = Frau + lein. The existing rules have exceptions, and there are numerous nouns in the language which are not derived and to which such suffixes do not apply. Words are names used to refer to concepts. The fact that the same concept can be referred to using names that have different gender - as is the case for car in German - indicates that at least in some cases, grammatical gender is in the name and not in the concept. We test this hypothesis - that the gender of a noun is in its word form, and that this goes beyond word endings - using noun gender data for German and Romanian. Both Romanian and German have 3 genders: masculine, feminine and neuter. The models built using machine learning algorithms classify test nouns into gender classes based on their form with high accuracy. These results support the hypothesis that in gendered languages, the word form is a strong clue for gender.
This supplements the situation in which some concepts have natural gender that matches their grammatical gender: it allows for an explanation where there is no such match, either directly perceived or induced through literary devices. The present research has both theoretical and practical benefits. From a theoretical point of view, it contributes to research on phonology and gender, in particular by going a step further in understanding the link between the two. From a practical perspective, such a connection between gender and sounds could be exploited in advertising, in particular in product naming, to build names that fit a product and which are appealing to the desired customers. Studies have shown that especially in the absence of meaning, the form of a word can be used to generate specific associations and stimulate the imagination of prospective customers (Sells and Gonzales, 2003; Bedgley, 2002; Botton et al., 2002).
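A minimal sketch of the kind of form-based classifier used to test this hypothesis is given below; the handful of lowercase German nouns, the labels, and the hyperparameters are illustrative toy choices, not the paper's experimental setup. Character n-grams (including word endings) feed a linear classifier that predicts grammatical gender.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

words = ["sonne", "mond", "wagen", "blume", "zeitung", "mädchen", "fräulein", "auto"]
genders = ["f", "m", "m", "f", "f", "n", "n", "n"]

clf = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # boundary-aware char n-grams
    LogisticRegression(max_iter=1000),
)
clf.fit(words, genders)

# With a realistic training set (thousands of nouns), endings such as -ung and
# -lein would reliably pull the prediction towards feminine and neuter respectively.
print(clf.predict(["erfahrung", "bäumlein"]))
```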
0
In recent years, rich contextual embeddings such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) have enabled rapid progress on benchmarks like GLUE (Wang et al., 2019a) and have seen widespread industrial use (Pandu Nayak, 2019). However, these methods require significant computational resources (memory, time) during pretraining, and during downstream task training and inference. Thus, an important research problem is to understand when these contextual embeddings add significant value vs. when it is possible to use more efficient representations without significant degradation in performance. As a first step, we empirically compare the performance of contextual embeddings with classic embeddings like word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014). To further understand what performance gains are attributable to improved embeddings vs. the powerful downstream models that leverage them, we also compare with a simple baseline, fully random embeddings, which encode no semantic or contextual information whatsoever. Surprisingly, we find that in highly optimized production tasks at a major technology company, both classic and random embeddings have competitive (or even slightly better!) performance than the contextual embeddings. To better understand these results, we study the properties of NLP tasks for which contextual embeddings give large gains relative to non-contextual embeddings. In particular, we study how the amount of training data and the linguistic properties of the data impact the relative performance of the embedding methods, with the intuition that contextual embeddings should give limited gains on data-rich, linguistically simple tasks. In our study on the impact of training set size, we find in experiments across a range of tasks that the performance of the non-contextual embeddings (GloVe, random) improves rapidly as we increase the amount of training data, often attaining within 5 to 10% accuracy of BERT embeddings when the full training set is used. This suggests that for many tasks these embeddings could likely match BERT given sufficient data, which is precisely what we observe in our experiments with industry-scale data. Given the computational overhead of contextual embeddings, this exposes important trade-offs between the computational resources required by the embeddings, the expense of labeling training data, and the accuracy of the downstream model. To better understand when contextual embeddings give large boosts in performance, we identify three linguistic properties of NLP tasks which help explain when these embeddings will provide gains: • Complexity of sentence structure: How interdependent are different words in a sentence? • Ambiguity in word usage: Are words likely to appear with multiple labels during training? • Prevalence of unseen words: How likely is encountering a word never seen during training? Intuitively, these properties distinguish between NLP tasks involving simple and formulaic text (e.g., assistant commands) vs. more unstructured and lexically diverse text (e.g., literary novels). We show on both sentiment analysis and NER tasks that contextual embeddings perform significantly better on more complex, ambiguous, and unseen language, according to proxies for these properties. Thus, contextual embeddings are likely to give large gains in performance on tasks with a high prevalence of this type of language.
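For concreteness, this is roughly what the random-embedding baseline amounts to in PyTorch; the vocabulary and dimension below are illustrative, and the downstream model that consumes these vectors is whatever the task would otherwise pair with GloVe or BERT features.

```python
import torch
import torch.nn as nn

vocab_size, dim = 20000, 300
random_emb = nn.Embedding(vocab_size, dim)   # randomly initialized lookup table
random_emb.weight.requires_grad_(False)      # kept frozen, like pre-trained vectors

tokens = torch.randint(0, vocab_size, (2, 12))   # a toy batch of token ids
print(random_emb(tokens).shape)                  # (2, 12, 300) -> downstream task model
```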
0
Reasoning is an important part of human logical thinking. It gives us the ability to draw fresh conclusions from some of the known points (Judea, 1988). Argument is the basis for reasoning. Besides its claim and reason, an argument usually needs some additional information; what we are given, therefore, is this additional information together with the argument's reason. The claim also needs a warrant to explain it. An example is shown in Table 1. Obviously, A is a reasonable explanation. The task asks the reader to find, among the two warrants, a reasonable explanation for the known information and the claim. Because the number of alternative warrants is small, this problem can be considered a binary classification problem, and this idea can be used as the baseline model. However, for system scalability and effectiveness, we treat the problem as a regression problem of probability prediction: we calculate, for each warrant, the probability that it is correct. Because of the diversity of natural language expression, the same meaning can be expressed in many ways, and this approach can better address this situation (Collobert et al., 2011). Another benefit of addressing the problem in this way is that it makes the problem similar in form to a multi-choice question-answering system. Question answering is a classic problem in natural language processing, and many of its methods and models can be used for reference. Traditional question-answering systems are based on semantic and statistical methods (Alfonseca et al., 2002). Such methods require an enormous background knowledge base and are not very effective for non-standard language expression. The state-of-the-art methods are usually based on neural networks: trained word embeddings can fully express the semantics and knowledge, so the newer methods are usually better than the traditional statistics-based ones. In this paper, we propose a bi-directional LSTM with an attention model. The model uses a bi-LSTM network to encode the original word embeddings. Then, the semantic outputs are fed into the dense decoder with an attention mechanism. Due to the uncertainty of a single model, ensemble learning is used to enhance the performance of the model. The remainder of the paper consists of three parts. The second part introduces the proposed model in detail, the implementation is presented in the third part, and the last part presents our conclusions.
0
The Internet has been surging in popularity as well as general availability. This has considerably increased the amount of user-generated content present online, which has, however, brought up a few issues. One of these issues is hate speech detection, as manual detection has been made nearly impossible by the quantity of data; the only real solution is automated hate speech detection. Our task is the detection of hate speech towards immigrants and women on Twitter (Task A). Hate speech can be defined as "Any communication that disparages a person or a group on the basis of some characteristic such as race, color, ethnicity, gender, sexual orientation, nationality, religion, or other characteristics." (Basile et al., 2019) This proves to be a very broad definition, because utterances can be offensive, yet not hateful (Davidson et al., 2017). Even manual labeling of hate speech related data is notoriously difficult, as hate speech is very subjective in nature (Nobata et al., 2016; Waseem, 2016). The provided dataset consists of collected messages from Twitter in English or Spanish. Hate speech datasets are very prone to class imbalance (Schmidt and Wiegand, 2017); the provided dataset does not suffer from this problem. The English data contains 10,000 messages, with 42.1% of the messages labeled as hate speech. The Spanish data contains 4969 messages and, similarly to the English part, 41.5% were labeled as hate speech. This gives us a dataset with 14969 messages, of which 6270 are categorized as hate speech. We have not used any additional sources of training data for our models. More information about the data can be found in the task definition (Basile et al., 2019). Most research dealing with hate speech has been done in English due to labelled dataset availability. However, this issue is not unique to English-based content. In our work, we explore multilingual approaches, as we recognize data imbalance between languages as one of the major challenges of NLP. Multilingual approaches could help remedy this problem, as one could transfer knowledge from a data-rich language (English) to a data-poor language (Spanish). We focus on neural network approaches, as they have been achieving better performance than traditional machine learning algorithms (Zhang et al., 2018). We explore both monolingual and multilingual learning paradigms; multilingual approaches enable us to use both the English and Spanish datasets for training. The most popular input features in deep learning are word embeddings: fixed-length vectors with real-number components, used to represent words numerically. The input layers of our models consist of MUSE (Conneau et al., 2017) or ELMo (Peters et al., 2018) word embeddings. MUSE embeddings are multilingual embeddings based on fastText. They are available in different languages, where the words are mapped into the same vector space across languages, i.e., words with similar meanings across languages have similar vector representations. ELMo provides a deep representation of words based on the output of a three-layer pre-trained neural network; the representation of a word is based on the context in which the word is used. However, ELMo representations are not multilingual. To work around the monolinguality of ELMo, we use a technique called adversarial learning (Ganin and Lempitsky, 2014).
Adversarial networks consist of three parts:
• Feature extractor, responsible for creating representations belonging to the same distribution regardless of the input data distribution, i.e. of the language the messages are in. This transformation is learned during training.
• Classifier, responsible for the classification task itself, i.e. labeling hateful utterances.
• Discriminator, responsible for predicting the language of a given message.
During backpropagation, the loss from the classifier (L_cls) is computed in the standard way. The loss from the discriminator (L_dis) has its sign flipped and is multiplied by an adversarial weight λ; the discriminator thus works adversarially to the classifier:

Loss = L_cls − λ L_dis (1)

The loss from the discriminator encourages the feature extractor to create representations that are indistinguishable across languages. This is most often implemented by a gradient reversal layer. 2 Implementation details
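A sketch of the gradient reversal layer mentioned above, in PyTorch. The λ value, the placeholder model names (extractor, classifier, discriminator) and the training snippet in the comments are illustrative assumptions, not the system's actual code.

import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage idea: features go straight to the hate-speech classifier, but pass through
# grad_reverse() before the language discriminator, so minimizing the discriminator
# loss pushes the feature extractor towards language-invariant representations.
# features = extractor(batch)
# cls_loss = cls_criterion(classifier(features), labels)
# dis_loss = dis_criterion(discriminator(grad_reverse(features, lambd=0.1)), lang_labels)
# (cls_loss + dis_loss).backward()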
0
The importance of automatic methods for enriching lexicons, taxonomies and knowledge bases from free text is well-recognized. For rapidly changing domains such as current affairs, static knowledge bases are inadequate for responding to new developments, and the cost of building and maintaining resources by hand is prohibitive.This paper describes experiments which develop automatic methods for taking an original taxonomy as a skeleton and fleshing it out with new terms which are discovered in free text. The method is completely automatic and it is completely unsupervised apart from using the original taxonomic skeleton to suggest possible classifications for new terms. We evaluate how accurately our methods can reconstruct the WordNet taxonomy (Fellbaum, 1998) .The problem of enriching the lexical information in a taxonomy can be posed in two complementary ways.Firstly, given a particular taxonomic class (such as fruit) one could seek members of this class (such as apple, banana) . This problem is addressed by Riloff and Shepherd (1997) , Roark and Charniak (1998) and more recently by . Secondly, given a particular word (such as apple), one could seek suitable taxonomic classes for describing this object (such as fruit, foodstuff). The work in this paper addresses the second of these questions.The goal of automatically placing new words into a taxonomy has been attempted in various ways for at least ten years (Hearst and Schütze, 1993) . The process for placing a word w in a taxonomy T using a corpus C often contains some version of the following stages:• For a word w, find words from the corpus C whose occurrences are similar to those of w. Consider these the 'corpus-derived neighbors' N (w) of w.• Assuming that at least some of these neighbors are already in the taxonomy T , map w to the place in the taxonomy where these neighbors are most concentrated. Hearst and Schütze (1993) added 27 words to Word-Net using a version of this process, with a 63% accuracy at assigning new words to one of a number of disjoint WordNet 'classes' produced by a previous algorithm. (Direct comparison with this result is problematic since the number of classes used is not stated.) A more recent example is the top-down algorithm of Alfonseca and Manandhar (2001) , which seeks the node in T which shares the most collocational properties with the word w, adding 42 concepts taken from The Lord of the Rings with an accuracy of 28%.The algorithm as presented above leaves many degrees of freedom and open questions. What methods should be used to obtain the corpus-derived neighbors N (w)? This question is addressed in Section 2. Given a collection of neighbors, how should we define a "place in the taxonomy where these neighbors are most concentrated?" This question is addressed in Section 3, which defines a robust class-labelling algorithm for mapping a list of words into a taxonomy. In Section 4 we describe experiments, determining the accuracy with which these methods can be used to reconstruct the WordNet taxonomy. To our knowledge, this is the first such evaluation for a large sample of words. Section 5 discusses related work and other problems to which these techniques can be adapted.2 Finding semantic neighbors: Combining latent semantic analysis with part-of-speech information.There are many empirical techniques for recognizing when words are similar in meaning, rooted in the idea that "you shall know a word by the company it keeps" (Firth, 1957) . 
It is certainly the case that words which repeatedly occur with similar companions often have related meanings, and common features used for determining this similarity include shared collocations (Lin, 1999), co-occurrence in lists of objects and latent semantic analysis (Landauer and Dumais, 1997; Hearst and Schütze, 1993). The method used to obtain semantic neighbors in our experiments was a version of latent semantic analysis, descended from that used by Hearst and Schütze (1993, §4). First, 1000 frequent words were chosen as column labels (after removing stopwords (Baeza-Yates and Ribiero-Neto, 1999, p. 167)). Other words were assigned co-ordinates determined by the number of times they occurred within the same context window (15 words) as one of the 1000 column-label words in a large corpus. This gave a matrix where every word is represented by a row vector determined by its co-occurrence with frequently occurring, meaningful words. Since this matrix was very sparse, singular value decomposition (known in this context as latent semantic analysis (Landauer and Dumais, 1997)) was used to reduce the number of dimensions from 1000 to 100. This reduced vector space is called WordSpace (Hearst and Schütze, 1993, §4). Similarity between words was then computed using the cosine similarity measure (Baeza-Yates and Ribiero-Neto, 1999, p. 28). Such techniques for measuring similarity between words have been shown to capture semantic properties: for example, they have been used successfully for recognizing synonymy (Landauer and Dumais, 1997) and for finding correct translations of individual terms. The corpus used for these experiments was the British National Corpus, which is tagged for parts of speech. This enabled us to build syntactic distinctions into WordSpace: instead of just giving a vector for the string test, we were able to build separate vectors for test as a noun, a verb and an adjective. An example of the contribution of part-of-speech information to extracting semantic neighbors of the word fire is shown in Table 2. As can be seen, the noun fire (as in the substance/element) and the verb fire (mainly used to mean firing some sort of weapon) are related to quite different areas of meaning. Building a single vector for the string fire confuses this distinction: the neighbors of fire treated just as a string include words related to both the meaning of fire as a noun (more frequent in the BNC) and as a verb. Part of the goal of our experiments was to investigate the contribution that this part-of-speech information made to mapping words into taxonomies. As far as we are aware, these experiments are the first to investigate the combination of latent semantic indexing with part-of-speech information.
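A compact sketch of the WordSpace construction described above, using scikit-learn. The 1000-word column vocabulary, 15-word window and 100 target dimensions follow the text, but the tokenization, the toy corpus and the simple counting scheme are illustrative stand-ins.

import numpy as np
from collections import Counter
from sklearn.decomposition import TruncatedSVD

def build_wordspace(sentences, n_columns=1000, n_dims=100, window=15):
    """sentences: list of token lists. In the paper, tokens are POS-tagged
    (e.g. 'fire_NN' vs 'fire_VB') so each part of speech gets its own row."""
    tokens = [t for s in sentences for t in s]
    columns = [w for w, _ in Counter(tokens).most_common(n_columns)]
    col_index = {w: i for i, w in enumerate(columns)}
    vocab = sorted(set(tokens))
    row_index = {w: i for i, w in enumerate(vocab)}

    # Co-occurrence counts of every word with the frequent "column label" words.
    counts = np.zeros((len(vocab), len(columns)))
    for sent in sentences:
        for i, w in enumerate(sent):
            for c in sent[max(0, i - window): i + window + 1]:
                if c in col_index and c != w:
                    counts[row_index[w], col_index[c]] += 1

    # SVD-based dimensionality reduction (latent semantic analysis).
    n_dims = min(n_dims, min(counts.shape) - 1)
    vectors = TruncatedSVD(n_components=n_dims).fit_transform(counts)
    return vocab, vectors

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))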
0
Formal grammar used in statistical machine translation (SMT), such as the Bracketing Transduction Grammar (BTG) proposed by (Wu, 1997) and the synchronous CFG presented by (Chiang, 2005), provides a natural platform for integrating linguistic knowledge into SMT because the hierarchical structures produced by the formal grammar resemble linguistic structures. (We inherit the definitions of formal and linguistic from (Chiang, 2005), which makes a distinction between formally syntax-based SMT and linguistically syntax-based SMT.) Chiang (2005) attempts to integrate linguistic information into his formally syntax-based system by adding a constituent feature. Unfortunately, the linguistic feature does not show significant improvement on the test set. In this paper, we further this effort by integrating linguistic knowledge into BTG. We want to augment BTG's formal structures with linguistic structures since they are both hierarchical. In particular, our goal is to learn a more linguistically meaningful BTG from real-world bitexts by projecting linguistic structures onto BTG formal structures. In doing so, we hope to (1) maintain the strength of the phrase-based approach, since phrases are still used on BTG leaf nodes; (2) obtain a tight integration of linguistic knowledge in the translation model; and (3) avoid inducing a complicated linguistic synchronous grammar with expensive computation. The challenge, of course, is that BTG hierarchical structures are not always aligned with the linguistic structures in the syntactic parse trees of the source or target language. Along this line, we propose a novel approach: Linguistically Annotated BTG (LABTG) for SMT. The LABTG annotates BTG rules with linguistic elements that are learned from syntactic parse trees on the source side through an annotation algorithm which is capable of labelling both syntactic and non-syntactic phrases. The linguistic elements extracted from parse trees capture both the internal lexical content and the external context of phrases. With these linguistic annotations, we expect the LABTG to address two traditional issues of standard phrase-based SMT (Koehn et al., 2003) in a more effective manner: (1) phrase translation, translating phrases according to their contexts; and (2) phrase reordering, incorporating richer linguistic features for better reordering. The proposed LABTG displays two unique characteristics when compared with BTG-based SMT (Wu, 1996; Xiong et al., 2006). The first is that two linguistically-informed sub-models are introduced for better phrase translation and reordering: an annotated phrase translation model and an annotated reordering model. The second is that our proposed annotation algorithm and scheme are capable of conveying linguistic knowledge from source-side syntax structures to BTG structures. We describe the LABTG model and the annotation algorithm in Section 4. To better explain the LABTG model, we establish a unified framework of BTG-based SMT in Section 3. We conduct a series of experiments to study the effect of the LABTG in Section 5.
0
The neural approach is revolutionising machine translation (MT). The main neural approach to MT is based on the encoder-decoder architecture (Cho et al., 2014; Sutskever et al., 2014) , where an encoder (e.g a recurrent neural network) reads the source sentences sequentially to produce a fixed-length vector representation. Then, a decoder generates the translation from the encoded vector, which can dynamically change using the attention mechanism.One of the main premises about natural language is that words of a sentence are inter-related according to a (latent) hierarchical structure, i.e. a syntactic tree. Therefore, it is expected that modeling the syntactic structure should improve the performance of NMT, especially in low-resource or linguistically divergent scenarios, such as English-Farsi. In this direction, (Li et al., 2017 ) uses a sequence-to-sequence model, making use of linearised parse trees. (Chen et al., 2017b ) has proposed a model which uses syntax to constrain the dynamic encoding of the source sentence via structurally constrained attention. (Bastings et al., 2017; Shuangzhi Wu, 2017) have incorporated syntactic information provided by the dependency tree of the source sentence. (Marcheggiani et al., 2018 ) has proposed a model to inject semantic bias into the encoder of NMT model. Recently, (Eriguchi et al., 2016; Chen et al., 2017a) have proposed methods to incorporate the hierarchical syntactic constituency information of the source sentence. In addition to the embedding of words, computed using the vanilla sequential encoder, they compute the embeddings of phrases recursively, directed by the top-1 parse tree of the source sentence generated by a parser. Though the results are promising, the top-1 trees are prone to parser error, and furthermore cannot capture semantic ambiguities of the source sentence.In this paper, we address the aforementioned issues by using exponentially many trees encoded in a forest instead of a single top-1 parse tree. We capture the parser uncertainty by considering many parse trees and their probabilities. The encoding of each source sentence is guided by the forest, and includes the forest nodes whose representations are computed in a bottom-up fashion using our ForestLSTM architecture ( §3). Thus, in the encoding stage of this approach, different ways of constructing a phrase are taken into consideration along with the probability of rules in the corresponding trees. We evaluate our approach on English to Chinese, Farsi and German translation tasks, showing that forests lead to better performance compared to top-1 tree and sequential encoders ( §4).
0
Natural language processing (NLP) tasks often leverage word-level features to exploit lexical knowledge. Segmenting a sentence into a sequence of words, especially for languages without explicit word boundaries (e.g., Chinese) not only extracts lexical features, but also shortens the length of the sentence to be processed. Thus, word segmentation, detecting word boundaries, is a crucial pre-processing task for many NLP tasks. In this aspect, Chinese word segmentation (CWS) is widely acknowledged as an essential task for Chinese NLP.CWS has made substantial progress in recent studies on several benchmarks, which is reported by Huang and Zhao (2007) and Zhao et al. (2019) . In particular, pretrained language models (PLMs), like BERT (Devlin et al., 2019) , have established new state-of-the-art in sequence labeling (Meng et al., 2019) . Various fine-tuning methods have been proposed to improve the performance of indomain and cross-domain CWS based on PLMs Tian et al., 2020) . The two challenging problems in CWS, segmentation ambiguity and out-of-vocabulary (OOV) words, have been significantly mitigated by PLM-based methods that are fine-tuned on large-scale annotated CWS corpora. Such methods are even reaching human performance on benchmarks. Nevertheless, CWS is more valuable as a prelude for downstream NLP tasks than as a standalone task. Intrinsic evaluation of CWS on benchmark datasets only examines the effectiveness of current neural methods on word boundary detection. To better apply CWS in downstream NLP tasks, we should comprehensively re-think CWS from the perspective of practicability. In this paper, we define the practicability of CWS with two aspects: low complexity as a standalone task and high beneficiality to downstream tasks.The complexity is twofold: 1) complexity of implementation and 2) time and space complexity of a CWS algorithm. Previous neural methods usually require additional resources (Zhou et al., 2017; Ma et al., 2018; Zhang et al., 2018b; Zhao et al., 2018; Qiu et al., 2020) , such as external pre-trained embeddings. The complexity of implementation is reflected in the difficulty of acquiring external resources. External resources vary in quality and the length of time for computation, For example, it is time-consuming to obtain effective pre-trained embeddings as they are trained on a huge amount of data. Generally, it is difficult to maintain high CWS performance for many previous neural methods in a low-resource environment. Neural methods with external resources achieve high CWS performance but at the cost of a high complexity of implementation. On the other hand, for training and inference, PLM-based CWS methods also consume large memory to store a huge number of parameters of their models. The speed of inference is usually slow. The huge memory consumption and slow inference prevent PLM-based CWS models from being deployed in small-scale smart devices. And, as CWS is often used with downstream models, this even weakens the applicability on smart devices as CWS is not supposed to take too much overhead in this situation.The second is the beneficiality to downstream tasks. CWS is rarely used as a standalone task in industry. Existing CWS evaluations only rely on benchmarks and analyze the behavior of segmentation methods in a static scenario. Some well-known benchmarks are quite old (e.g., Bakeoff-2005) and not challenging for neural CWS anymore. Such evaluations are intrinsic, which are not associated with downstream NLP tasks. 
High CWS performance (e.g., precision and F1) does not mean that segmentation results are beneficial to downstream processing. Additionally, benchmark datasets contain plenty of segmentation noise that affects CWS training and evaluation. For instance, although the structure "副" (vice) + "X" is segmented as two words, "副" (vice) and "X", in the training data and never unified as a single word, "副校长" (vice-president) appears as one word in the test data; note that X represents any job title, e.g., "总统" (president) or "经理" (manager). There are also many obvious errors due to annotation inconsistency in the data. We have found that, in one benchmark dataset, the word "操作系统" (operating system) is regarded as two words ["操作" (operate) + "系统" (system)] 6 times and appears as one word 14 times. Therefore, to measure and improve the beneficiality of CWS to downstream tasks, intrinsic evaluations on CWS benchmark datasets are not sufficient; we should perform extrinsic evaluations with downstream tasks. To address the aforementioned practicability issue of CWS, we propose a semi-supervised neural method via pseudo labels. The method consists of two parts: a teacher model and a student model. First, we use a fine-tuned CWS model trained on the annotated CWS data as the teacher model, which achieves competitive performance for CWS in the traditional sense. Then we collect massive unlabeled data and distill knowledge from the teacher model to the student model by generating pseudo labels. We filter out noisy pseudo labels to provide reliable knowledge for training the student model. The unlabeled data is easier to obtain than other external resources (e.g., lexicons and pretrained embeddings) and can be updated at any time at a low cost. We use the lightweight student model for inference, hence significantly reducing memory consumption and inference time. The practicability of our proposed method is therefore competitive. To sum up, the contributions of this work are as follows: • Our proposed method distills knowledge from the teacher model via unlabeled data to coach the lightweight student model, and achieves a noticeable improvement over strong baselines for CWS under the traditional intrinsic evaluation. • The lightweight student can be deployed on a small-scale device, even in a non-GPU environment. We abandon the PLM neural architecture (the teacher model) during decoding, so decoding is fast enough for practical application. Our method reduces the complexity of implementation, inference time, and memory consumption. • We empirically investigate the effectiveness of the proposed method on downstream Chinese NLP tasks and analyze the impact of segmentation results on them via extrinsic evaluations.
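A minimal sketch of the teacher-student pseudo-labelling loop described above, in PyTorch. The confidence threshold, the model interfaces and the data loader are placeholders rather than the authors' actual implementation; the point is only the filter-then-train pattern.

import torch

def distill_cws(teacher, student, unlabeled_loader, optimizer, min_confidence=0.95):
    """Train a lightweight student tagger on pseudo labels produced by a fine-tuned teacher."""
    teacher.eval()
    student.train()
    loss_fn = torch.nn.CrossEntropyLoss(reduction="none")
    for token_ids in unlabeled_loader:                         # (batch, seq_len) of character ids
        with torch.no_grad():
            probs = torch.softmax(teacher(token_ids), dim=-1)  # (batch, seq_len, n_tags)
            confidence, pseudo_tags = probs.max(dim=-1)
        # Keep only positions where the teacher is confident, filtering noisy pseudo labels.
        mask = (confidence >= min_confidence).float()
        logits = student(token_ids)                            # (batch, seq_len, n_tags)
        token_loss = loss_fn(logits.transpose(1, 2), pseudo_tags)   # (batch, seq_len)
        loss = (token_loss * mask).sum() / mask.sum().clamp(min=1.0)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()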
0
Scholars of Natural Language Processing technology rely on access to gold standard annotated data for training and evaluation of learning algorithms. Despite successful attempts to create machine readable document formats such as XML and HTML, the Portable Document Format (PDF) is still widely used for read-only documents which require visual markup, across domains such as scientific publishing, law, and government. This presents a challenge to NLP practitioners, as the PDF format does not contain exhaustive markup information, making it difficult to extract semantically meaningful regions from a PDF. Annotating text extracted from PDFs in a plaintext format is difficult, because the extracted text stream lacks any organization or markup, such as paragraph boundaries, figure placement and page headers/footers.Existing popular annotation tools such as BRAT (Stenetorp et al., 2012) focus on annotation of user provided plain text in a web browser specifically designed for annotation only. For many labeling tasks, this format is exactly what is required. However, as the scope and ability of natural language processing technology goes beyond purely textual processing due in part to recent advances in large language models (Peters et al., 2018; Devlin et al., 2019, inter alia) , the context and media in which datasets are created must evolve as well.In addition, the quality of both data collection and evaluation methodology is highly dependent on the particular annotation/evaluation context in which the data being annotated is viewed (Joseph et al., 2017; Läubli et al., 2018) . Annotating data directly on top of a HTML overlay on an underlying PDF canvas allows naturally occurring text to be annotated in its original context -that of the PDF itself.To address the need for an annotation tool that goes beyond plaintext data, we present a new annotation tool called PAWLS (PDF Annotation With Labels and Structure). In this paper, we discuss some of the PDF-specific design choices in PAWLS, including automatic bounding box uniformity, freeform annotations for non-textual image regions and scale/dimension agnostic bounding box storage. We report agreement statistics from an initial round of labelling during the creation of a PDF structure parsing dataset for which PAWLS was originally designed.
0
The last decade has seen the emergence of spoken language as a research object in its own right, both in linguistic description and in NLP. Here we focus on spoken French, and more specifically on "spontaneous" speech, from the perspective of manual annotation of syntactic relations based on computer-assisted human transcriptions (using the Transcriber software). The transcription conventions are those of (DELIC, to appear). We compare spoken language with non-standard written French (cf. Habert et al., 1997; Abeillé et al., 2001). The construction of such corpora is a major challenge, both for the community of linguists (comparison of certain structures in speech vs. writing, automatic extraction of more precise concordances, etc.) and for the NLP research community (training parsers on speech, human-machine dialogue, etc.). In the context of the EASY evaluation campaign (Evaluation des Analyseurs SYntaxiques) of the Technolangue EVALDA project, we had to consider the problems raised by the syntactic annotation of spoken corpora, in order to determine whether this task poses a specific problem, given that the parsers will be evaluated on authentic written and spoken corpora. In this article we present a typology covering part of the problems encountered in spoken language. We show that studying "spontaneous" speech also makes it possible to address the processing of "New Forms of Written Communication" (NFCE) (e-mails, forums, chats, SMS, etc.), more or less standardized writing for which the Web and mobile telephony generate an enormous demand, in particular in terms of filtering and content analysis. Reflection on the tools, formalisms and annotation standards associated with the various outputs of syntactic parsers has been widely addressed from the perspective of parser evaluation (cf. Caroll et al., 2003, and the workshop associated with TALN 2003, "Evaluation des analyseurs syntaxiques"). By contrast, the question of choosing the reference annotations is much less debated (cf. Aït-Mokhtar et al., 2003, for written language) and goes well beyond the issue of evaluation alone. It is this question that we address here. We have three methodological prerequisites: a shallow analysis, in line with the contemporary trend towards robust syntactic parsers; dependency-based syntactic structures; and the preservation of all transcribed information (false starts, repetitions, reformulations, etc.), linked both to the possibility of a later, finer-grained identification of speakers' intentions (Antoine et al., 2003: 29) and to the fact that, in our view, syntactic analysis begins with the faithful transcription of speech, any deletion already constituting a syntactic analysis in itself (cf. Blanche-Benveniste, Jeanjean, 1986).
0
Social scientists rely on event data to quantitatively study the behavior of political actors. Public protest (demonstrations, industrial strikes, petition campaigns, political and symbolic violence) accounts for a large part of events involving sub-state actors. Protest event data are central to the study of protest mobilization, political instability, and social movements (Hutter, 2014; Koopmans and Rucht, 2002) .To advance the machine coding 1 of protest data, we have been building a manually annotated corpus of protest events. Our protest event coding follows guidelines adapted from successful manual coding projects. All coding decisions are supported by careful token-level annotation inspired by annotation standards for event extraction. Both event cod-ing and token-level annotation are performed by domain experts. We find that domain experts without specialist linguistic knowledge can be trained well to follow token-level annotation rules and deliver sufficient annotation quality.Contentious politics scholars often need more fine-grained information on protest events than can be delivered by available event coding software. Our event schema includes issues-the claims and grievances of protest actors-and the number of protesters. We also code protest events that are not the main topic of the report. This is often desirable (Kriesi et al., 1995) , although event coding systems would not always code them by design.We code newswire reports from the widely used English Gigaword corpus and will release all annotations. 2
0
Currently, many off-the-shelf named entity recognition solutions are available, and these can be used to recognize mentions in clinical notes denoting diseases and disorders. We decided to use the Stanford NER tool (Finkel et al., 2005) to train CRF models based on annotated biomedical text.The use of unsupervised methods for inferring word representations is nowadays also known to increase the accuracy of entity recognition models (Turian et al., 2010) . Thus, we also used Brown clusters (Brown et al., 1992; Turian et al., 2009) inferred from a large collection of non-annotated clinical texts, together with domain specific lexicons, to build features for our CRF models.An important challenge in entity recognition relates to the recognition of overlapping and noncontinuous entities (Alex et al., 2007) . In this paper, we describe how we modified the Stanford NER system to be able to recognize noncontinuous entities, through an adapted version of the SBIEO scheme .Besides the recognition of medical concepts, we also present the strategy used to map each of the recognized concepts into a SNOMED CT identifier (Cornet and de Keizer, 2008) . This task is particularly challenging, since there are many ambiguous cases. We describe our general approach to address the aforementioned CUI mapping problem, based on similarity search and on the information content of SNOMED CT concept names.
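A small illustration of how entity spans can be serialized into an SBIEO-style tag sequence for CRF training. The handling of discontinuous entities shown here (tagging each separated fragment independently) is a simplified stand-in for the adapted scheme used in the paper, and the token example is invented.

def spans_to_sbieo(tokens, entities):
    """entities: list of (label, [token indices]), where the indices may be non-contiguous."""
    tags = ["O"] * len(tokens)
    for label, idxs in entities:
        # Split the (possibly discontinuous) entity into contiguous fragments.
        fragments, current = [], [idxs[0]]
        for i in idxs[1:]:
            if i == current[-1] + 1:
                current.append(i)
            else:
                fragments.append(current)
                current = [i]
        fragments.append(current)
        for frag in fragments:
            if len(frag) == 1:
                tags[frag[0]] = "S-" + label
            else:
                tags[frag[0]] = "B-" + label
                for i in frag[1:-1]:
                    tags[i] = "I-" + label
                tags[frag[-1]] = "E-" + label
    return tags

tokens = ["severe", "chest", "and", "abdominal", "pain"]
# A discontinuous mention "chest ... pain" (indices 1 and 4):
print(spans_to_sbieo(tokens, [("Disorder", [1, 4])]))
# -> ['O', 'S-Disorder', 'O', 'O', 'S-Disorder']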
0
State-of-the-art statistical machine translation (SMT) systems use large amounts of parallel data to estimate translation models. However, parallel corpora are expensive and not available for every domain.Recently different works have been published that train translation models using only nonparallel data. Although first practical applications of these approaches have been shown, the overall decipherment accuracy of the proposed algorithms is still low. Improving the core decipherment algorithms is an important step for making decipherment techniques useful for practical applications.In this paper we present an effective beam search algorithm which provides high decipherment accuracies while having low computational requirements. The proposed approach allows using high order n-gram language models, is scalable to large vocabulary sizes and can be adjusted to account for a given amount of computational resources. We show significant improvements in decipherment accuracy in a variety of experiments while being computationally more effective than previous published works.
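A toy sketch of a beam search over substitution mappings of the kind discussed above, where partial decipherments are scored with a language model. The extension order, the scoring of partially covered text and the lm_logprob function are simplifying assumptions, not the paper's algorithm.

def beam_search_decipher(cipher_text, plain_alphabet, lm_logprob, beam_size=100):
    """cipher_text: list of cipher symbols; lm_logprob(seq) scores a plaintext token sequence."""
    # Extend frequent cipher symbols first, one symbol per step.
    cipher_symbols = sorted(set(cipher_text), key=cipher_text.count, reverse=True)
    beam = [({}, 0.0)]                                   # (partial mapping, score)
    for c in cipher_symbols:
        candidates = []
        for mapping, _ in beam:
            for p in plain_alphabet:
                new_map = dict(mapping, **{c: p})
                # Score only the positions already covered by the partial mapping.
                deciphered = [new_map[x] for x in cipher_text if x in new_map]
                candidates.append((new_map, lm_logprob(deciphered)))
        # Histogram-style pruning: keep the best beam_size partial hypotheses.
        beam = sorted(candidates, key=lambda x: x[1], reverse=True)[:beam_size]
    return beam[0]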
0
Community-based Question Answering (CQA) systems such as Yahoo! Answers 1 , StackOverflow 2 and Baidu Zhidao 3 have become dependable sources of knowledge to solve common user problems. Unlike factoid question answering 4 , CQA systems focus on crowdsourcing how and why questions and their answers. As is the case with any system where content is generated by web users, the generated content would be of varying quality, reliability, readability and abstraction. Thus, manual curation of such datasets is inevitable to weed out low quality and duplicate content to ensure user satisfaction. A natural way to aid manual curation of such broad-based CQA archives is to employ clustering so that semantically related QAs are grouped together; this would help organize the corpus in a way that experts engaged in manual curation be assigned specific clusters relating to areas of their expertise. Clustering also provides a platform to enable tagging the QA dataset; cluster topics could be used as tags, or other QAs in the same cluster could be tagged as being related to a QA. The fundamental difference between CQA archives and general text document collections is the existence of a two-part structure in QAs and the difference in lexical "character" between the question and answer parts. This lexical chasm (i.e., gap) (Berger et al., 2000) between question and answer parts has been a subject of much study, especially, in the context of improving QA retrieval. In this paper, we consider using the two-part structure in QAs for clustering CQA datasets.Motivating Example: Table 1 lists four example QAs from the context of a CQA system focused on addressing myriad technical issues. These QAs have been tagged in the table with a manually identified root-cause to aid understanding; the root-cause is not part of the CQA data per se. QA1 and QA2 are seen to address related issues pertaining to routers, whereas QA3 and QA4 are focused on the same nar-row issue dealing with java libraries. Since QA1 and QA2 address different problems, they may not be expected to be part of the same cluster in finegrained clusterings. On the other hand, the solutions suggested in QA3 and QA4 are distinct and different legitimate solutions to the same problem cause. Thus, from a semantics perspective, it is intuitive that QA3 and QA4 should be part of the same cluster in any clustering of the CQA dataset to aid actioning on them together; a human expert might decide to merge the question parts and tag one of the answers as an alternative answer. Let us now examine the lexical relatedness between the pairs as illustrated in Table 2 . State-of-the-art text similarity measures that quantify word overlaps are likely to judge QA1 and QA2 to be having a medium similarity when either the question-part or the answerpart are considered. For the pair (QA3, QA4), the question-part similarity would be judged to be high and the answer-part similarity as low. Thus, the high similarity between the root-causes of QA3 and QA4 manifest primarily in their question-parts. Analogously, we observed that some QAs involving the same root-cause lead to high answer-part similarity despite poor question-part similarity. This is especially true in cases involving suggestion of the same sequence of solution steps despite the question-part being divergent due to focusing on different symptoms of the same complex problem. From these observations, we posit that high similarities on either the question-space or answer-space is indicative of semantic relatedness. 
Any clustering method that uses a sum, average or weighted sum aggregation function to arrive at pair-wise similarities, such as a K-Means clustering that treats the collated QA as a single document, would intuitively be unable to heed to such differential manifestation of semantic similarities across the two parts. Our Contributions: We address the problem of harnessing the two-part structure in QA pairs to improve clustering of CQA data. Based on our observations on CQA data such as those illustrated in the example, we propose a clustering approach, MixK-Means, that composes similarities (dissimilarities) in the question and answer spaces using a max (min) operator style aggregation. Through abundant empirical analysis on real-world CQA data, we illustrate that our method outperforms the state-of-the-art approaches for the task of CQA clustering.
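A small numpy/scikit-learn sketch of the max-style composition of question-part and answer-part similarities motivated above. The TF-IDF/cosine similarity choice and the toy QA pairs are illustrative; MixKMeans itself builds a full clustering on top of this kind of pairwise score.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def qa_similarity(questions, answers):
    """Pairwise QA similarity: high if EITHER the question parts OR the answer parts agree."""
    vec = TfidfVectorizer()
    vec.fit(questions + answers)                    # shared vocabulary for both parts
    sim_q = cosine_similarity(vec.transform(questions))
    sim_a = cosine_similarity(vec.transform(answers))
    # Max composition: semantic relatedness can manifest in either part alone.
    return np.maximum(sim_q, sim_a)

questions = ["router keeps dropping wifi connection",
             "how do I fix java.lang.NoClassDefFoundError"]
answers   = ["update the router firmware and change the channel",
             "add the missing jar to the classpath"]
print(qa_similarity(questions, answers))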
0
Linguistic alignment is the tendency that interlocutors have to change the way they talk to accommodate their conversational partners. This can happen through mirroring the partner's linguistic behavior on many levels such as the choice of words, syntactic structures, and semantic topics. Linguistic alignment is considered an important mechanism for establishing common ground and rapport, fostering successful communicative interactions (Clark, 1996) . In addition, understanding this coordination in its natural context is crucial for the design of conversational systems that interact with people in a natural and effective fashion (Zhao et al., 2016; Loth et al., 2015; Park et al., 2017) .While alignment has been largely studied with adults (Pickering and Garrod, 2004; Fusaroli et al., 2012; Dale et al., 2013; Dideriksen et al., 2019) , little has been done to investigate how it manifests in the context of childadult early communication and how it evolves across development. This is a significant gap in the literature. The child-adult early communication cannot be thought of as a simple extension of conversational dynamics between adults; it involves strong asymmetries in terms of cognitive abilities and social roles and, thus, requires more dedicated research (Clark, 2015) . In addition, the study of child-caregiver linguistic interaction informs our theories of children's cognitive development. On the one hand, children's developing abilities in managing a conversation -through mechanisms such as interactive alignment -is a window into their emerging social-cognitive skills (Tomasello, 2009) . On the other hand, the way caregivers use alignment across development allows us to understand whether and how adults tune their talk to children's developing cognitive abilities. Such tuning has been suggested to play a pedagogical role, supporting linguistic and conceptual learning (Snow, 1972; Fourtassi et al., 2014 Fourtassi et al., , 2019 .Our study investigates children's interactive alignment in natural conversations with adults. Previously, Dale and Spivey (2006) used recurrence analysis to investigate child-caregiver syntactic alignment (operationalized as sequences of parts of speech) and found evidence for syntactic coordination. Using a similar computational framework, Fernández and Grimm (2014) extended Dale and Spivey's findings to the lexical and conceptual levels. Nevertheless, both studies were based on data from three children only. While such a small sample size allows for a detailed examination of development for specific children, it does not allow us to characterize general developmental patterns that could be shared by the majority of children. Indeed, both studies found large individual variability and, thus, no strong conclusions about development could be drawn.In a more recent work, Yurovsky et al. (2016) studied a large-scale corpus of child-caregiver interactions containing two orders of magnitude more children than previous work. Using hierarchical Bayesian models, they found that both children and caregivers decreased their alignment over the first five years of development. Work by Yurovsky et al. (2016) thus provided a much more robust test of interactive alignment. However, it focused on the special case of function words. It is still an open question how development unfolds across the entire lexicon and along more abstract levels such as syntax and semantics. The current study is an effort to fill this gap in the literature. 
We leverage NLP tools to test interactive alignment at the lexical, syntactic, and conceptual levels, using a large-scale corpus of children's natural language.
0
Our aim here was to build thematic timelines for a general domain topic defined by a user query. This task, which involves the extraction of important events, is related to the tasks of Retrospective Event Detection (Yang et al., 1998) , or New Event Detection, as defined for example in Topic Detection and Tracking (TDT) campaigns (Allan, 2002) .The majority of systems designed to tackle this task make use of textual information in a bag-ofwords manner. They use little temporal information, generally only using document metadata, such as the document creation time (DCT). The few systems that do make use of temporal information (such as the now discontinued Google timeline), only extract absolute, full dates (that feature a day, month and year). In our corpus, described in Section 3.1, we found that only 7% of extracted temporal expressions are absolute dates.We distinguish our work from that of previous researchers in that we have focused primarily on extracted temporal information as opposed to other textual content. We show that using linguistic temporal processing helps extract important events in texts. Our system extracts a maximum of temporal information and uses only this information to detect salient dates for the construction of event timelines. Other types of content are used for initial thematic document retrieval. Output is a list of dates, ranked from most important to least important with respect to the given topic. Each date is presented with a set of relevant sentences.We can see this work as a new, easily evaluable task of "date extraction", which is an important component of timeline summarization.In what follows, we first review some of the related work in Section 2. Section 3 presents the resources used and gives an overview of the system. The system used for temporal analysis is described in Section 4, and the strategy used for indexing and finding salient dates, as well as the results obtained, are given in Section 5 1 .
0
Over the past few years, an increasing number of people have begun to express their opinion through social networks and microblogging services. Twitter, as one of the most popular of these social networks, has become a major platform for social communication, allowing its users to send and read short messages called 'tweets'. Tweets have become important in a variety of tasks, including the prediction of election results (O'Connor et al., 2010) . The emergence of online expressions of opinion has attracted interest in sentiment analysis of tweets in both academia and industry. Sentiment analysis, also known as opinion mining, focuses on computational treatments of sentiments (emotions, attitudes, opinions) in natural language text. In this paper we describe our submission to Task 10, subtask B: Message Polarity Classification. The task is defined as: 'Given a message, classify whether the message is of positive, negative, or neutral sentiment. For a message conveying both a positive and negative sentiment, whichever is the stronger sentiment should be chosen' (Rosenthal et al., 2015) .This paper describes a system which utilizes a Naive Bayes classifier to determine the sentiment of tweets. This paper describes the resources used, the system details, including preprocessing steps taken, feature extraction and classifier implemented, and the test runs and end results.
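A minimal sketch of the kind of Naive Bayes pipeline described above, using scikit-learn. The tiny training set, tokenization settings and feature choices are placeholders rather than the system's actual preprocessing and features.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data standing in for the SemEval training tweets.
tweets = ["I love this phone", "worst service ever", "the event is on tuesday",
          "great game tonight!", "so disappointed with the update"]
labels = ["positive", "negative", "neutral", "positive", "negative"]

model = make_pipeline(
    CountVectorizer(lowercase=True, ngram_range=(1, 2)),  # unigram + bigram counts
    MultinomialNB(),
)
model.fit(tweets, labels)
print(model.predict(["what a fantastic day"]))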
0
The sheer amount of natural language data provides a great opportunity to represent named entity mentions by their probability distributions, so that they can be exploited for many Natural Language Processing (NLP) applications. However, named entity mentions are fundamentally different from common words or phrases in three aspects. First, the semantic meaning of a named entity mention (e.g., a person name "Bill Gates") is not a simple summation of the meanings of the words it contains ("Bill" + "Gates"). Second, entity mentions are often highly ambiguous in various local contexts. For example, "Michael Jordan" may refer to the basketball player or the computer science professor. Third, representing entity mentions as mere phrases fails when names are rendered quite differently, especially when they appear across multiple languages. For example, "Ang Lee" in English is "Li An" in Chinese.Fortunately, entities, the objects which mentions refer to, are unique and equivalent across languages. Many manually constructed entity-centric knowledge base resources such as Wikipedia 2 , DBPedia (Auer et al., 2007) and YAGO (Suchanek et al., 2007) are widely available. Even better, they are massively multilingual. For example, up to August 2018, Wikipedia contains 21 million interlanguage links 3 between 302 languages. We propose a novel cross-lingual joint entity and word (CLEW) embedding learning framework based on multilingual Wikipedia and evaluate its effectiveness on two practical NLP applications: Crosslingual Entity Linking and Parallel Sentence Mining.Wikipedia contains rich entity anchor links. As shown in Figure 2 , many mentions (e.g., "小米" (Xiaomi)) in a source language are linked to the entities in the same language that they refer to (e.g., zh/小 米 科 技 (Xiaomi Technology)), and some mentions are further linked to their corresponding English entities (e.g., Chinese mention "苹果" (Apple) is linked to entity en/Apple_Inc. in English). We replace each mention (anchor link) in the source language with its corresponding entity title in the target language if it exists, or in the source language otherwise. After this replacement, each entity mention is treated as a unique disambiguated entity, then we can learn joint entity and word embedding representations for the source language and target language respectively.Furthermore, we leverage these shared target language entities as pivots to learn a rotation matrix and seamlessly align two embedding spaces into one by linear mapping. In this unified common space, multiple mentions are reliably disambiguated and grounded, which enables us to directly compute the semantic similarity between a mention in a source language and an entity in a target language (e.g., English), and thus we can perform Cross-lingual Entity Linking in an unsupervised way, without using any training data. In addition, considering each pair of Wikipedia articles connected by an inter-language link as comparable documents, we use this multilingual common space to represent sentences and extract many parallel sentence pairs.The novel contributions of this paper are:• We develop a novel approach based on rich anchor links in Wikipedia to learn crosslingual joint entity and word embedding, so that entity mentions across multiple languages are disambiguated and grounded into one unified common space.• Using this joint entity and word embedding space, entity mentions in any language can be linked to an English knowledge base without any annotation cost. 
We achieve state-of-the-art performance on unsupervised cross-lingual entity linking.• We construct a rich resource of parallel sentences for 302 2 language pairs along with accurate entity alignment and word alignment.
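A sketch of the linear-mapping step described above: learning a rotation from shared pivot entities via orthogonal Procrustes. The random embedding matrices are stand-ins for the source- and target-language entity/word embeddings, and the dimensions are illustrative.

import numpy as np

def learn_rotation(src_pivots, tgt_pivots):
    """Orthogonal W minimizing ||W @ src - tgt||_F, given d x n matrices of pivot embeddings."""
    u, _, vt = np.linalg.svd(tgt_pivots @ src_pivots.T)
    return u @ vt

d, n_pivots = 300, 5000
rng = np.random.default_rng(0)
src = rng.normal(size=(d, n_pivots))     # source-language embeddings of shared target-language entities
tgt = rng.normal(size=(d, n_pivots))     # target-language embeddings of the same entities
W = learn_rotation(src, tgt)

# Map any source-language mention/entity vector into the target space, then rank candidate
# target entities by cosine similarity for unsupervised cross-lingual entity linking.
mapped = W @ src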
0
In this digital era we live in, almost everyone is communicating online. As of January 2021, Facebook, YouTube, and WhatsApp each have over 2 billion users, which means many differing viewpoints and perspectives being shared (Statista, 2021). With such a huge exchange of ideas, there is bound to be some toxicity within the comments. Aside from discouraging users from continuing with or joining conversations, toxic comments can also taint users' perceptions of news sites (Tenenboim et al., 2019). Thus it is important to moderate online conversations without fully censoring users. While forums typically rely on human moderators, with such vast amounts of data coming in, it can be difficult for humans to keep up (Nobata et al., 2016). Advances in deep learning and machine learning are making text processing a viable option to replace, or at least assist, human moderators in cleaning up comment sections (Consultants, 2019). Some methods rely on simply classifying whether a comment is toxic or not, but identifying which parts of the text are actually toxic can assist moderators and provide insight into what makes language toxic. SemEval Task 5 aims to evaluate systems that detect toxic spans within text, using datasets where spans within the comments are labelled as toxic, differing from previously released datasets where whole comments were labelled as toxic or non-toxic (Pavlopoulos et al., 2021). This is inherently a natural language processing task, similar to text classification and sentiment analysis. This study focuses on training a recurrent neural network to determine the indices of a given string that represent the toxic portions of a comment. Recurrent neural networks are classically used for natural language and sequence labelling tasks, and one can view this task as a form of sequence labelling. The goal of sequence labelling is, given a sequence as input, to assign a sequence of labels. Because recurrent neural networks (RNNs) are flexible in their use of context information and can recognize sequential patterns, they are an attractive and commonly used choice for sequence labelling (Graves, 2012). This paper approaches the task with a sequence labelling methodology, applying an RNN and comparing the use of gated recurrent unit (GRU) and long short-term memory (LSTM) layers in the RNN.
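A sketch of a recurrent token-level tagger for this kind of span labelling, in PyTorch. Swapping nn.GRU for nn.LSTM gives the two variants compared in the paper; the sizes and the character-offset handling described in the comment are illustrative assumptions.

import torch
import torch.nn as nn

class RecurrentSpanTagger(nn.Module):
    """Predicts, for every token, whether it lies inside a toxic span."""
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=128, cell="gru"):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        rnn_cls = nn.GRU if cell == "gru" else nn.LSTM
        self.rnn = rnn_cls(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.tag = nn.Linear(2 * hidden_dim, 2)       # toxic / non-toxic per token

    def forward(self, token_ids):                     # (batch, seq_len)
        h, _ = self.rnn(self.embed(token_ids))
        return self.tag(h)                            # (batch, seq_len, 2)

# Token-level predictions would then be projected back to character offsets
# (the indices required by the task) via each token's start/end position in the string.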
0
Dramatic progress has been achieved in single-turn dialogue modeling such as open-domain response generation (Shang et al., 2015), question answering (Rajpurkar et al., 2016), etc. By contrast, multi-turn dialogue modeling is still in its infancy, as users tend to use incomplete utterances which usually omit, or refer back to, entities or concepts that appeared in the dialogue context, namely ellipsis and coreference. According to previous studies, ellipsis and coreference exist in more than 70% of utterances (Su et al., 2019), so a dialogue system must be equipped with the ability to understand them. To tackle the problem, early works include learning a hierarchical representation (Serban et al., 2017; Zhang et al., 2018) and concatenating the dialogue utterances selectively (Yan et al., 2016). Recently, researchers have focused on a more explicit and explainable solution: the task of Incomplete Utterance Rewriting (IUR, also known as context rewriting) (Kumar and Joshi, 2016; Su et al., 2019; Pan et al., 2019; Elgohary et al., 2019). IUR aims to rewrite an incomplete utterance into an utterance which is semantically equivalent but self-contained, so that it can be understood without context.

Table 1: An example dialogue between user A and B, including the context utterances (x1, x2), the incomplete utterance (x3) and the rewritten utterance (x3*).
x1 (A)  北京今天天气如何 | How is the weather in Beijing today
x2 (B)  北京今天是阴天 | Beijing is cloudy today
x3 (A)  为什么总是这样 | Why is always this
x3*     北京为什么总是阴天 | Why is Beijing always cloudy

As shown in Table 1, the incomplete utterance x3 not only omits the subject "北京" (Beijing), but also refers to the semantics of "阴天" (cloudy) via "这样" (this). By explicitly recovering the hidden semantics behind x3 into x3*, IUR makes downstream dialogue modeling more precise. To deal with IUR, a natural idea is to transfer models from coreference resolution (Clark and Manning, 2016). However, this idea is not easy to realize, as ellipsis also accounts for a large proportion of the phenomena. Despite being different, coreference and ellipsis can in most cases both be resolved without introducing out-of-dialogue words; that is to say, the words of the rewritten utterance come almost entirely from either the context utterances or the incomplete utterance. Observing this, most previous works employ the pointer network (Vinyals et al., 2015) or the sequence-to-sequence model with copy mechanism (Gu et al., 2016; See et al., 2017). However, they generate the rewritten utterance from scratch, neglecting a key trait: the main structure of a rewritten utterance is always the same as that of the incomplete utterance. To highlight this, we view the rewritten utterance as the outcome of a series of edit operations (i.e. substitute and insert) on the incomplete utterance. Taking the example from Table 1, x3* can be obtained by substituting "这样" (this) in x3 with "阴天" (cloudy) from x2 and inserting "北京" (Beijing) before "为什么" (Why), which is much easier than producing x3* by decoding word by word. These edit operations are carried out between word pairs of the context utterances and the incomplete utterance, analogous to semantic segmentation (a well-known task in computer vision): given relevance features between word pairs as an image, the model predicts the edit type for each word pair as a pixel-level mask (elaborated in Section 3).
Inspired by the above, in this paper, we propose a novel and extensive approach which formulates IUR as semantic segmentation 1 . Our contributions are as follows:• As far as we know, we are the first to present such a highly extensive approach which formulates the incomplete utterance rewriting as a semantic segmentation task.• Benefiting from being able to capture both local and global information, our approach achieves state-of-the-art performance on several datasets across different domains and languages.• Furthermore, our model predicts the edit operations in parallel, and thus obtains a much faster inference speed than traditional methods.
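A small sketch of the word-pair "image" underlying this formulation: relevance features between each context word and each word of the incomplete utterance, which a segmentation-style network would then label with edit types. The particular features used here (cosine and dot product of word embeddings) and the label set in the comment are illustrative; the actual model defines its own feature maps and architecture.

import torch

def word_pair_feature_map(context_emb, utterance_emb):
    """context_emb: (m, d) embeddings of context words; utterance_emb: (n, d) embeddings
    of the incomplete utterance's words. Returns a (channels, m, n) 'image'."""
    ctx = torch.nn.functional.normalize(context_emb, dim=-1)
    utt = torch.nn.functional.normalize(utterance_emb, dim=-1)
    cosine = ctx @ utt.T                              # (m, n)
    dot = context_emb @ utterance_emb.T               # (m, n)
    feature_map = torch.stack([cosine, dot], dim=0)   # (2, m, n), like a 2-channel image
    return feature_map

# A segmentation-style CNN would take this map and predict, for every (context word,
# utterance word) pixel, one of the edit labels, e.g. {none, substitute, insert-before}.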
0
Multiword expressions (MWEs) are word combinations idiosyncratic with respect to e.g. syntax or semantics (Baldwin and Kim, 2010). One of their most emblematic properties is semantic non-compositionality: the meaning of the whole cannot be straightforwardly deduced from the meanings of its components, as in cut corners 'do an incomplete job'. Due to this property and to their frequency (Jackendoff, 1997), MWEs are a major challenge for semantically-oriented downstream applications, such as machine translation. A prerequisite for MWE processing is their automatic identification. MWE identification aims at locating MWE occurrences in running text. This task is very challenging, as signaled by Constant et al. (2017), and further confirmed by the PARSEME shared task on automatic identification of verbal MWEs. One of the main difficulties stems from the variability of MWEs, especially verbal ones (VMWEs). That is, even if a VMWE has previously been observed in a training corpus or in a lexicon, it can re-appear in morphosyntactically diverse forms. Examples (1-2) show two occurrences of a VMWE with variation in the components' inflection (cutting vs. cut), word order, presence of discontinuities (were), and syntactic relations (obj vs. nsubj). (1) Some companies were cutting corners (obj) to save costs. (2) The field would look uneven if corners (nsubj) were cut. However, unrestricted variability is not a reasonable assumption either, since it may lead to literal or coincidental occurrences of VMWEs' components (Savary et al., 2019b), as in (3) and (4), respectively. (3) Start with cutting one corner of the disinfectant bag. (4) If you cut along these lines, you'll get two acute corners.

Table 1: PARSEME shared task corpora for the 4 languages in focus (FR, PL, PT, RO), in terms of the number of tokens, annotated VMWEs and seen VMWEs (those whose multiset of lemmas also appears annotated in train).
      train                 dev                                    test
      # tokens  # VMWEs     # tokens  # VMWEs  # seen  % seen      # tokens  # VMWEs  # seen  % seen
FR    432389    4550        56254     629      485     77.1        39489     498      251     50.4
PL    220465    4122        26030     515      387     75.1        27823     515      371     72.0
PT    506773    4430        68581     553      409     74.0        62648     553      397     71.8
RO    781968    4713        118658    589      555     94.2        114997    589      561     92.2

Our paper addresses VMWE variability, so as to distinguish examples (1-2) from (3-4). We focus on a subproblem of VMWE identification: the identification of previously seen VMWEs. Section 2 describes the corpora and best systems of the PARSEME shared task 1.1, Sections 3 and 4 motivate and describe our system Seen2020 dedicated to the task of seen VMWE identification. Experimental results are shown in Section 5, an interpretation is proposed in Section 6 and we conclude in Section 7.
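A minimal sketch of the core idea behind identifying "seen" VMWEs: remembering the multisets of component lemmas annotated in training and matching them, possibly discontinuously, in new sentences. A real system such as the one described here adds constraints on syntax and variability; this is only a naive matching baseline, and the example sentences are invented.

from collections import Counter

def collect_seen_vmwes(training_sentences):
    """training_sentences: list of (lemmas, list of annotated VMWE index lists)."""
    seen = set()
    for lemmas, vmwes in training_sentences:
        for idxs in vmwes:
            seen.add(frozenset(Counter(lemmas[i] for i in idxs).items()))
    return seen

def find_seen_candidates(lemmas, seen, max_gap=5):
    """Return index tuples whose lemma multiset was annotated as a VMWE in training."""
    hits = []
    for multiset in seen:
        needed = Counter(dict(multiset))
        # Greedy left-to-right match allowing gaps between components (a naive heuristic).
        chosen = []
        for i, lem in enumerate(lemmas):
            if needed.get(lem, 0) > 0 and (not chosen or i - chosen[-1] <= max_gap):
                chosen.append(i)
                needed[lem] -= 1
        if sum(needed.values()) == 0:
            hits.append(tuple(chosen))
    return hits

train = [(["some", "company", "be", "cut", "corner"], [[3, 4]])]
seen = collect_seen_vmwes(train)
print(find_seen_candidates(["corner", "be", "cut", "by", "they"], seen))  # [(0, 2)]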
0
Online multimedia content becomes more and more accessible through digital TV, social networking sites and searchable digital libraries of photographs and videos. People of different ages and cultures attempt to make sense out of this data and re-package it for their own needs, these being informative, educational and entertainment ones. Understanding and generation of multimedia discourse requires knowledge and skills related to the nature of the interacting modalities and their semantic interplay for formulating the multimedia message.Within such context, intelligent multimedia systems are expected to parse/generate such messages or at least assist humans in these tasks. From another perspective, everyday human communication is predominantly multimodal; as such, similarly intuitive human-computer/robot interaction demands that intelligent systems master -among others-the semantic interplay between different media and modalities, i.e. they are able to use/understand natural language and its reference to objects and activities in the shared, situated communication space.It was more than a decade ago, when the lack of a theory of how different media interact with one another was indicated (Whittaker and Walker, 1991) . Recently, such theoretical framework has been developed and used for annotating a corpus of audiovisual documents with the objective of using such corpus for developing multimedia information processing tools (Pastra, 2008) . In this paper, we provide a brief overview of the theory and the corresponding annotated corpus and present a text-based search interface that has been developed for the exploration and the automatic expansion/generalisation of the annotated semantic relations. This search interface is a support tool for the theory and the related corpus and a first step towards its computational exploitation.
0
In recent years, social media platforms in the Arabic region have been evolving rapidly. Twitter provides an easy form of communication that enables users to share information about their activities, opinions, feelings, and views about a wide variety of social events. It has been a great platform to disseminate events as they happen, released immediately, even before they are announced in traditional media. Tweet content has become a major source for extracting information about real-world events. Critical events such as violence, disasters, fires, and traffic accidents that need emergency awareness require an extreme effort to detect and track. Twitter users' posts have been utilized as a data source to detect high-risk events with their locations, such as earthquakes (Sakaki et al., 2010), traffic incidents (Gu et al., 2016) and floods (Arthur et al., 2018). An earlier work by Sakaki et al. (2010) predicted and detected the location of an earthquake in Japan more quickly than the Japan Meteorological Agency. Gu et al. (2016) identified five categories of traffic incidents in the cities of Pittsburgh and Philadelphia (USA) using Twitter data. A recent study by Arthur et al. (2018) utilized tweets to locate and detect floods in the UK. Recently, event detection has become an active research area due to the widespread availability of data on social media. However, research on event detection on Twitter applied to Arabic is hampered by the lack of datasets that could be used to design and develop an event detection system. Until now, the datasets of (Almerekhi et al., 2016) and (Alhelbawy et al., 2016) are the only published Arabic datasets for event detection purposes that are freely available for research. To detect events in the Arabic region, constructing a dataset of Arabic events is mandatory. Leveraging Twitter's popularity in Saudi Arabia, we aim to build a dataset containing tweets written in both Modern Standard Arabic (MSA) and Saudi dialect to detect floods, dust storms, and traffic accidents. We focus on flood, dust storm, and traffic accident events because of their significant influence on human life and the economy in Saudi Arabia (Youssef et al., 2015; Karagulian et al., 2019; Mansuri et al., 2015). To the best of our knowledge, this is the first publicly available Arabic dataset for the aim of detecting flood, dust storm, and traffic accident events. In this paper, the main contributions are: • We describe an Arabic dataset of Saudi event tweets, FloDusTA: Flood, Dust Storm, Traffic Accident Saudi Event dataset. The dataset will be publicly available for the research community (https://github.com/BatoolHamawi/FloDusTA). • A preliminary set of experiments were conducted to establish a baseline for future work on building an event detection system. The rest of this paper is organized as follows. Section 2 reviews the related works. Section 3 describes how tweets were collected and the cleaning and filtering that were deployed to extract a dataset of Saudi event tweets. In Section 4 we explain the annotation process in detail. In Section 5 the experiments are illustrated. Finally, we conclude and discuss future work.
0
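To make the baseline idea above concrete, the sketch below shows one way a preliminary event-classification baseline over tweets could look. It is only an illustration under assumed placeholder data: the example tweets, labels and the choice of a character n-gram TF-IDF representation with a linear SVM are ours, not a description of the experiments actually run on FloDusTA.

```python
# A minimal, hypothetical baseline for labelling tweets as flood, dust storm,
# traffic accident or irrelevant. The tweets below are English placeholders;
# the real dataset contains MSA and Saudi-dialect Arabic, for which character
# n-grams are one reasonable feature choice.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

tweets = [
    "heavy rain flooded the main road this morning",
    "the valley is flooded and cars are stuck",
    "a dust storm is covering the city right now",
    "visibility is very low because of the dust storm",
    "two cars collided on the highway near the exit",
    "a traffic accident is blocking the northern ring road",
    "great coffee at the new place downtown",
    "watching the football match tonight",
]
labels = ["flood", "flood", "dust_storm", "dust_storm",
          "traffic_accident", "traffic_accident", "irrelevant", "irrelevant"]

baseline = Pipeline([
    ("tfidf", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5))),
    ("clf", LinearSVC()),
])
baseline.fit(tweets, labels)
print(baseline.predict(["sand and dust are filling the streets",
                        "a car crashed into the barrier on the ring road"]))
```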
In speech recognition and understanding systems, many kinds of language model may be used to choose between the word and sentence hypotheses for which there is evidence in the acoustic data. Some words, word sequences, syntactic constructions and semantic structures are more likely to occur than others, and the presence of more likely objects in a sentence hypothesis is evidence for the correctness of that hypothesis. Evidence from different knowledge sources can be combined in an attempt to optimize the selection of correct hypotheses; see e.g. Alshawi and Carter (1994) ; Rayner et al (1994) ; Rosenfeld (1994) .Many of the knowledge sources used for this purpose score a sentence hypothesis by calculating a simple, typically linear, combination of scores associated with objects, such as N-grams and grammar rules, that characterize the hypothesis or its preferred linguistic analysis. When these scores are viewed as log probabilities, taking a linear sum corresponds to making an independence assumption that is known to be at best only approximately true, and that may give rise to inaccuracies that reduce the effectiveness of the knowledge source.The most obvious way to make a knowledge source more accurate is to increase the amount of structure or context that it takes account of. For example, a bigram model may be replaced by a trigram one, and the fact that dependencies exist among the likelihoods of occurrence of grammar rules at different locations in a parse tree can be modeled by associating probabilities with states in a parsing table rather than simply with the rules themselves (Briscoe and Carroll, 1993) .However, such remedies have their drawbacks. Firstly, even when the context is extended, some important influences may still not be modeled. For example, dependencies between words exist at separations greater than those allowed for by trigrams (for which long-distance N-grams [Jelinek et al, 1991] are a partial remedy), and associating scores with parsing table states may not model all the important correlations between grammar rules. Secondly, extending the model may greatly increase the amount of training data required if sparseness problems are to be kept under control, and additional data may be unavailable or expensive to collect. Thirdly, one cannot always know in advance of doing the work whether extending a model in a particular direction will, in practice, improve results. If it turns out not to, considerable ingenuity and effort may have been wasted.In this paper, I argue for a general method for extending the context-sensitivity of any knowledge source that calculates sentence hypothesis scores as linear combinations of scores for objects. The method, which is related to that of Iyer, Ostendorf and Rohlicek (1994) , involves clustering the sentences in the training corpus into a number of subcorpora, each predicting a different probability distribution for linguistic objects. An utterance hypothesis encountered at run time is then treated as if it had been selected from the subpopulation of sentences represented by one of these subcorpora. This technique addresses as follows the three drawbacks just alluded to. Firstly, it is able to capture the most important sentence-internal contextual effects regardless of the complexity of the probabilistic dependencies between the objects involved. Secondly, it makes only modest additional demands on training data. 
Thirdly, it can be applied in a standard way across knowledge sources for very different kinds of object, and if it does improve on the unclustered model this constitutes proof that additional, as yet unexploited relationships exist between linguistic objects of the type the model is based on, and that therefore it is worth looking for a more specific, more powerful way to model them. The use of corpus clustering often does not boost the power of the knowledge source as much as a specific hand-coded extension. For example, a clustered bigram model will probably not be as powerful as a trigram model. However, clustering can have two important uses. One is that it can provide some improvement to a model even in the absence of the additional (human or computational) resources required by a hand-coded extension. The other use is that the existence or otherwise of an improvement brought about by clustering can be a good indicator of whether additional performance can in fact be gained by extending the model by hand without further data collection, with the possibly considerable additional effort that extension would entail. And, of course, there is no reason why clustering should not, where it gives an advantage, also be used in conjunction with extension by hand to produce yet further improvements. As evidence for these claims, I present experimental results showing how, for a particular task and training corpus, clustering produces a sizeable improvement in unigram- and bigram-based models, but not in trigram-based ones; this is consistent with experience in the speech understanding community that while moving from bigrams to trigrams usually produces a definite payoff, a move from trigrams to 4-grams yields less clear benefits for the domain in question. I also show that, for the same task and corpus, clustering produces improvements when sentences are assessed not according to the words they contain but according to the syntax rules used in their best parse. This work thus goes beyond that of Iyer et al by focusing on the methodological importance of corpus clustering, rather than just its usefulness in improving overall system performance, and by exploring in detail the way its effectiveness varies along the dimensions of language model type, language model complexity, and number of clusters used. It also differs from Iyer et al's work by clustering at the utterance rather than the paragraph level, and by using a training corpus of thousands, rather than millions, of sentences; in many speech applications, available training data is likely to be quite limited, and may not always be chunked into paragraphs.
0
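As a rough illustration of the clustering idea described in the preceding introduction, the sketch below partitions training sentences into subcorpora, estimates a simple smoothed unigram model per cluster, and scores a hypothesis under the best-fitting cluster. The clustering criterion (k-means over tf-idf vectors), the add-one smoothing and the toy sentences are illustrative assumptions, not the paper's actual procedure.

```python
# Hedged sketch: cluster training sentences into subcorpora, build one
# add-one-smoothed unigram model per cluster, and treat a run-time hypothesis
# as if it were drawn from the best-matching subpopulation.
import math
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

train_sentences = [
    "show me flights from boston to denver",
    "i want a flight to denver on tuesday",
    "what is the cheapest fare to boston",
    "book a table for two at seven",
    "reserve a table for dinner tonight",
    "is there a table available at eight",
]

# 1. Cluster the training sentences (here: k-means over tf-idf vectors).
X = TfidfVectorizer().fit_transform(train_sentences)
cluster_ids = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# 2. One smoothed unigram model per cluster.
vocab = set(w for s in train_sentences for w in s.split())
counts = {c: Counter() for c in set(cluster_ids)}
for sent, c in zip(train_sentences, cluster_ids):
    counts[c].update(sent.split())

def log_prob(sentence, c):
    total = sum(counts[c].values()) + len(vocab)   # add-one smoothing over the training vocabulary
    return sum(math.log((counts[c][w] + 1) / total) for w in sentence.split())

# 3. Score a hypothesis under the best-fitting subcorpus.
hypothesis = "i want the cheapest flight to boston"
print(max(log_prob(hypothesis, c) for c in counts))
```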
Recent work demonstrated that word embeddings induced from large text collections encode many human biases (e.g., Bolukbasi et al., 2016; Caliskan et al., 2017). This finding is not particularly surprising given that (1) we are likely to project our biases into the text that we produce and (2) these biases in text are bound to be encoded in word vectors due to the distributional nature (Harris, 1954) of the word embedding models (Mikolov et al., 2013a; Pennington et al., 2014; Bojanowski et al., 2017). For illustration, consider the famous analogy-based gender bias example from Bolukbasi et al. (2016): "Man is to computer programmer as woman is to homemaker". This bias will be reflected in the text (i.e., the word man will co-occur more often with words like programmer or engineer, whereas woman will more often appear next to homemaker or nurse), and will, in turn, be captured by word embeddings built from such biased texts. While biases encoded in word embeddings can be a useful data source for diachronic analyses of societal biases (e.g., Garg et al., 2018), they may cause ethical problems for many downstream applications and NLP models. In order to measure the extent to which various societal biases are captured by word embeddings, Caliskan et al. (2017) proposed the Word Embedding Association Test (WEAT). WEAT measures semantic similarity, computed through word embeddings, between two sets of target words (e.g., insects vs. flowers) and two sets of attribute words (e.g., pleasant vs. unpleasant words). While they test a number of biases, the analysis is limited in scope to English as the only language, GloVe (Pennington et al., 2014) as the embedding model, and Common Crawl as the type of text. Following the same methodology, McCurdy and Serbetci (2017) extend the analysis to three more languages (German, Dutch, Spanish), but test only for gender bias. In this work, we present the most comprehensive study of biases captured by distributional word vectors to date. We create XWEAT, a collection of multilingual and cross-lingual versions of the WEAT dataset, by translating WEAT to six other languages, and offer a comparative analysis of biases over seven diverse languages. Furthermore, we measure the consistency of WEAT biases across different embedding models and types of corpora. What is more, given the recent surge of models for inducing cross-lingual embedding spaces (Mikolov et al., 2013a; Hermann and Blunsom, 2014; Smith et al., 2017; Conneau et al., 2018; Artetxe et al., 2018; Hoshen and Wolf, 2018, inter alia) and their ubiquitous application in cross-lingual transfer of NLP models for downstream tasks, we investigate cross-lingual biases encoded in cross-lingual embedding spaces and compare them to the bias effects present in the corresponding monolingual embeddings. Our analysis yields some interesting findings: biases do depend on the embedding model and, quite surprisingly, they seem to be less pronounced in embeddings trained on social media texts. Furthermore, we find that the effects (i.e., amount) of bias in cross-lingual embedding spaces can roughly be predicted from the bias effects of the corresponding monolingual embedding spaces.
0
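The WEAT measure referred to above has a standard formulation as an effect size over embedding similarities; the sketch below reproduces that computation. The toy random vectors stand in for trained monolingual or cross-lingual embeddings, and the tiny word sets are placeholders for the full WEAT/XWEAT target and attribute lists.

```python
# Hedged sketch of the WEAT effect size (Caliskan et al., 2017): the
# association of each target word with two attribute sets, aggregated into a
# standardised effect size over the two target sets.
import numpy as np

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, E):
    return np.mean([cos(E[w], E[a]) for a in A]) - np.mean([cos(E[w], E[b]) for b in B])

def weat_effect_size(X, Y, A, B, E):
    x_assoc = [association(x, A, B, E) for x in X]
    y_assoc = [association(y, A, B, E) for y in Y]
    pooled_std = np.std(x_assoc + y_assoc, ddof=1)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled_std

# Toy embedding table (random placeholder vectors, not real embeddings).
E = {w: np.random.RandomState(i).randn(50) for i, w in enumerate(
    ["rose", "daisy", "ant", "wasp", "lovely", "peace", "ugly", "hate"])}
X, Y = ["rose", "daisy"], ["ant", "wasp"]        # target sets (flowers vs. insects)
A, B = ["lovely", "peace"], ["ugly", "hate"]     # attribute sets (pleasant vs. unpleasant)
print(weat_effect_size(X, Y, A, B, E))
```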
The need for in-domain data in machine learning is a well-established problem and is well motivated in previous papers (e.g., [1]). We briefly observe, however, that across domains system performance is tied to the similarity between training and testing data. The testing data used for guiding system development is almost synonymous with in-domain data. It follows directly that training data should also resemble the in-domain data as closely as possible. In-domain data, however, is also almost always the most limited kind. This necessitates supplementing it with out-of-domain or non-domain-specific data in order to achieve satisfactory model estimates. In this paper we consider the training of language models for speech recognition and machine translation of university lectures, which are very domain-specific. Typically this means adapting existing systems to a new topic. Perhaps unique to this application is that the in-domain data for lectures is normally of a very small size. A one-hour lecture may produce under a thousand utterances and roughly ten thousand words. The necessity of rapid system development and testing in this context encourages us to limit training data size. What we want, then, is a way to reduce large amounts of data and at the same time improve its relevance. Ideally we would also be able to do so using only a very small amount of in-domain data. We improve on the work of [2] by drawing a more representative sample of out-of-domain data and language model (LM) vocabulary. More centrally, however, we extend the work of [2] by using word associations based on a broad definition of similarity to extend these language models. With this extension, we do not compare solely the exactly matching words from in-domain and out-of-domain corpora, but also their semantically associated words. These semantic associations can be inferred, as in the example of this paper, through the use of pre-existing non-domain-specific parallel and/or monolingual corpora, or through hand-made thesauri. Then, with a small amount of in-domain data, we use the aforementioned extended language models to rank and select out-of-domain sentences. The starting point and reference of our work is that found in [2], which is to our knowledge one of the most recent and popular methods in a series of methods on data selection [3, 4, 5]. Their approach assumes the availability of enough in-domain data to train a reasonable in-domain LM, which is used to compute a cross-entropy score for the out-of-domain sentences. The sentence is also scored by another, out-of-domain LM resulting from a similar-sized random out-of-domain sample. If the difference between these two scores exceeds a certain threshold the sentence is retained, the threshold being tuned on a small heldout in-domain set. This approach can be qualified as one based on the perplexity of the out-of-domain data. The in-domain data used in [2] is the EPPS corpus, which contains more than one million sentences. This stands in contrast to the lecture case with very specific domains and very limited data sizes. The authors report their results in terms of perplexity, for which their technique outperforms a baseline selection method by twenty absolute points. Their approach has been shown to be effective for selecting LM training data, at least from the perspective of a Statistical Machine Translation (SMT) system with a specific domain task [6, 7, 8]. We note that the main task of these systems was to translate TED talks.
The work in [2] was extended to parallel data selection by [9, 10]. However, the latter work concludes that the approach is less effective in the parallel case. The approach of differential LM scores used in the aforementioned papers has a long history in the information retrieval (IR) domain [11, 12]. However, only unigram language models are considered in the context of IR, since word order is meaningless in this task. Enriching the LM capability by incorporating word relationships has also been proposed in IR and is referred to as a translation model therein [13, 14]. More closely related to our approach, [15] uses word similarities to extend LMs in all orders. They show that extended LMs with properly computed word similarities significantly improve performance, at least in a speech recognition task. The translation of talks and lectures between natural languages has gained attention in recent years, with events such as the International Workshop on Spoken Language Translation (IWSLT) sponsoring evaluations of lecture translation systems for such material as TED talks. From the perspective of Automatic Speech Recognition (ASR), talks and lectures are an interesting domain where the current state of the art can be advanced, as the style of speaking is thought to lie somewhere between spontaneous and read speech. As noted previously, university lectures in particular are very domain-specific and thus in-domain data tends to be quite limited. The typical approach for language modeling in such a scenario is to include as much data as possible, both in- and out-of-domain, and allow weighted interpolation to select the best mixture based on some heldout set. However, if a satisfactory method could be found to choose only those parts of the out-of-domain set most similar to the in-domain set, this would reduce the amount of necessary LM training data. Not only would this save training time, it would also produce LMs that are smaller and possibly more adapted to the task at hand. We perform text selection using variations of our technique and train language models on the resulting selected data. These LMs are then evaluated in terms of their perplexity on a heldout set, the word-error-rate of a speech recogniser, and an SMT system using the LM. We also apply the technique of [2] to our selection task as a reference. The remainder of the paper is structured as follows. In section 2 we describe the theory behind our enhancements to the standard selection algorithm. First, we discuss our method of intelligently selecting the out-of-domain LM used for cross-entropy selection. Next, we discuss our experiments with a more careful selection of the cross-entropy in-domain and out-of-domain language model vocabularies. In section 3.1 we introduce our association-based approach. We describe how we compute lexicons and how we use them to extend the cross-entropy language models. The results of our experiments are presented in section 5. We end the paper with section 6, in which we draw conclusions and discuss future work.
0
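The cross-entropy-difference selection of [2], which the preceding introduction takes as its starting point and reference, can be sketched as below. Unigram models with add-one smoothing and the toy sentences stand in for the real n-gram LMs and corpora; the threshold would in practice be tuned on a held-out in-domain set.

```python
# Hedged sketch: score each out-of-domain sentence by the difference between
# its cross-entropy under an in-domain LM and under an LM built from a random
# out-of-domain sample, and keep it if the difference falls below a threshold.
import math
from collections import Counter

def train_unigram(sentences):
    counts = Counter(w for s in sentences for w in s.split())
    total, vocab = sum(counts.values()), len(counts)
    return lambda w: math.log((counts[w] + 1) / (total + vocab))   # add-one smoothing

def cross_entropy(sentence, lm):
    words = sentence.split()
    return -sum(lm(w) for w in words) / len(words)

in_domain = ["the lecture covers hidden markov models",
             "today we discuss language model adaptation"]
out_of_domain = ["stocks fell sharply on wall street today",
                 "the seminar introduces statistical models of language",
                 "the recipe calls for two cups of flour"]

lm_in = train_unigram(in_domain)
lm_out = train_unigram(out_of_domain)      # stands in for the random out-of-domain sample

threshold = 0.0                            # would be tuned on a held-out in-domain set
selected = [s for s in out_of_domain
            if cross_entropy(s, lm_in) - cross_entropy(s, lm_out) < threshold]
print(selected)
```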
Fact verification aims to verify whether a fact is entailed or refuted by given evidence, and has attracted increasing attention. Recent research mainly focuses on unstructured text as the evidence and ignores evidence in structured or semi-structured formats. A recently proposed dataset, TABFACT (Wenhu Chen and Wang, 2020), fills this gap: it is designed for the table-based fact verification problem, namely, verifying whether a statement is correct given semi-structured table evidence. It is well accepted that symbolic information (such as count and only) plays a great role in understanding statements over semi-structured evidence (Wenhu Chen and Wang, 2020). However, most existing approaches for fact verification (Thorne et al., 2018; Nie et al., 2019; Zhong et al., 2020b; Soleimani et al., 2020) focus on the understanding of natural language, namely, linguistic reasoning, but fail to consider symbolic information, which plays an important role in complex reasoning (Liang et al., 2017; Dua et al., 2019). Due to the diversity of natural language expressions, it is difficult to capture symbolic information effectively from natural language directly. Consequently, how to leverage symbolic information effectively becomes a crucial problem. To alleviate this problem, Zhong et al. (2020a) propose a graph module network that concatenates graph-enhanced linguistic-level representations and program-guided symbolic-level representations together to predict the labels. However, their method focuses on the representation of symbolic information, rather than taking advantage of the combination of both types of information. More specifically, we believe that the concatenation operation between the two types of representations is not effective enough to leverage the linguistic information and symbolic information to perform reasoning. In recent studies, graph neural networks show their powerful ability in dealing with semi-structured data (Bogin et al., 2019a; Bogin et al., 2019b). Under this consideration, we propose to use graph neural networks that learn to combine linguistic information and symbolic information in a simultaneous fashion. Since the representations of different types of information fall in different embedding spaces, a heterogeneous graph structure is suitable to reason and aggregate over different types of nodes to combine different types of information. [Figure 1: Example of the TABFACT dataset. Given a table and a statement, the goal is to predict whether the label is ENTAILED or REFUTED; the verification is expected to combine both linguistic information in the statement and the table and symbolic information in the programs. A program is a kind of LISP-like logical form; the program synthesis and selection process is described in Section 3.2.] In this paper, we propose a heterogeneous graph-based neural network for table-based fact verification named HeterTFV, to learn to combine linguistic information and symbolic information. Given a statement and a table, we first generate programs with the latent program algorithm (LPA) proposed by Wenhu Chen and Wang (2020). After that, we construct a program graph to capture the inner structure of the programs and use a gated graph neural network to encode the programs to learn their semantic compositionality.
Then a heterogeneous graph is constructed with statement nodes, table nodes, and program nodes to incorporate both linguistic information and symbolic information, which is expected to exploit the structure in the table and build connections among the statement, table, and programs. Finally, a graph-based neural network is proposed to reason over the constructed heterogeneous graph, which enables the message passing processes of different types of nodes to achieve the purpose of combining linguistic information and symbolic information. We conduct experiments on TABFACT (Wenhu Chen and Wang, 2020), a large-scale benchmark dataset for table-based fact verification. Experimental results show that our model outperforms all baselines and achieves state-of-the-art performance. In summary, the main contributions of this paper are three-fold:• We construct a heterogeneous graph by introducing program nodes, to incorporate both linguistic information and symbolic information.• We propose a graph-based approach to reason over the constructed heterogeneous graph to perform different types of message passing processes, which makes for an effective combination of linguistic information and symbolic information.• Experimental results on the TABFACT dataset illustrate the advantage of our proposed heterogeneous graph-based approach: our model outperforms all the baseline systems and achieves new state-of-the-art performance.
0
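As a generic illustration (not the authors' exact HeterTFV architecture), the sketch below performs one round of message passing over a small heterogeneous graph of statement, table and program nodes, with a separate projection per source-node type so that messages from different kinds of nodes are transformed differently before aggregation. The dimensions, the toy graph and the final classifier are assumptions made only for the example.

```python
# Hedged sketch of heterogeneous message passing: each node aggregates
# messages from its neighbours, projected by a matrix chosen according to the
# neighbour's node type (statement / table / program).
import numpy as np

rng = np.random.RandomState(0)
d = 8
node_types = ["statement", "table", "table", "program"]       # one entry per node
H = rng.randn(len(node_types), d)                             # initial node embeddings
edges = [(0, 1), (0, 2), (0, 3), (1, 3), (2, 3)]              # undirected toy graph

# One projection matrix per source-node type.
W = {t: rng.randn(d, d) / np.sqrt(d) for t in set(node_types)}

def propagate(H):
    new_H = np.zeros_like(H)
    degree = np.zeros(len(H))
    for i, j in edges:
        for src, dst in ((i, j), (j, i)):
            new_H[dst] += H[src] @ W[node_types[src]]          # type-specific message
            degree[dst] += 1
    new_H /= np.maximum(degree, 1)[:, None]                    # mean aggregation
    return np.tanh(new_H + H)                                  # residual + nonlinearity

H = propagate(H)
verdict_logit = H[0] @ rng.randn(d)   # the statement node feeds the ENTAILED/REFUTED classifier
print(verdict_logit)
```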
Hybrid representation systems have been explored before [9, 24, 31], but until now only one has been used in an extensive natural language processing system. KL-TWO [31], based on a propositional logic, was at the core of the mapping from formulae to lexical items in the Penman generation system [28]. In this paper we report some of the design decisions made in creating a hybrid of an intensional logic with a taxonomic language for use in Janus, BBN's natural language system, consisting of the IRUS-II understanding components [5] and the Spokesman generation components. To our knowledge, this is the first hybrid approach using an intensional logic, and the first time a hybrid representation system has been used for understanding. In Janus, the meaning of an utterance is represented as an expression in WML (World Model Language) [15], which is an intensional logic. However, a logic merely prescribes the framework of semantics and of ontology. The descriptive constants, that is the individual constants (functions with no arguments), the other function symbols, and the predicate symbols, are abstractions without any detailed commitment to ontology. (We will abbreviate descriptive constants throughout the remainder of this paper as constants.) Axioms stating the relationships between the constants are defined in NIKL [8, 22]. We wished to explore whether a language with limited expressive power but fast reasoning procedures is adequate for core problems in natural language processing. The NIKL axioms constrain the set of possible models for the logic in a given domain. Though we have found clear examples that argue for more expressive power than NIKL provides, 99.9% of the examples in our expert system and data base applications have fit well within the constraints of NIKL. Based on our experience and that of others, the axioms and limited inference algorithms can be used for classes of anaphora resolution, interpretation of highly polysemous or vague words such as have and with, finding omitted relations in novel nominal compounds, and selecting modifier attachment based on selection restrictions. Sections 2 and 3 describe the rationale for our choices in creating this hybrid. Section 4 illustrates how the hybrid is used in Janus. Section 5 briefly summarizes some experience with domain-independent abstractions for organizing constants of the domain. Section 6 identifies related hybrids, and Section 7 summarizes our conclusions.
0
Automatic word alignment can be defined as the problem of determining translational correspondences at word level given a parallel corpus of aligned sentences. Bilingual word alignment is a fundamental component of most approaches to statistical machine translation (SMT). Dominant approaches to word alignment can be classified into two main schools: generative and discriminative word alignment models.Generative word alignment models, initially developed at IBM (Brown et al., 1993) , and then augmented by an HMM-based model (Vogel et al., 1996) , have provided powerful modeling capability for word alignment. However, it is very difficult to incorporate new features into these models. Discriminative word alignment models, based on discriminative training of a set of features (Liu et al., 2005; Moore, 2005) , on the other hand, are more flexible to incorporate new features, and feature selection is essential to the performance of the system. Syntactic annotation of bilingual corpora, which can be obtained more efficiently and accurately with the advances in monolingual language processing, is a potential information source for word alignment tasks. For example, Part-of-Speech (POS) tags of source and target words can be used to tackle the data sparseness problem in discriminative word alignment (Liu et al., 2005; Blunsom and Cohn, 2006) . Shallow parsing has also been used to provide relevant information for alignment (Ren et al., 2007; Sun et al., 2000) . Deeper syntax, e.g. phrase or dependency structures, has been shown useful in generative models (Wang and Zhou, 2004; Lopez and Resnik, 2005 ), heuristic-based models (Ayan et al., 2004; Ozdowska, 2004) and even for syntactically motivated models such as ITG (Wu, 1997; Cherry and Lin, 2006) .In this paper, we introduce an approach to improve word alignment by incorporating syntactic dependencies. Our approach is motivated by the fact that words tend to be dependent on each other. If we can first obtain a set of reliable anchor links, we could take advantage of the syntactic dependencies relating unaligned words to aligned anchor words to expand the alignment. Figure 1 gives an illustrating example. Note that the link (2, 4) can be easily identified, but the link involving the fourth Chinese word (a function word denoting 'time') (4, 4) is hard. In such cases, we can make use of the dependency relationship ('tclause') between c 2 and c 4 to help the alignment process. Given such an observation, our model is composed of two related alignment models. The first one is an anchor alignment model which is used to find a set of anchor links; the other one is a syntax-enhanced alignment model aiming to process the words left unaligned after anchoring. The remainder of this paper is organized as follows. In Section 2, we introduce our syntaxenhanced discriminative word alignment approach. The feature functions used are described in Section 3. Experimental setting and results are presented in Section 4 and 5 respectively. In Section 6, we compare our approach with other related word alignment approaches. Section 7 concludes the paper and gives avenues for future work.
0
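The two-stage idea sketched in the preceding introduction, anchor first and then let syntactic dependencies guide the remaining links, could be illustrated roughly as follows. The data structures and the window-based candidate proposal are our own simplifications; in the paper the second stage is a trained syntax-enhanced discriminative model rather than this placeholder heuristic.

```python
# Hedged sketch: given reliable anchor links and a source-side dependency
# parse, propose candidate target positions for an unaligned source word by
# following its dependency relation to an anchored head and looking near that
# head's aligned target position.
anchor_links = {2: 4}                      # source index -> target index (e.g. the (2, 4) link)
dependencies = {4: (2, "tclause")}         # dependent source index -> (head source index, relation)
target_len = 6

def candidate_links(unaligned_src):
    candidates = []
    for s in unaligned_src:
        head, rel = dependencies.get(s, (None, None))
        if head in anchor_links:
            t_anchor = anchor_links[head]
            # propose target positions in a small window around the anchor's target side
            for t in range(max(0, t_anchor - 2), min(target_len, t_anchor + 3)):
                candidates.append((s, t, rel))
    return candidates

print(candidate_links(unaligned_src=[4]))
```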
Keywords are words (or multi-word expressions) that best describe the subject of a document, effectively summarise it and can also be used in several document categorization tasks. In online news portals, keywords help with efficient retrieval of articles when needed. Similar keywords characterise articles of similar topics, which can help editors to link related articles, journalists to find similar articles and readers to retrieve articles of interest when browsing the portals. For journalists, manually assigning tags (keywords) to articles is a demanding task, and high-quality automated keyword extraction has proven to be one of the components of the news digitalization process that many media houses seek. The task of keyword extraction can generally be tackled in an unsupervised way, i.e., by relying on frequency-based statistical measures (Campos et al., 2020) or graph statistics (Škrlj et al., 2019), or with a supervised keyword extraction tool, which requires a training set of sufficient size from an appropriate domain. While supervised methods tend to work better due to their ability to adapt to the specifics of the syntax, semantics, content, genre and keyword assignment regime of a specific text (Martinc et al., 2020a), their training for less-resourced languages is problematic due to the scarcity of large manually annotated resources. For this reason, studies about supervised keyword extraction conducted on less-resourced languages are still very rare. To overcome this research gap, in this paper we focus on supervised keyword extraction for three less-resourced languages, Croatian, Latvian, and Estonian, and one fairly well-resourced language (Russian), and conduct experiments on datasets of media partners in the EMBEDDIA project. The code for the experiments is made available on GitHub under the MIT license. In media house environments, automatic keyword extraction systems are expected to return a diverse list of keyword candidates (of constant length), which is then inspected by a journalist who manually selects appropriate candidates. While the state-of-the-art supervised approaches in most cases offer good enough precision for this type of usage as a recommendation system, the recall of these systems is nevertheless problematic. Supervised systems learn how many keywords should be returned for each news article on the gold standard train set, which generally contains only a small number of manually approved candidates for each news article. For example, among the datasets used in our experiments (see Section 3), the Russian train set contains the most (on average 4.44) present keywords (i.e., keywords which appear in the text of the article and can be used for training of the supervised models) per article, while the Croatian test set contains only 1.19 keywords per article. This means that for Croatian, the model will learn to return around 1.19 keywords for each article, which is not enough. To solve this problem we show that we can improve the recall of the existing supervised keyword extraction system by:• Proposing an additional TF-IDF tagset matching technique, which finds additional keyword candidates by ranking the words in the news article that have appeared in the predefined keyword set containing words from the gold standard train set.
The new hybrid system first checks how many keywords were returned by the supervised approach and, if the number is smaller than needed, the list is expanded by the best-ranked keywords returned by the TF-IDF-based extraction system.• Combining the outputs of several state-of-the-art supervised keyword extraction approaches. The rest of this work is structured as follows: Section 2 presents the related work, while Section 3 describes the datasets on which we evaluate our method. Section 4 describes our proposed method with all corresponding steps. The experiment settings are described in Section 5 and the evaluation of the proposed methods is shown in Section 6. The conclusions and the proposed further work are presented in Section 7.
0
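A rough rendering of the TF-IDF tagset-matching expansion described above is given below: when the supervised extractor returns too few keywords, words of the article that also occur in the train-set keyword vocabulary are ranked by TF-IDF and appended. The tokenisation, the toy corpus and the fixed list length are assumptions made for the sake of the example, not the system's exact implementation.

```python
# Hedged sketch: expand a too-short supervised keyword list with article words
# that appear in the gold-standard keyword vocabulary, ranked by TF-IDF.
from sklearn.feature_extraction.text import TfidfVectorizer

def expand_keywords(article, supervised_keywords, train_keyword_set, corpus, k=5):
    vectorizer = TfidfVectorizer()
    vectorizer.fit(corpus + [article])                    # background corpus + the article itself
    scores = vectorizer.transform([article]).toarray()[0]
    vocab = vectorizer.vocabulary_
    ranked = sorted(
        (w for w in set(article.lower().split())
         if w in train_keyword_set and w in vocab and w not in supervised_keywords),
        key=lambda w: scores[vocab[w]], reverse=True)
    return (list(supervised_keywords) + ranked)[:k]       # fill the list up to k candidates

corpus = ["the government announced new taxes",
          "the football team won the championship",
          "a museum exhibition opened in the capital"]
article = "heavy floods damaged roads and the government promised aid"
print(expand_keywords(article, ["floods"], {"government", "roads", "aid"}, corpus))
```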
While standard information retrieval (IR) systems present the results of a query in the form of a ranked list of relevant documents, question answering (QA) systems attempt to return them in the form of sentences (or paragraphs, or phrases), responding more precisely to the user's request.However, in most state-of-the-art QA systems the output remains independent of the questioner's characteristics, goals and needs. In other words, there is a lack of user modelling: a 10-year-old and a University History student would get the same answer to the question: "When did the Middle Ages begin?". Secondly, most of the effort of current QA is on factoid questions, i.e. questions concerning people, dates, etc., which can generally be answered by a short sentence or phrase (Kwok et al., 2001 ). The main QA evaluation campaign, TREC-QA 1 , has long focused on this type of questions, for which the simplifying assumption is that there exists only one correct answer. Even recent TREC campaigns (Voorhees, 2003; Voorhees, 2004) do not move sufficiently beyond the factoid approach. They account for two types of nonfactoid questions -list and definitional-but not for non-factoid answers. In fact, a) TREC defines list questions as questions requiring multiple factoid answers, b) it is clear that a definition question may be answered by spotting definitional passages (what is not clear is how to spot them). However, accounting for the fact that some simple questions may have complex or controversial answers (e.g. "What were the causes of World War II?") remains an unsolved problem. We argue that in such situations returning a short paragraph or text snippet is more appropriate than exact answer spotting. Finally, QA systems rarely interact with the user: the typical session involves the user submitting a query and the system returning a result; the session is then concluded.To respond to these deficiencies of existing QA systems, we propose an adaptive system where a QA module interacts with a user model and a dialogue interface (see Figure 1 ). The dialogue interface provides the query terms to the QA module, and the user model (UM) provides criteria to adapt query results to the user's needs. Given such information, the goal of the QA module is to be able to discriminate between simple/factoid answers and more complex answers, presenting them in a TREC-style manner in the first case and more appropriately in the second. Related work To our knowledge, our system is among the first to address the need for a different approach to non-factoid (complex/controversial) answers. Although the three-tiered structure of our QA module reflects that of a typical webbased QA system, e.g. MULDER (Kwok et al., 2001 ), a significant aspect of novelty in our architecture is that the QA component is supported by the user model. Additionally, we drastically reduce the amount of linguistic processing applied during question processing and answer generation, while giving more relief to the post-retrieval phase and to the role of the UM.
0
The algorithm described in this paper is concerned with using hidden Markov methods for estimation of the parameters of a stochastic context-free grammar from free text. The Forward/Backward (F/B) algorithm (Baum, 1972) is capable of estimating the parameters of a hidden Markov model (i.e. a hidden stochastic regular grammar) and has been used with success to train text taggers (Jelinek, 1985). In the tagging application the observed symbols are words and their underlying lexical categories are the hidden states of the model. A context-free grammar comprises both lexical (terminal) categories and grammatical (nonterminal) categories. One iterative method of estimation in this case involves parsing each sentence in the training corpus and, for each derivation, accumulating counts of the number of times each rule is used. This method has been used by Fujisaki et al. (1989), and Chitrao & Grishman (1990). A more efficient method is the Inside/Outside algorithm, devised by Baker (1979) for grammars that are expressed in Chomsky normal form. The algorithm described in this paper relaxes the requirement for a grammar to be expressed in a normal form, and it is based on a trellis representation that is closely related to the F/B algorithm, and which reduces to it for finite-state networks. The development of the algorithm has various motivations. Grammars must provide a large coverage to accommodate the diversity of expression present in large collections of unrestricted text. As a result they become more ambiguous. A stochastic grammar provides the capability to resolve ambiguity on a probabilistic basis, providing a practical approach to the problem. It also provides a way of modeling conditional dependence for incomplete grammars, or in the absence of any specific structural information. The latter is exemplified by the approach taken in many current taggers, which have a uniform model of second-order dependency between word categories. Kupiec (1989) has experimented with the inclusion of networks to model mixed-order dependencies. The use of hidden Markov methods is motivated by the flexibility they afford. Text corpora from any domain can be used for training, and there are no restrictions on a grammar due to conventions used during labeling. The methods also lend themselves to multi-lingual application. The representation used by the algorithm can be related to constituent structures used in other parsers such as chart parsers, providing a means of embedding this technique in them. The representation of a grammar and the basic trellis structure are discussed in this section. The starting point is the conventional HMM network in which symbols are generated at states (rather than on transitions) as described in Levinson et al. (1983). Such a network is represented by the parameter set (A, B, I) comprising the transition, output and initial matrices. The states in this kind of network will be referred to as terminal states from now on, and will be represented pictorially with single circles. As a shorthand convenience in what follows, if the circle contains a symbol, then it is assumed that only that symbol is ever generated by the state. (The probability of generating it is then unity, and zero for all other symbols.) A single symbol is generated by a transition to a terminal state. For the grammars considered here, terminal states correspond to lexical categories. To this parameter set we will add four other parameters (N, F, Top, L).
The boolean Top indicates whether the network is to be considered as the top-level network. Only one network may be assigned as the top-level network, and it is analogous to the root symbol of a grammar. The parameter F is the set of final states, specifying the allowable states in which a network can be considered to have accepted a sequence of observations. A different type of state will now be introduced, called a nonterminal state. It represents a reference to another network and is indicated diagrammatically with two concentric circles. When a transition is made to a nonterminal state, the state does not generate any observations per se, but terminal nodes within the referred network do. A nonterminal state may be associated with a sequence of observation symbols, corresponding to the sequence accepted by the underlying network. The parameter N is a matrix which indicates whether a state is a terminal or nonterminal state. Terminal states have a null entry in the matrix, and nonterminal states have a reference to the network which they represent. A grammar is usually composed of several networks, so each one is referred to with a unique label L. Figure 1 shows how rules in Chomsky normal form are represented as networks using the above scheme. The lexical form of the rules is included, illustrating how the left-hand side of a rule corresponds to a network label, and the network structure is associated with the right-hand side. Terminal states are labeled in lower case and nonterminals in upper case. The numbers associated with the states are their initial probabilities, which are also rule probabilities. For terminal nodes in the top-level network, initial probabilities have the same meaning as in the F/B algorithm. For all other networks, an initial probability corresponds to a production probability. States which have a non-zero initial probability will be termed "initial states" from now on. Any sequence recognized by a network must start on an initial state and end on a final state. In Figure 1, final states are designated with the annotation "F". It is useful to define a lookup function W(y) which returns the index k of the vocabulary entry vk matching the word wy at position y in the sentence. The vocabulary entry may be a word or an equivalence class based on categories (Kupiec, 1989). An element of the output matrix B, representing the probability of seeing word wy in terminal state j, is then b(j, W(y)). In addition, three sets will be mentioned: 1. Term(n): the set of terminal states in network n.
0
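To make the extended parameter set concrete, the sketch below renders one possible in-memory representation of a network: the usual (A, B, I) parameters plus the additional N, F, Top and L described above, with the Chomsky-normal-form rule S -> NP VP encoded as a two-state network of nonterminal states. The dictionary-based encoding and field names are our own illustrative choices, not the paper's implementation.

```python
# Hedged sketch of the (A, B, I) + (N, F, Top, L) network representation.
from dataclasses import dataclass

@dataclass
class Network:
    label: str        # L: unique network label (the rule's left-hand side)
    A: dict           # transition probabilities: (state_i, state_j) -> prob
    B: dict           # output probabilities: (state, symbol) -> prob (terminal states only)
    I: dict           # initial-state probabilities, i.e. rule/production probabilities
    N: dict           # state -> referred network label, or None for terminal states
    F: set            # final states
    top: bool = False # True only for the single top-level (root) network

# S -> NP VP with probability 1.0: two nonterminal states, 0 (NP) and 1 (VP).
s_net = Network(
    label="S",
    A={(0, 1): 1.0},
    B={},
    I={0: 1.0},
    N={0: "NP", 1: "VP"},
    F={1},
    top=True,
)
print(s_net)
```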
Sociologists have defined culture as a set of shared understandings, herein called perspectives, adopted by the members of that culture (Bar-Tal, 2000; Sperber and Hirschfeld, 2004). Languages and cultures are strongly correlated (Khaslavsky, 1998; Bracewell and Tomlinson, 2012; Gelman and Roberts, 2017), because individuals communicate with each other through language, which carries aspects of their cultures, experiences, beliefs, and values, and thus shapes their perspectives. A lack of understanding of these perspective differences can lead to biased predictions. Selection bias (Heckman, 1977) can often lead to misinformation as it sometimes ignores facts that do not reflect the entire population intended to be analyzed. For example, to verify a controversial statement like "The free market causes fraud and corruption.", we need to consider the perspectives from various groups (shown in Table 1). [Table 1a: A claim about the free market and government intervention from our test data ("The free market does a much worse job than the government in providing essential services and the fraud and corruption part only gets worse."), with the distributional perspectives of the Chinese (CN) and Japanese (JP) colingual groups: CN perspective, human 72% support, model 79% support; JP perspective, human 17% support, model 15% support. Human opinions and model predictions are highly correlated. Table 1b: Evidence from Wikipedia pages of the colingual groups (CN and JP) that is potentially for or against the claim shown in Table 1a. The CN evidence (translated) states that the government controls the vital economic sectors and a large number of enterprises by state means, protecting a very important part of the national economy through the concept of "state-owned assets"; the JP evidence (translated) states that since 1930 Japan has reassessed individual liberty and market principles, advocating that government intervention in the individual and the market should be minimized, and that since the restructuring of the electric power business in 1950 there are 10 private electric power companies, one in each region. These examples are included in our training data after variation (discussed in Section 4.2); the two examples in the JP corpus are selected from different articles.] Similarly, a sentiment analysis model may fail to capture the correct emotions towards a debatable claim if the claim is viewed differently across different groups, such as the dispute between India and Pakistan regarding Kashmir. In this paper, we focus on distributional differences on controversial topics across groups. For example, within the United States, people have split views (approximately half-half) regarding gun control and abortion, while in China, people are generally against the possession of guns and pro-choice on abortion. Hence, building a culture-aware model that considers groups' distributional perspectives will help improve comprehension and consequently mitigate biases in decision making. We aim to identify colingual groups' distributional perspectives towards a given claim, and to spot claims that provoke such divergence.
As colingual groups are naturally identifiable by the usage of language, we can obviate group detection and the associated errors in the process of group identification. Wikipedia, despite its overall goal of objectivity, has been shown to embed latent cultural biases (Callahan and Herring, 2011). Following these cues, we believe Wikipedia is an ideal source to study diverse perspectives among various colingual groups. Table 1a shows an example claim for which the Chinese and Japanese may have different opinions. Specifically, the Chinese-speaking group tends to support the claim (72% support) while the Japanese-speaking group tends to oppose it (17% support), which is likely due to the different economic/government environments. As shown in Table 1b, we can find evidence from Wikipedia pages that supports or opposes the claim in Table 1a. We learn a perspective model for each colingual group using a collection of Wikipedia pages for English, Chinese and Japanese, and then use these models to identify diverging perspectives for a separate set of claims that are manually curated and are not from Wikipedia. Our contributions are as follows. 1) We propose CLUSTER (CoLingUal PerSpecTive IdentifiER), a module that learns distributional perspectives of colingual groups based on Wikipedia articles. Towards this, we develop a novel procedure to algorithmically generate negative examples (introduced in Section 3.1) based on Wikipedia to train our group models (Section 4.1). 2) We design an evaluation framework to systematically study the effectiveness of the proposed approach by testing our models on self-labeled claims from diverse topics including cuisine, festivals, marriage, corruption, democracy, privacy, etc. (Sections 3.2, 3.3 and 4.3). 3) Comprehensive quantitative and qualitative studies in Chinese, Japanese, and English show that our model outperforms multiple well-crafted baselines and achieves strong correlation with human judgements (Sections 6 and 7).
0
With the rapid development of e-commerce, massive user reviews available on e-commerce platforms are becoming valuable resources for both customers and merchants. Aspect-based sentiment analysis (ABSA) on user reviews is a fundamental and challenging task which attracts interest from both academia and industry (Hu and Liu, 2004; Ganu et al., 2009; Jo and Oh, 2011; Kiritchenko et al., 2014). According to whether the aspect terms are explicitly mentioned in texts, ABSA can be further classified into aspect term sentiment analysis (ATSA) and aspect category sentiment analysis (ACSA); we focus on the latter, which is more widely used in industry. Specifically, given a review "Although the fish is delicious, the waiter is horrible!", the ACSA task aims to infer that the sentiment polarity over the aspect category food is positive while the opinion over the aspect category service is negative. The user interfaces of e-commerce platforms are more intelligent than ever before with the help of ACSA techniques. For example, Figure 1 presents the detail page of a coffee shop on a popular e-commerce platform in China. The upper aspect-based sentiment text-boxes display the aspect categories (e.g., food, sanitation) mentioned frequently in user reviews and the aggregated sentiment polarities on these aspect categories (the orange ones represent positive and the blue ones represent negative). Customers can focus on corresponding reviews effectively by clicking the aspect-based sentiment text-boxes they care about (e.g., the orange filled text-box "卫生条件好" (good sanitation)). Our user survey based on 7,824 valid questionnaires demonstrates that 80.08% of customers agree that the aspect-based sentiment text-boxes are helpful to their decision-making on restaurant choices. Besides, the merchants can keep track of their cuisines and service qualities with the help of the aspect-based sentiment text-boxes. Most Chinese e-commerce platforms such as Taobao, Dianping, and Koubei deploy similar user interfaces to improve user experience. Users also publish their overall 5-star scale ratings together with reviews. Figure 1 displays a sample 5-star rating for the coffee shop. In comparison to fine-grained aspect sentiment, the overall review rating is usually a coarse-grained synthesis of the opinions on multiple aspects. Rating prediction (RP) (Jin et al., 2016; Wu et al., 2019a), which aims to predict the "seeing stars" of reviews, also has wide applications. For example, to ensure the aspect-based sentiment text-boxes are accurate, unreliable reviews should be removed before ACSA algorithms are performed. Given a piece of user review, we can predict a rating for it based on the overall sentiment polarity underlying the text. We assume the predicted rating of the review should be consistent with its ground-truth rating as long as the review is reliable. If the predicted rating and the user rating of a review disagree with each other explicitly, the reliability of the review is doubtful. Figure 2 demonstrates an example review of low reliability. In summary, RP can help merchants to detect unreliable reviews. Therefore, both ACSA and RP are of great importance for business intelligence in e-commerce, and they are highly correlated and complementary. ACSA focuses on predicting the underlying sentiment polarities of a review on different aspect categories, while RP focuses on predicting the user's overall feelings from the review content.
We reckon these two tasks are highly correlated and better performance could be achieved by considering them jointly. As far as we know, current public datasets are constructed for ACSA and RP separately, which limits further joint explorations of ACSA and RP. To address the problem and advance the related research, this paper presents a large-scale Chinese restaurant review dataset for Aspect category Sentiment Analysis and rating Prediction, denoted as ASAP for short. All the reviews in ASAP are collected from the aforementioned e-commerce platform. There are 46,730 restaurant reviews attached with 5-star scale ratings. Each review is manually annotated according to its sentiment polarities towards 18 fine-grained aspect categories. To the best of our knowledge, ASAP is the largest Chinese review dataset targeting both the ACSA and RP tasks. We implement several state-of-the-art (SOTA) baselines for ACSA and RP and evaluate their performance on ASAP. To make a fair comparison, we also perform ACSA experiments on the widely used SemEval-2014 restaurant review dataset (Pontiki et al., 2014). Since BERT (Devlin et al., 2018) has achieved great success in several natural language understanding tasks including sentiment analysis (Sun et al., 2019; Jiang et al., 2019), we propose a joint model that employs the fine-to-coarse semantic capability of BERT. Our joint model outperforms the competing baselines on both tasks. [Figure 1: The user interface of a coffee shop on a popular e-commerce App. The top aspect-based sentiment text-boxes display aspect categories and sentiment polarities; the orange text-boxes are positive, while the blue ones are negative. The reviews mentioning the clicked aspect category (e.g., good sanitation) are shown below with their ratings, and the text spans mentioning the aspect categories are highlighted.] Our main contributions can be summarized as follows. (1) We present a large-scale Chinese review dataset towards aspect category sentiment analysis and rating prediction, named ASAP, including as many as 46,730 real-world restaurant reviews annotated with 18 pre-defined aspect categories. Our dataset has been released at https://github.com/Meituan-Dianping/asap. (2) We explore the performance of widely used models for ACSA and RP on ASAP. (3) We propose a joint learning model for the ACSA and RP tasks. Our model achieves the best results on both the ASAP and SemEval RESTAURANT datasets.
0
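As an illustration of the kind of joint ACSA plus rating-prediction model the preceding introduction motivates, the sketch below shares one sentence encoder between 18 per-aspect sentiment heads and a rating regression head. A small bag-of-embeddings encoder stands in for BERT to keep the example self-contained, and the polarity inventory (four classes) is an assumption; the shared-encoder/multi-head structure is the point being illustrated, not the authors' exact model.

```python
# Hedged sketch of a joint ACSA + rating-prediction model: a shared encoder
# feeds (i) one classification head per aspect category and (ii) a regression
# head for the overall star rating.
import torch
import torch.nn as nn

class JointACSARating(nn.Module):
    def __init__(self, vocab_size, dim=64, n_aspects=18, n_polarities=4):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)   # stand-in for a BERT encoder
        self.aspect_heads = nn.ModuleList(
            [nn.Linear(dim, n_polarities) for _ in range(n_aspects)])  # per-aspect sentiment
        self.rating_head = nn.Linear(dim, 1)            # overall star rating (regression)

    def forward(self, token_ids, offsets):
        h = self.embed(token_ids, offsets)
        aspect_logits = torch.stack([head(h) for head in self.aspect_heads], dim=1)
        rating = self.rating_head(h).squeeze(-1)
        return aspect_logits, rating

model = JointACSARating(vocab_size=1000)
tokens = torch.tensor([3, 17, 52, 8, 99])    # one toy review, already converted to indices
offsets = torch.tensor([0])
aspect_logits, rating = model(tokens, offsets)
print(aspect_logits.shape, rating.shape)     # torch.Size([1, 18, 4]) torch.Size([1])
```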
Query understanding has been an important research area in information retrieval and natural language processing (Croft et al., 2010) . A key part of this problem is entity linking, which aims to annotate the entities in the query and link them to a knowledge base such as Freebase and * Contribution during internship at Microsoft Research.Wikipedia. This problem has been extensively studied over the recent years (Ling et al., 2015; Usbeck et al., 2015; Cornolti et al., 2016) .The mainstream methods of entity linking for queries can be summed up in three steps: mention detection, candidate generation, and entity disambiguation. The first step is to recognize candidate mentions in the query. The most common method to detect mentions is to search a dictionary collected by the entity alias in a knowledge base and the human-maintained information in Wikipedia (such as anchors, titles and redirects) (Laclavik et al., 2014) . The second step is to generate candidates by mapping mentions to entities. It usually uses all possible senses of detected mentions as candidates. Hereafter, we refer to these two steps of generating candidate entities as entity search. Finally, they disambiguate and prune candidate entities, which is usually implemented with a ranking framework.There are two main issues in entity search. First, a mention may be linked to many entities. The methods using entity search usually leverage little context information in the query. Therefore it may generate many completely irrelevant entities for the query, which brings challenges to the ranking phase. For example, the mention "Austin" usually represents the capital of Texas in the United States. However, it can also be linked to "Austin, Western Australia", "Austin, Quebec", "Austin (name)", "Austin College", "Austin (song)" and 31 other entities in the Wikipedia page of "Austin (disambiguation)". For the query "blake shelton austin lyrics", Blake Shelton is a singer and made his debut with the song "Austin". The entity search method detects the mention "austin" using the dictionary. However, while "Austin (song)" is most related to the context "blake shelton" and "lyrics", the mention "austin" may be linked to all the above entities as candidates. Therefore candidate gener-ation with entity search generates too many candidates especially for a common anchor text with a large number of corresponding entities. Second, it is hard to recognize entities with common surface names. The common methods usually define a feature called "link-probability" as the probability that a mention is annotated in all documents. There is an issue with this probability being static whatever the query is. We show an example with the query "her film". "Her (film)" is a film while its surface name is usually used as a possessive pronoun. Since the static link-probability of "her" from all Wikipedia articles is very low, "her" is usually not treated as a mention linked to the entity "Her (film)".In this paper, we propose a novel approach to generating candidates by searching sentences from Wikipedia articles and directly using the humanannotated entities as the candidates. Our approach can greatly reduce the number of candidate entities and obtain the query sensitive prior probability. We take the query "blake shelton austin lyrics" as an example. 
Below we show a sentence from the Wikipedia page of "Austin (song)": [[Austin (song)|Austin]] is the title of a debut song written by David Kent and Kirsti Manna, and performed by American country music artist [[Blake Shelton]]. (Table 1: A sentence in the page "Austin (song)".) In the above sentence, the mentions "Austin" and "Blake Shelton" in square brackets are annotated with the entities "Austin (song)" and "Blake Shelton", respectively. We generate candidates by searching sentences and thus obtain "Blake Shelton" as well as "Austin (song)" from this example. We reduce the number of candidates because many irrelevant entities linked by "austin" do not occur in the returned sentences. In addition, as previous methods generate candidates by searching entities without the query information, "austin" can be linked to "Austin, Texas" with a much higher static link-probability than all other senses of "austin". However, the number of returned sentences that contain "Austin, Texas" is close to the number of sentences that contain "Austin (song)" in our system. We show another example with the query "her film" in Table 2. In this sentence, "Her", "romantic", "science fiction", "comedy-drama" and "Spike Jonze" are annotated with the corresponding entities. As "Her" is annotated with "Her (film)" by humans in this example, we have strong evidence to annotate it even if it is usually used as a possessive pronoun with a very low static link-probability. (Table 2: A sentence in the page "Her (film)" with its human-annotated anchors.) We obtain the anchors as well as the corresponding entities and map them to the query after searching similar sentences. Then we build a regression-based framework to rank the candidates. We use a rich set of features, such as link-probability, context-matching, word embeddings, and relatedness among candidate entities as well as their related entities. We evaluate our method on the ERD14 and GERDAQ datasets. Experimental results show that our method outperforms state-of-the-art systems and yields 75.0% and 56.9% in terms of the F1 metric on the ERD14 dataset and the GERDAQ dataset, respectively.
0
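The candidate-generation step described in the preceding introduction, retrieving Wikipedia sentences that match the query, reading off their human-annotated anchors, and using anchor counts as a query-sensitive prior, can be sketched as follows. The two in-memory sentences and the crude term-overlap matching are placeholders for a real sentence-level search engine over Wikipedia.

```python
# Hedged sketch: generate entity candidates for a query by searching annotated
# sentences and counting the [[entity|mention]] anchors they contain.
import re
from collections import Counter

sentences = [
    "[[Austin (song)|Austin]] is a debut song performed by [[Blake Shelton]].",
    "[[Austin, Texas|Austin]] is the capital of the U.S. state of [[Texas]].",
]

anchor_re = re.compile(r"\[\[([^\]|]+)(?:\|([^\]]+))?\]\]")   # [[entity|mention]] or [[entity]]

def candidates(query):
    terms = set(query.lower().split())
    counts = Counter()
    for sent in sentences:
        # strip the markup to get plain text for (very crude) sentence matching
        plain = anchor_re.sub(lambda m: m.group(2) or m.group(1), sent).lower()
        if terms & set(re.findall(r"\w+", plain)):
            for entity, _mention in anchor_re.findall(sent):
                counts[entity] += 1
    total = sum(counts.values())
    return {e: c / total for e, c in counts.items()}          # query-sensitive prior

print(candidates("blake shelton austin lyrics"))
```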
The resolution of lexical ambiguity in language is essential to true language understanding. It has been shown to improve the performance of such applications as statistical machine translation (Chan et al., 2007; Carpuat and Wu, 2007) , and crosslanguage information retrieval and question answering (Resnik, 2006) . Word sense induction (WSI) is the task of automatically grouping the target word's contexts of occurrence into clusters corresponding to different senses. Unlike word sense disambiguation (WSD), it does not rely on a pre-existing set of senses.Much of the classic bottom-up WSI and thesaurus construction work -as well as many successful systems from the recent SemEval competitions -have explicitly avoided the use of existing knowledge sources, instead representing the disambiguating context using bag-of-words (BOW) or syntactic features (Schütze, 1998; Pantel and Lin, 2002; Dorow and Widdows, 2003; Pedersen, 2010; Kern et al., 2010) .This particularly concerns the attempts to integrate the information about semantic classes of words present in the sense-selecting contexts. Semantic roles (such as those found in PropBank (Palmer et al., 2005) or FrameNet (Ruppenhofer et al., 2006) ) tend to generalize poorly across the vocabulary. Lexical ontologies (and WordNet (Fellbaum, 2010) in particular) are not always empirically grounded in language use and often do not represent the relevant semantic distinctions. Very often, some parts of the ontology are better suited for a particular disambiguation task than others. In this work, we assume that features based on such ontology segments would correlate well with other context features.Consider, for example, the expression "to deny the visa". When choosing between two senses of 'deny' ('refuse to grant' vs. 'declare untrue'), we would like our lexical ontology to place 'visa' in the same subtree as approval, request, recognition, commendation, endorsement, etc. And indeed, WordNet places all of these, including 'visa', under the same node. However, their least common subsumer is 'message, content, subject matter, substance', which also subsumes 'statement', 'significance', etc., which would activate the other sense of 'deny'. In other words, the distinctions made at this level in the nominal hierarchy in WordNet would not be useful in disambiguating the verb 'deny', unless our model can select the appropriate nodes of the subtree rooted at the synset 'message, content, subject matter, substance'. Our model should also infer the associations between such nodes and other context relevant features that select the sense 'refuse to grant' (such as the presence of ditransitive constructions, etc.) In this paper, we use the topic modeling approach to identify ontology-derived features that can prove useful for sense induction. Bayesian approaches to sense induction have recently been shown to perform well in the WSI task. In particular, Brody and Lapata (2009) have adapted the Latent Dirichlet Allocation (LDA) generative topic model to WSI by treating each occurrence context of an ambiguous word as a document, and the derived topics as sense-selecting context patterns represented as collections of features. They applied their model to the SemEval2007 set of ambiguous nouns, beating the best-performing system in its WSI task. 
Yao and Van Durme (2011) used a non-parametric Bayesian model, the Hierarchical Dirichlet Process (HDP), for the same task and showed that, following the same basic assumptions, it performs comparably, with the advantage of avoiding the extra tuning for the number of senses. We investigate the question of how well such models would perform when some knowledge of syntactic structure and semantics is added into the system, in particular, when bag-of-words features are supplemented by the knowledge-enriched syntactic features. We use the SemEval2010 WSI task data for the verbs for evaluation (Manandhar et al., 2010). This data set choice is motivated by the fact that (1) for verbs, sense-selecting context patterns often most directly depend on the nouns that occur in syntactic dependencies with them, and (2) the nominal parts of WordNet tend to have much cleaner ontological distinctions and property inheritance than, say, the verb synsets, where the subsumption hierarchy is organized according to how specific the verb's manner of action is. The choice of the SemEval2010 verb data set was further motivated by the fact that the SemEval2007 verb data is dominated by the most frequent sense for many target verbs, with 11 out of 65 verbs only having one sense in the combined test and training data. All verbs in the SemEval2010 verb data set have at least two senses in the data provided. The implications of this work are two-fold: (1) we confirm independently on a different data set that parametric and non-parametric models perform comparably, and outperform the current state-of-the-art methods using the baseline bag-of-words feature set; and (2) we show that integrating populated syntactic and ontology-based features directly into the generative model consistently leads to statistically significant improvement in accuracy. Our system outperforms both the bag-of-words baselines and the best-performing system in the SemEval2010 competition. The remainder of the paper is organized as follows. In Section 2, we review the relevant related work. Sections 3 and 4 give the details on how the models are defined and trained, and describe the incorporated feature classes. Section 5 describes the data used to conduct the experiments. Finally, in Section 6, we describe the evaluation methods and present and discuss the experimental results.
0
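A minimal sketch of the Brody-and-Lapata-style setup the passage describes: each occurrence context of an ambiguous word is treated as a pseudo-document and the induced LDA topics are read as senses. The toy contexts, the plain bag-of-words features, and the fixed number of senses are assumptions; the paper's own models additionally feed syntactic and ontology-derived features into the generative model, and the HDP variant avoids fixing the sense count in advance.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy occurrence contexts of the target verb "deny" (one pseudo-document each).
contexts = [
    "the embassy may deny the visa request of the applicant",
    "officials deny the permit and refuse the application",
    "the minister denied the allegation and called it untrue",
    "the spokesman denies the rumour and the claim",
    "the bank denied the loan application of the customer",
    "she denied the accusation during the interview",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(contexts)

# Assume two senses ('refuse to grant' vs. 'declare untrue'); a non-parametric
# model such as HDP would not need this number fixed in advance.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)

vocab = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [vocab[i] for i in topic.argsort()[::-1][:5]]
    print(f"sense {k}: {top}")
print("context-to-sense assignments:", doc_topic.argmax(axis=1))
```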
Comparable corpora are the main alternative to the use of parallel corpora for the task of bilingual lexicon extraction, particularly in specialized and technical domains for which parallel texts are usually unavailable or difficult to obtain. Although it is easier to build comparable corpora (Talvensaari et al., 2007), specialized comparable corpora are often of modest size (around 1 million words) in comparison with general-domain comparable corpora (up to 100 million words) (Morin and Hazem, 2016). The main reason is related to the difficulty of obtaining many specialized documents in a language other than English. For example, a single query on the Elsevier portal 1 for documents containing in their title the term "breast cancer" returns 40,000 documents in English, whereas the same query returns 1,500 documents in French, 693 in Spanish and only 7 in German. The historical context-based approach dedicated to the task of bilingual lexicon extraction from comparable corpora, also known as the standard approach, relies on the simple observation that a word and its translation tend to appear in the same lexical contexts (Fung, 1995; Rapp, 1999). In this approach, each word is described by its lexical contexts in both source and target languages, and words in a translation relationship should have similar lexical contexts in both languages. To enhance bilingual lexicon induction, recent approaches use more sophisticated techniques such as topic models based on bilingual Latent Dirichlet Allocation (BiLDA) (Vulic and Moens, 2013b; Vulic and Moens, 2013a) or bilingual word embeddings based on neural networks (Gouws et al., 2014; Chandar et al., 2014; Vulic and Moens, 2015; Vulic and Moens, 2016) (approaches respectively noted Gouws, Chandar and BWESG+cos). All these approaches require at least sentence-aligned/document-aligned parallel data (BiLDA, Gouws, Chandar) or non-parallel document-aligned data at the topic level (BWESG+cos). Since specialized comparable corpora are of small size, sentence-aligned (document-aligned) parallel data are unavailable, and non-parallel document-aligned data at the topic level cannot be provided since specialized comparable corpora usually deal with one single topic. Based on the recent comparison in (Vulic and Moens, 2015; Vulic and Moens, 2016), where the standard approach (noted in their articles as PPMI+cos) performed better in most cases when compared to BiLDA, Gouws and Chandar, and due to the unavailability of non-parallel document-aligned data at the topic level, we only deal with the standard approach and show that our approach drastically improves bilingual terminology extraction when well-selected external data are added. The small size of specialized comparable corpora makes word co-occurrences, which are the basis of the standard approach, unreliable. In this paper, we propose to improve the reliability of word co-occurrences in specialized comparable corpora by adding general-domain data. This idea has already been successfully employed in the machine translation task (Moore and Lewis, 2010; Axelrod et al., 2011; Wang et al., 2014, among others). The approach of using adapted external data, also known as data selection, is often applied in Statistical Machine Translation (SMT) to improve the quality of the language and translation models, and hence, to increase the performance of SMT systems. If data selection has become mainstream in SMT, it is still not the case in the task of bilingual lexicon extraction from specialized comparable corpora.
The majority of the studies in this area support the principle that the quality of the comparable corpus is more important than its size and consequently, increasing the size of specialized comparable corpora by adding out-of-domain documents decreases the quality of bilingual lexicons (Li and Gaussier, 2010; Delpech et al., 2012) . This statement remains true as long as the used data is not adapted to the domain. We propose two data selection techniques based on the combination of a specialized comparable corpus with external resources. Our hypothesis is that word co-occurrences learned from a general-domain corpus for general words (as opposed to the terms of the domain) improve the characterization of the specific vocabulary of the corpus (the terms of the domain). By enriching the general words representation in specialized comparable corpora, we improve their characterization and therefore improve the characterization of the terms of the domain for better discrimination.The remainder of this article is organized as follows: Section 2 describes the standard approach to bilingual lexicon extraction from comparable corpora. Section 3 presents previous works related to the improvements of the standard approach for specialized comparable corpora. Section 4 describes our strategies to improve the characterization of lexical contexts. Section 5 presents the different textual resources used for our experiments: the specialized and general comparable corpora, the bilingual dictionary and the terminology reference lists. Section 6 evaluates the influence of using lexical contexts built from general comparable corpora on the quality of bilingual terminology extraction. Section 7 presents our conclusions.
0
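For reference, here is a small sketch of the standard (context-based) approach the passage builds on: collect co-occurrence context vectors in each language, translate a source word's vector through a seed bilingual dictionary, and rank target words by cosine similarity. The toy corpora, seed dictionary, window size, and use of raw counts instead of an association measure are simplifications, and none of the paper's data-selection strategies are included.

```python
from collections import Counter, defaultdict
from math import sqrt

def context_vectors(sentences, window=3):
    """Co-occurrence counts of each word with its neighbours within a window."""
    vecs = defaultdict(Counter)
    for sent in sentences:
        toks = sent.split()
        for i, w in enumerate(toks):
            for j in range(max(0, i - window), min(len(toks), i + window + 1)):
                if i != j:
                    vecs[w][toks[j]] += 1
    return vecs

def cosine(u, v):
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu, nv = sqrt(sum(x * x for x in u.values())), sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def translate_vector(vec, seed_dict):
    """Map a source-language context vector into the target language."""
    out = Counter()
    for word, count in vec.items():
        for trans in seed_dict.get(word, []):
            out[trans] += count
    return out

# Toy comparable corpora (English source, French target) and seed dictionary.
src = ["breast cancer treatment improves survival",
       "early cancer screening improves survival"]
tgt = ["le traitement du cancer du sein améliore la survie",
       "le dépistage précoce du cancer améliore la survie"]
seed = {"treatment": ["traitement"], "survival": ["survie"], "cancer": ["cancer"]}

src_vecs, tgt_vecs = context_vectors(src), context_vectors(tgt)
query = translate_vector(src_vecs["screening"], seed)
ranking = sorted(tgt_vecs, key=lambda w: cosine(query, tgt_vecs[w]), reverse=True)
print(ranking[:3])  # 'dépistage' (screening) should rank near the top
```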
Short-texts are abundant on the Web and appear in various different formats such as microblogs (Kwak et al., 2010) , Question and Answer (QA) forums, review sites, Short Message Service (SMS), email, and chat messages (Cong et al., 2008; Thelwall et al., 2010) . Unlike lengthy responses that take time to both compose and to read, short responses have gained popularity particularly in social media contexts. Considering the steady growth of mobile devices that are physically restricted to compact keyboards, which are suboptimal for entering lengthy text inputs, it is safe to predict that the amount of short-texts will continue to grow in the future. Considering the importance and the quantity of the short-texts in various web-related tasks, such as text classification (kun Wang et al., 2012; dos Santos and Gatti, 2014) , and event prediction (Sakaki et al., 2010) , it is important to be able to accurately represent and classify short-texts.Compared to performing text mining on longer texts (Guan et al., 2009; Su et al., 2011; Yogatama and Smith, 2014) , for which dense and diverse feature representations can be created relatively easily, handling of shorter texts poses several challenges. The number of features that are present in a given short-text will be a small fraction of the set of all features that exist in all of the train instances. Moreover, frequency of a feature in a short-text will be small, which makes it difficult to reliably estimate the salience of a feature using term frequency-based methods. This is known as the feature sparseness problem in text classification.Feature sparseness is not unique to shorttext classification but also encountered in crossdomain text classification (Blitzer et al., 2006 (Blitzer et al., , 2007 Bollegala et al., 2014) , where the training and test data are selected from different domains with small intersection of feature spaces. In the domain adaptation (DA) setting, a classifier trained on one domain (source) might be agnostic to the features that are unique to a different domain (target), which results in a feature mismatch problem similar to the feature-sparseness problem discussed above.To address the feature sparseness problem encountered in short-text and cross-domain classification tasks, we propose a novel method that computes related features that can be appended to the feature vectors to reduce the sparsity. Specifically, we decompose a feature-relatedness graph into core-periphery (CP) structures, where a core feature (a vertex) is linked to a set of peripheries (also represented by vertices), indicating the connectivity of the graph. This graph decomposition problem is commonly known as the CPdecomposition (Csermely et al., 2013; Rombach et al., 2017; Masuda, 2018, 2017) .Our proposed CP-decomposition algorithm significantly extends existing CP-decomposition methods in three important ways.• First, existing CP-decomposition methods consider unweighted graphs, whereas edges in feature-relatedness graphs are weighted (possibly nonnegative) real-valued featurerelatedness scores such as positive pointwise mutual information (PPMI). 
Our proposed CP-decomposition method can operate on edge-weighted graphs.• Second, considering the fact that in text classification a particular periphery can be related to more than one core, we relax the hard assignment constraints on peripheries and allow a particular periphery attach to multiple cores.• Third, prior work on pivot-based crossdomain sentiment classification methods have used features that are frequent in training (source) and test (target) data as expansion candidates to overcome the feature mismatch problem. Inspired by this, we define coreness of a feature as the pointwise mutual information between a feature and the source/target domains. The CPdecomposition algorithm we propose will then compute the set of cores considering both structural properties of the graph as well as the coreness values computed from the train/test data.To perform feature vector expansion, we first construct a feature-relatedness graph, where vertices correspond to features and the weight of the undirected edge connecting two features represent the relatedness between those two features. Different features and relatedness measures can be flexibly used in the proposed graph construction. In our experiments, we use the simple (yet popular and effective) setting of n-gram features as vertices and compute their relatedness using PPMI. We compute the coreness of features as the sum of the two PPMI values between the feature and the source, and the feature and the target domains. 1 Next, CP-decomposition is performed on this feature-relatedness graph to obtain a set of core-periphery structures. We then rank the set of peripheries of a particular core by their PPMI values, and select the top-ranked peripheries as the expansion features of the core. We expand the core features in training and train a logistic regressionbased binary classifier using the expanded feature vectors, and evaluate its performance on the expanded test feature vectors.We evaluate the effectiveness of the proposed method using benchmark datasets for two different tasks: short-text classification and crossdomain sentiment classification. Experimental results on short-text classification show that the proposed method consistently outperforms previously proposed feature expansion-based methods for short-text classification and even some of the sentence embedding learning-based methods. Moreover, the consideration of coreness during the CP-decomposition improves the text classification accuracy. In cross-domain sentiment classification experiments, the proposed method outperforms previously proposed pivot-based methods such as the structural correspondence learning (SCL) (Blitzer et al., 2006) .
0
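To make the feature-sparseness remedy concrete, here is a stripped-down sketch that builds an edge-weighted PPMI feature-relatedness graph and expands a short feature vector with its strongest neighbours. It deliberately omits the paper's actual contributions (the CP-decomposition itself, soft periphery assignment, and coreness scores computed from source/target domains); the toy documents are invented.

```python
from collections import Counter
from itertools import combinations
from math import log

docs = [  # toy short texts (already tokenised)
    ["great", "battery", "life"],
    ["battery", "drains", "fast"],
    ["great", "screen", "resolution"],
    ["screen", "cracked", "fast"],
]

# Document-level co-occurrence and marginal counts for unigram features.
pair_counts, feat_counts = Counter(), Counter()
for doc in docs:
    feats = set(doc)
    feat_counts.update(feats)
    pair_counts.update(frozenset(p) for p in combinations(sorted(feats), 2))
total = len(docs)

def ppmi(f1, f2):
    joint = pair_counts[frozenset((f1, f2))]
    if joint == 0:
        return 0.0
    val = log((joint / total) / ((feat_counts[f1] / total) * (feat_counts[f2] / total)))
    return max(val, 0.0)

# Edge-weighted feature-relatedness graph: feature -> {neighbour: PPMI weight}.
graph = {f: {} for f in feat_counts}
for f1, f2 in combinations(feat_counts, 2):
    w = ppmi(f1, f2)
    if w > 0:
        graph[f1][f2] = graph[f2][f1] = w

def expand(doc, k=2):
    """Append the k strongest related features of the features already present."""
    expansions = Counter()
    for f in doc:
        for g, w in sorted(graph[f].items(), key=lambda kv: kv[1], reverse=True)[:k]:
            if g not in doc:
                expansions[g] += w
    return list(doc) + [g for g, _ in expansions.most_common(k)]

print(expand(["battery"]))
```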
To automatically find or track the attitudes, feelings and evaluations in texts, opinion mining and sentiment analysis have been extensively studied from different perspectives (Pang and Lee, 2008) . With the ever-growing number of Chinese users (over half a billion users only in mainland China), the amount of web opinions in Chinese is rapidly increasing, and analyzing them is an important task. However, research and resources about the Chinese opinion analysis lag behind those for extensively studied languages, such as English. Therefore, opinion analyzers, which can deal with Chinese web data of a great variety of topics and styles, are especially in great need.To meet this requirement, we introduce a Chinese Evaluative Information Analyzer (CEIA) that can mine a wide variety of evaluative information from Chinese web documents. We use evaluative information as a unifying term for the information concerning attitudes, opinions and sentiments, and so on, which is useful to provide a view of evaluation.The system automatically analyzes Chinese evaluative information through the following processes:(1) extracts evaluative expressions;(2) identifies evaluation holders;(3) extracts evaluation targets; (4) determinates evaluation types; (5) determinates the sentiment polarities of the evaluative expressions.Firstly, CEIA can analyze a more diverse and richer set of evaluative information than the previous studies for Chinese. The previous research on Chinese opinion analysis focuses on subjective expressions (opinionated sentences) (Liu, 2010) , as in the Multilingual Opinion Analysis Task (MOAT) of NTCIR (Seki et al., 2010) . However, some objective expressions that describe positive or negative facts are also informative in that they express some kinds of evaluations. Also, requests are some kinds of representations of opinions or attitudes. Consider the following sentences, 1. Many people are using mobile phone A. 2. The users hope company A will offer them a security lock function.The sentence 1 suggests that "mobile phone A" is popular and has been chosen by many people. The sentence 2 claims that the company A does not offer a security lock function now and the user request the company to offer it. In some sense, this sentence also includes the evaluation or unsatisfied feelings of the users. We want to consider such cases as "implicit" evaluations for "mobile phone A" and "company A", in addition to subjective expressions such as "I love mobile phone A".To the best of our knowledge, this is the first paper that treats the above implicit evaluations in Chinese evaluative information analysis. Implicit evaluations have been considered by Nakagawa et al. (2008) for Japanese. They presented the study about extracting subjective and objective Japanese evaluative expressions from the web and their work was used in WISDOM system (Akamine et al., 2010) 1 , and shown to be useful to support users' judgement of information credibility. Inspired by their work, we adopt the task definition and expand the research scope of Chinese evaluation information analysis.Secondly, CEIA can deal with the data in diverse topics and writing styles. The existing studies about Chinese opinion analysis are domain-limited. For example, Chinese Opinion Analysis Evaluation (COAE) (Zhao et al., 2008) mainly deals with opinion analysis of reviews. MOAT (Seki et al., 2010 ) deals with the analysis of news articles, which are written in a formal writing style. 
To make our system more robust to the web data of a great variety of topics and styles, we constructed an original annotated Chinese evaluative information corpus whose sentences are extracted from web pages of wide range of topics and styles. CEIA consists of many machine learning modules such as CRFs and SVMs and the corpus was used to train these modules, resulting in a robust evaluative information analyzer. To achieve high system performance is also a primal goal of evaluative information analysis. In this work, we introduce new features to improve the performance. Specifically, syntactic dependency features, semantic class features and distance features are added to the baseline models. To demonstrate the performance of our system and the effectiveness of our new features, we conducted a series of experiments on the Chinese evaluative information corpus.
0
Automatic semantic interpretations of natural language text rely on (1) semantic theories that capture the subtleties employed by human communications; (2) lexico-semantic resources that encode various forms of semantic knowledge; and (3) computational methods that model the selection of the optimal interpretation derived from the textual data. Two of the SemEval 2007 tasks, namely Task 4 (Classification of Semantic Relations between Nominals) and Task 8 (Metonymy Resolution), employed distinct theories for the interpretation of their corresponding semantic phenomena, but, nevertheless, they also shared several lexico-semantic resources, and, furthermore, both these tasks could have been cast as classification problems, in the vein of most of the recent work in computational semantic processing. Based on this observation, we have designed and implemented a semantic architecture that was used in both tasks. In Section 2 of this paper we give a brief description of the semantic theories corresponding to each of the two tasks, while in Section 3 we detail the semantic architecture. Section 4 describes the experimental results and evaluation. We have used three lexico-semantic resources: (i) the WordNet lexico-semantic database; (ii) VerbNet; and (iii) the Lexical Conceptual Structure (LCS) database. Used only by Task 4, WordNet is a lexico-semantic database created at Princeton University 1 (Fellbaum, 1998), which encodes a vast majority of the English nouns, verbs, adjectives and adverbs, and groups synonym words into synsets. VerbNet 2 is a broad-coverage, comprehensive verb lexicon created at the University of Pennsylvania, compatible with WordNet, but with explicitly stated syntactic and semantic information, using Levin verb classes (Levin, 1993) to systematically construct lexical entities. Classes are hierarchically organized and each class in the hierarchy has its corresponding syntactic frames, semantic predicates and a list of typical verb arguments. The Lexical Conceptual Structure (Traum and Habash, 2000) is a compositional abstraction with language-independent properties. An LCS is a directed graph with a root. Each node is associated with certain information, including a type, a primitive and a field. An LCS captures the semantics of a lexical item. (Example sentences from the Task 4 data followed here, e.g. "Earplugs relieve the discomfort from traveling with a cold allergy or sinus condition." and, labeled 2. INSTRUMENT-AGENCY, "The judge hesitates, gavel poised, shooting them a warning look.")
0
Neural machine translation (NMT) performs end-to-end translation based on a simple encoder-decoder model (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014b) and has now overtaken the classical, complex statistical machine translation (SMT) in terms of performance and simplicity (Sennrich et al., 2016; Luong and Manning, 2016; Cromieres et al., 2016; Neubig, 2016). In NMT, an encoder first maps a source sequence into vector representations and a decoder then maps the vectors into a target sequence (§ 2). This simple framework allows researchers to incorporate the structure of the source sentence as in SMT by leveraging various architectures as the encoder (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014b; Eriguchi et al., 2016b). Most of the NMT models, however, still rely on a sequential decoder based on a recurrent neural network (RNN) due to the difficulty in capturing the structure of a target sentence that is unseen during translation. With the sequential decoder, however, there are two problems to be solved. First, it is difficult to model long-distance dependencies (Bahdanau et al., 2015). A hidden state h_t in an RNN is only conditioned by its previous output y_{t-1}, previous hidden state h_{t-1}, and current input x_t. This makes it difficult to capture the dependency between an older output y_{t-N} and the current output if they are too far apart. This problem can become more serious when the target sequence becomes longer. For example, in Figure 1, when we translate the English sentence into the Japanese one, after the decoder predicts the content word "帰っ (go back)", it has to predict four function words "て (suffix)", "しまい (perfect tense)", "たい (desire)", and "と (to)" before predicting the next content word "思っ (feel)". In such a case, the decoder is required to capture the longer dependencies in a target sentence. Another problem with the sequential decoder is that it is expected to cover multiple possible word orders simply by memorizing the local word sequences in the limited training data. This problem can be more serious in free word-order languages such as Czech, German, Japanese, and Turkish. In the case of the example in Figure 1, the order of the phrase "早く (early)" and the phrase "家へ (to home)" is flexible. This means that simply memorizing the word order in training data is not enough to train a model that can assign a high probability to a correct sentence regardless of its word order. In the past, chunks (or phrases) were utilized to handle the above problems in statistical machine translation (SMT) (Watanabe et al., 2003; Koehn et al., 2003) and in example-based machine translation (EBMT) (Kim et al., 2010). By using a chunk rather than a word as the basic translation unit, one can treat a sentence as a shorter sequence. This makes it easy to capture the longer dependencies in a target sentence. The order of words in a chunk is relatively fixed while that in a sentence is much more flexible. Thus, modeling intra-chunk (local) word orders and inter-chunk (global) dependencies independently can help capture the difference of the flexibility between the word order and the chunk order in free word-order languages. In this paper, we refine the original RNN decoder to consider chunk information in NMT. We propose three novel NMT models that capture and utilize the chunk structure in the target language (§ 3).
Our focus is the hierarchical structure of a sentence: each sentence consists of chunks, and each chunk consists of words. To encourage an NMT model to capture the hierarchical structure, we start from a hierarchical RNN that consists of a chunk-level decoder and a word-level decoder (Model 1). Then, we improve the word-level decoder by introducing inter-chunk connections to capture the interaction between chunks (Model 2). Finally, we introduce a feedback mechanism to the chunk-level decoder to enhance the memory capacity of previous outputs (Model 3).We evaluate the three models on the WAT '16 English-to-Japanese translation task ( § 4). The experimental results show that our best model outperforms the best single NMT model reported in WAT '16 (Eriguchi et al., 2016b) .Our contributions are twofold: (1) chunk information is introduced into NMT to improve translation performance, and (2) a novel hierarchical decoder is devised to model the properties of chunk structure in the encoder-decoder framework.
0
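A minimal PyTorch sketch of the hierarchical idea behind Model 1 above: a chunk-level RNN produces a state for each target chunk, and a word-level RNN generates the words of that chunk from it. Attention, beam search, the inter-chunk connection of Model 2, and the feedback mechanism of Model 3 are all omitted, and the dimensions and teacher-forced interface are assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class HierarchicalChunkDecoder(nn.Module):
    """Chunk-level RNN feeding a word-level RNN (a Model-1-style sketch)."""
    def __init__(self, vocab_size, emb_dim=32, hid_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.chunk_rnn = nn.GRUCell(hid_dim, hid_dim)  # input: last word state of previous chunk
        self.word_rnn = nn.GRUCell(emb_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, chunks, enc_state):
        """chunks: list of LongTensors of gold word ids (teacher forcing, batch size 1)."""
        logits = []
        chunk_state = enc_state            # initialise from the encoder's final state
        word_state = enc_state
        for chunk in chunks:
            # One chunk-level step, conditioned on where the word decoder left off.
            chunk_state = self.chunk_rnn(word_state, chunk_state)
            word_state = chunk_state       # the word decoder starts from the chunk state
            for word_id in chunk:
                emb = self.embed(word_id.view(1))
                word_state = self.word_rnn(emb, word_state)
                logits.append(self.out(word_state))
        return torch.cat(logits, dim=0)    # (num_target_words, vocab_size)

dec = HierarchicalChunkDecoder(vocab_size=20)
enc_state = torch.zeros(1, 64)             # stand-in for the encoder output
chunks = [torch.tensor([3, 4]), torch.tensor([5]), torch.tensor([6, 7, 8])]
print(dec(chunks, enc_state).shape)        # torch.Size([6, 20])
```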
From business people to the everyday person, email plays an increasingly central role in a modern lifestyle. With this shift, e-mail users desire improved tools to help process, search, and organize the information present in their ever-expanding inboxes. A system that ranks e-mails according to the likelihood of containing "to-do" or action-items can alleviate a user's time burden and is the subject of ongoing research throughout the literature. (Figure 1: An E-mail with Action-Item (italics added). From: Henry Hutchins <hhutchins@innovative.company.com>. To: Sara Smith; Joe Johnson; William Woolings. Subject: meeting with prospective customers. "Hi All, I'd like to remind all of you that the group from GRTY will be visiting us next Friday at 4:30 p.m. The schedule is: 9:30 a.m. Informal Breakfast and Discussion in Cafeteria; 10:30 a.m. Company Overview; 11:00 a.m. Individual Meetings (Continue Over Lunch); 2:00 p.m. Tour of Facilities; 3:00 p.m. Sales Pitch. In order to have this go off smoothly, I would like to practice the presentation well in advance. As a result, I will need each of your parts by Wednesday. Keep up the good work! -Henry") In particular, an e-mail user may not always process all e-mails, but even when one does, some e-mails are likely to be of greater response urgency than others. These messages often contain action-items. Thus, while importance and urgency are not equal to action-item content, an effective action-item detection system can form one prominent subcomponent in a larger prioritization system. Action-item detection differs from standard text classification in two important ways. First, the user is interested both in detecting whether an email contains action-items and in locating exactly where these action-item requests are contained within the email body. Second, action-item detection attempts to recover the sender's intent: whether she means to elicit response or action on the part of the receiver. In this paper, we focus on the primary problem of presenting e-mails in a ranked order according to their likelihood of containing an action-item. Since action-items typically consist of a short text span (a phrase, sentence, or small passage), supervised input to a learning system can either come at the document level, where an e-mail is labeled yes/no as to whether it contains an action-item, or at the sentence level, where each span that is an action-item is explicitly identified. Then, a corresponding document-level classifier or aggregated predictions from a sentence-level classifier can be used to estimate the overall likelihood for the e-mail. Rather than commit to either view, we use a combination technique to capture the information each viewpoint has to offer on the current example. The STRIVE approach has been shown to provide robust combinations of heterogeneous models for standard topic classification by capturing areas of high and low reliability via the use of reliability indicators. However, using STRIVE in order to produce improved rankings has not been previously studied. Furthermore, while the authors of STRIVE introduce some reliability indicators that are general for text classification problems as well as ones specifically tied to naïve Bayes models, they do not address other classification models. We introduce a series of reliability indicators connected to areas of high/low reliability in kNN, SVMs, and decision trees to allow the combination model to include such factors as the sparseness of training example neighbors around the current example being classified.
In addition, we provide a more formal motivation for the role these variables play in the resulting metaclassification model. Empirical evidence demonstrates that the resulting approach yields a context-sensitive combination model that improves the quality of rankings generated as well as reducing the variance of the ranking quality across training splits.
0
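The combination scheme described above can be sketched as a stacking setup: base classifiers are trained on one split, and a metaclassifier is trained on their scores plus a handful of reliability indicators. The indicators below (SVM margin magnitude, neighbour-distance statistics from kNN) are illustrative choices in the spirit of the text, not the STRIVE indicator set, and the synthetic data stands in for e-mail features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_base, X_meta, y_base, y_meta = train_test_split(X, y, test_size=0.5, random_state=0)
X_meta_tr, X_meta_te, y_meta_tr, y_meta_te = train_test_split(
    X_meta, y_meta, test_size=0.5, random_state=0)

# Base models trained on one split only.
svm = LinearSVC(max_iter=5000).fit(X_base, y_base)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_base, y_base)

def meta_features(X_part):
    """Base-model scores plus simple reliability indicators."""
    svm_margin = svm.decision_function(X_part)    # signed distance to the hyperplane
    dists, _ = knn.kneighbors(X_part)             # sparseness of the neighbourhood
    knn_score = knn.predict_proba(X_part)[:, 1]
    return np.column_stack([
        svm_margin,                   # base prediction (SVM)
        knn_score,                    # base prediction (kNN)
        np.abs(svm_margin),           # indicator: SVM confidence
        dists.mean(axis=1),           # indicator: average neighbour distance
        dists[:, -1] - dists[:, 0],   # indicator: spread of neighbour distances
    ])

# The metaclassifier learns in which regions each base model is reliable.
meta = LogisticRegression(max_iter=1000).fit(meta_features(X_meta_tr), y_meta_tr)
ranking_scores = meta.predict_proba(meta_features(X_meta_te))[:, 1]
print("top-5 ranked indices:", np.argsort(-ranking_scores)[:5])
```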
Named entity recognition (NER) is a well known task in natural language processing. It aims to detect spans of text associated with known entities. Initially, much work focused on detecting persons, organizations, and locations (Grishman and Sundheim, 1996; Tjong Kim Sang and De Meulder, 2003) . However, this limited approach is not suitable for every domain, thus leading to research in domain-specific NER. For example, in the biomedical domain, a number of works have addressed entities such as genes, proteins, diseases (Hu and Verberne, 2020) , cell types (Settles, 2004) , chemicals (Gonzalez-Agirre et al., 2019; Ion et al., 2019) . Similarly, in the legal domain additional classes are employed such as money value (Glaser et al., 2018) , legal reference (Landthaler et al., 2016; , judge, and lawyer (Leitner et al., 2019) .In the context of "The 16th International Workshop on Semantic Evaluation (SemEval 2022)" 1 , the task number 11 "Multilingual Complex Named Entity Recognition (MultiCoNER)" 2 (Malmasi et al., 2022b) required participants to build a NER system able to recognize complex entities in 11 languages: Bangla, Chinese, Dutch, English, Farsi, German, Hindi, Korean, Russian, Spanish, and Turkish. In addition, a multilingual track and a code-mixed track were available. The task focused on 6 entity types: person, location, group, corporation, product and creative work.As noted by Ashwini and Choi (2014) , nontraditional entities can pose a challenge for NER systems. This happens because datasets are harder to build and certain entities (such as creative works) are updated more frequently than traditional ones (persons, locations). Furthermore, traditional entities tend to occur as noun phrases, while the newly proposed entities (for the purposes of the task) may be linguistically complex (complex noun phrases, gerunds, infinitives or full clauses). An interesting result was provided by Aguilar et al. (2017) , where the top system from WNUT 2017 achieved only 8% recall when dealing with creative works. This paper describes a system for complex NER in a multilingual context, developed at the Research Institute for Artificial Intelligence of the Romanian Academy (RACAI), that participated in the Multi-CoNER task. The system employs a new artificial neural network layer trying to mimic the biological process of lateral inhibition (Cohen, 2011) . In various regions of the brain, excited neurons can reduce the activity of other neighbouring neurons. In the visual cortex this process may account for an increased perception in low-lighting conditions. Thus, intuitively the newly proposed system may better focus on subtle details present in the data and the language model. The paper is structured as follows: Section 2 presents related work, Section 3 describes the dataset and pre-processing operations used, Section 4 describes the method used with the system architecture in Section 4.1 and performed experiments in Section 4.2. The results are given in Section 5 and finally, conclusions and future work are available in Section 6.
0
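The paper above does not spell out its lateral-inhibition layer in this excerpt, so the following PyTorch sketch is only one plausible reading of the idea (strongly activated features damp their neighbours through a learned, zero-diagonal interaction matrix); the authors' actual formulation may differ.

```python
import torch
import torch.nn as nn

class LateralInhibition(nn.Module):
    """A gate in which strongly activated features suppress their neighbours.

    This is an illustrative interpretation of lateral inhibition as a layer,
    not the formulation used by the system described above.
    """
    def __init__(self, dim):
        super().__init__()
        self.inhibit = nn.Linear(dim, dim, bias=False)

    def forward(self, x):                       # x: (batch, seq_len, dim)
        # Each feature is damped in proportion to a learned combination of the
        # other features' activations (the diagonal of the matrix is zeroed).
        weight = self.inhibit.weight - torch.diag(torch.diag(self.inhibit.weight))
        inhibition = torch.sigmoid(x @ weight.t())
        return x * (1.0 - inhibition)

layer = LateralInhibition(dim=8)
hidden = torch.randn(2, 5, 8)                   # e.g. token vectors from a language model
print(layer(hidden).shape)                      # torch.Size([2, 5, 8])
```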
When designing a conventional non-neural parser substantial effort is required to design a powerful feature extraction function. Such a function (McDonald et al., 2005; Zhang and Nivre, 2011, among others) is constructed so that it captures as much structural context as possible. The context allows the parser to make well-informed decisions. 1 It is encoded in features built from partial subtrees and explicitly used by the models.Recently, Kiperwasser and Goldberg (2016, K&G) showed that the conventional feature extraction functions can be replaced by modeling the left-and right-context of each word with BiLSTMs (Hochreiter and Schmidhuber, 1997; Graves and Schmidhuber, 2005) . Although the proposed models do not use any conventional structural features they achieve state-of-the-art performance. The authors suggested that it is because the BiLSTM encoding is able to estimate the missing information from the given features and did not explore this issue further.Since the introduction of the K&G architecture BiLSTM-based parsers have become standard in the field. 2 Yet, it is an open question how much conventional structural context the BiLSTMs representations actually are able to capture implicitly. Small architectures that ignore the structural context are attractive since they come with lower time complexity. But to build such architectures it is important to investigate to what extent the explicit structural information is redundant. For example, K&G also proposed an extended feature set derived from structural context, which has subsequently been re-implemented and used by others without questioning its utility.Inspired by recent work (Gaddy et al., 2018 ) on constituency parsing we aim at understanding what type of information is captured by the internal representations of BiLSTM-based dependency parsers and how it translates into their impressive accuracy. As our starting point we take the K&G architecture and extend it with a secondorder decoder. 3 We perform systematic analyses on nine languages using two different architectures (transition-based and graph-based) across two dimensions: with and without BiLSTM representations, and with and without features drawn from structural context. We demonstrate that structural features are useful for neural dependency parsers but they become redundant when BiLSTMs are used (Section 4). It is because the BiLSTM representations trained together with dependency parsers capture a significant amount of complex syntactic relations (Section 5.1). We then carry out an extensive investigation of information flow in the parsing architectures and find that the implicit structural context is not only present in the BiLSTM-based parsing models, but also more diverse than when encoded in explicit structural features (Section 5.2). Finally, we present results on ablated models to demonstrate the influence of structural information implicitly encoded in BiLSTM representations on the final parsing accuracy (Section 5.3).
0
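A small PyTorch sketch of the K&G-style starting point discussed above: BiLSTM word representations are the only input to an arc scorer, with no explicit structural features. The embedding sizes and the simple MLP scorer are assumptions; the paper's transition-based and graph-based variants, and the second-order decoder, are not reproduced here.

```python
import torch
import torch.nn as nn

class BiLSTMArcScorer(nn.Module):
    """Score head-modifier pairs from BiLSTM word representations (K&G-style)."""
    def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        self.mlp = nn.Sequential(nn.Linear(4 * hid_dim, hid_dim), nn.Tanh(),
                                 nn.Linear(hid_dim, 1))

    def forward(self, word_ids):                       # (batch, seq_len)
        states, _ = self.bilstm(self.embed(word_ids))  # (batch, seq_len, 2 * hid_dim)
        n = states.size(1)
        # Pair every candidate head representation with every modifier representation.
        heads = states.unsqueeze(2).expand(-1, n, n, -1)
        mods = states.unsqueeze(1).expand(-1, n, n, -1)
        return self.mlp(torch.cat([heads, mods], dim=-1)).squeeze(-1)  # (batch, n, n)

scorer = BiLSTMArcScorer(vocab_size=100)
sentence = torch.randint(0, 100, (1, 6))               # one six-word sentence
print(scorer(sentence).shape)                          # torch.Size([1, 6, 6])
```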
The need to determine semantic relatedness or its inverse, semantic distance, between two lexically expressed concepts is a problem that pervades much of natural language processing. Measures of relatedness or distance are used in such applications as word sense disambiguation, determining the structure of texts, text summarization and annotation, information extraction and retrieval, automatic indexing, lexical selection, and the automatic correction of word errors in text. It's important to note that semantic relatedness is a more general concept than similarity; similar entities are semantically related by virtue of their similarity (bank-trust company), but dissimilar entities may also be semantically related by lexical relationships such as meronymy (car-wheel) and antonymy (hot-cold), or just by any kind of functional relationship or frequent association (pencil-paper, penguin-Antarctica, rain-flood) . Computational applications typically require relatedness rather than just similarity; for example, money and river are cues to the in-context meaning of bank that are just as good as trust company.However, it is frequently unclear how to assess the relative merits of the many competing approaches that have been proposed for determining lexical semantic relatedness. Given a measure of relatedness, how can we tell whether it is a good one or a poor one? Given two measures, how can we tell whether one is better than the other, and under what conditions it is better? And what is it that makes some measures better than others? Our purpose in this paper is to compare the performance of a number of measures of semantic relatedness that have been proposed for use in applications in natural language processing and information retrieval.In the literature related to this topic, at least three different terms are used by different authors or sometimes interchangeably by the same authors: semantic relatedness, similarity, and semantic distance. Resnik (1995) attempts to demonstrate the distinction between the first two by way of example. "Cars and gasoline", he writes, "would seem to be more closely related than, say, cars and bicycles, but the latter pair are certainly more similar." Similarity is thus a special case of semantic relatedness, and we adopt this perspective in this paper. Among other relationships that the notion of relatedness encompasses are the various kinds of meronymy, antonymy, functional association, and other "non-classical relations" (Morris and Hirst 2004) .The term semantic distance may cause even more confusion, as it can be used when talking about either just similarity or relatedness in general. Two concepts are "close" to one another if their similarity or their relatedness is high, and otherwise they are "distant". Most of the time, these two uses are consistent with one another, but not always; antonymous concepts are dissimilar and hence distant in one sense, and yet are strongly related semantically and hence close in the other sense. We would thus have very much preferred to be able to adhere to the view of semantic distance as the inverse of semantic relatedness, not merely of similarity, in the present paper. Unfortunately, because of the sheer number of methods measuring similarity, as well as those measuring distance as the "opposite" of similarity, this would have made for an awkward presentation. 
Therefore, we have to ask the reader to rely on context when interpreting what exactly the expressions semantic distance, semantically distant, and semantically close mean in each particular case.Various approaches presented below speak of concepts and words. As a means of acknowledging the polysemy of language, in this paper the term concept will refer to a particular sense of a given word. We want to be very clear that, throughout this paper, when we say that two words are "similar", this is a short way of saying that they denote similar concepts; we are not talking about similarity of distributional or co-occurrence behavior of the words, for which the term word similarity has also been used (Dagan 2000; Dagan, Lee, and Pereira 1999) . While similarity of denotation might be inferred from similarity of distributional or co-occurrence behavior (Dagan 2000; Weeds 2003) , the two are distinct ideas. We return to the relationship between them in Section 6.2.When we refer to hierarchies and networks of concepts, we will use both the terms link and edge to refer to the relationships between nodes; we prefer the former term when our view emphasizes the taxonomic aspect or the meaning of the network, and the latter when our view emphasizes algorithmic or graph-theoretic aspects. In running text, examples of concepts are typeset in sans-serif font, whereas examples of words are given in italics; in formulas, concepts and words will usually be denoted by c and w, with various subscripts. For the sake of uniformity of presentation, we have taken the liberty of altering the original notation accordingly in some other authors' formulas.
0
Recent Transformer-based language representation models (LRMs), such as BERT and GPT-2 (Devlin et al., 2019; Radford et al., 2019), show impressive results on practical text analysis tasks. But do these models have access to complex linguistic notions? The results in this domain are less clear, as are the ways to best approach this question. Instead of asking whether LRMs encode fragments of current linguistic theory, we will directly compare metrics derived from LRMs to corresponding human judgments obtained in psycholinguistic experiments. The motivation for this is twofold. First, linguistic theories can be inaccurate, so evaluating a model with respect to predictions of such theories is not informative about the model's performance. Second, robust abstract theoretical notions rarely correspond to robust judgments in humans, and 'theoretical' and 'perceived' versions of the same phenomenon can be significantly different (for instance, see Geurts 2003 on inference judgments; discussed in Section 2). If this is something that LRMs inherit through training on human-produced texts, this makes LRMs an attractive possible component in an experimental pipeline, serving as a source of empirical predictions about human linguistic behaviour (Baroni, 2021; Linzen and Baroni, 2021). As a case study, we focus on polarity: a complex property of sentences at the intersection of grammar and semantics. We tackle polarity via the distribution of items that are sensitive to it, namely, so-called negative polarity items (NPIs) like English any. As a basic illustration of NPI sensitivity to polarity, consider a pair of sentences in (1) (* = ungrammaticality): (1) a. Mary didn't buy any books. b. *Mary bought any books. (1-a) is a negative sentence (has negative polarity), and any is grammatical in it. (1-b) is an affirmative sentence (has positive polarity) and any in this sentence is grammatically degraded compared to (1-a). Apart from this paradigmatic contrast, as we discuss below, polarity contrasts are expressed in a variety of ways and are tied to semantics. As a proxy for a grammaticality measure, we will use the probability of any in the masked token position (in BERT) (following Goldberg 2019; Warstadt et al. 2019 a.o.) and perplexity increase when adding any to a sentence (in GPT-2). The differences in the metrics for the two different models stem from the differences in their architecture and training objectives. For all experiments, we use non-fine-tuned pre-trained LRMs. For this, we introduce our ANY dataset, which combines natural and synthetic data. We find high levels of alignment between results of psycholinguistic experiments on monotonicity and NPIs, on the one hand, and our LRM-derived results, on the other hand. Furthermore, we show how LRMs can be used to make new predictions about NPIs in contexts with different numerals and confirm these predictions in a psycholinguistic experiment. This case study contributes to the complement of the 'interpretability of neural LRMs' research agenda: we can ask not only what linguistic tasks tell us about LRMs, but also what these models can help us find out about natural language (see Baroni 2021; Linzen and Baroni 2021 for a discussion along these lines). The paper is structured as follows. First, in section 2, we set up the context for our study: we describe the background in theoretical and experimental linguistics in the domains relevant for our discussion.
Section 3 discusses previous work on NPIs and polarity in computational linguistics. Section 4 contains the description of our experimental method. First, we introduce our ANY dataset; then, we describe the tests and metrics we use with BERT and with GPT-2 given our dataset. Section 5 discusses our results. In section 6, we go beyond state-of-the-art knowledge in experimental semantics and pragmatics and study the effect of the numeral on NPI acceptability -first, we do a BERT study and then confirm the results on human participants. Section 7 concludes: we propose directions for future work aligning experimental studies of language in humans and LRMs.
0
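The two LRM-derived metrics described above can be computed with off-the-shelf checkpoints roughly as follows; this is a sketch, not the paper's evaluation pipeline (which uses the ANY dataset and controlled minimal pairs).

```python
import torch
from transformers import AutoModelForCausalLM, AutoModelForMaskedLM, AutoTokenizer

# Probability of "any" in the masked position (BERT-style metric).
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def prob_of_any(sentence_with_mask):
    inputs = tok(sentence_with_mask, return_tensors="pt")
    mask_pos = (inputs.input_ids[0] == tok.mask_token_id).nonzero().item()
    with torch.no_grad():
        logits = bert(**inputs).logits[0, mask_pos]
    return logits.softmax(-1)[tok.convert_tokens_to_ids("any")].item()

print(prob_of_any("Mary didn't buy [MASK] books."))   # negative context
print(prob_of_any("Mary bought [MASK] books."))       # affirmative context

# Perplexity increase when "any" is added to the sentence (GPT-2-style metric).
gpt_tok = AutoTokenizer.from_pretrained("gpt2")
gpt2 = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def perplexity(sentence):
    enc = gpt_tok(sentence, return_tensors="pt")
    with torch.no_grad():
        loss = gpt2(**enc, labels=enc.input_ids).loss   # mean token NLL
    return torch.exp(loss).item()

print(perplexity("Mary didn't buy any books.") - perplexity("Mary didn't buy books."))
print(perplexity("Mary bought any books.") - perplexity("Mary bought books."))
```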
When Brown and colleagues introduced statistical machine translation in the early 1990s, their key insight, harkening back to Weaver in the late 1940s, was that translation could be viewed as an instance of noisy channel modeling (Brown et al., 1990). They introduced a now standard decomposition that distinguishes modeling sentences in the target language (language models) from modeling the relationship between source and target language (translation models). Today, virtually all statistical translation systems seek the best hypothesis $\hat{e}$ for a given input $f$ in the source language, according to $\hat{e} = \arg\max_{e} \Pr(e \mid f)$ (1). An exception is the translation of speech recognition output, where the acoustic signal generally underdetermines the choice of source word sequence $f$. There, Bertoldi and others have recently found that, rather than translating a single-best transcription $f$, it is advantageous to allow the MT decoder to consider all possibilities for $f$ by encoding the alternatives compactly as a confusion network or lattice (Bertoldi and Federico, 2005; Koehn et al., 2007). Why, however, should this advantage be limited to translation from spoken input? Even for text, there are often multiple ways to derive a sequence of words from the input string. Segmentation of Chinese, decompounding in German, morphological analysis for Arabic: across a wide range of source languages, ambiguity in the input gives rise to multiple possibilities for the source word sequence. Nonetheless, state-of-the-art systems commonly identify a single analysis $f$ during a preprocessing step, and decode according to the decision rule in (1). In this paper, we go beyond speech translation by showing that lattice decoding can also yield improvements for text by preserving alternative analyses of the input. In addition, we generalize lattice decoding algorithmically, extending it for the first time to hierarchical phrase-based translation (Chiang, 2005; Chiang, 2007). Formally, the approach we take can be thought of as a "noisier channel", where an observed signal $o$ gives rise to a set of source-language strings $f \in \mathcal{F}(o)$. Following Och and Ney (2002), we use the maximum entropy framework (Berger et al., 1996) to directly model the posterior $\Pr(e, f \mid o)$ with parameters tuned to minimize a loss function representing the quality only of the resulting translations. Thus, we make use of the following general decision rule: $\hat{e} = \arg\max_{e} \max_{f \in \mathcal{F}(o)} \sum_{m=1}^{M} \lambda_m \phi_m(e, f, o)$ (2). In principle, one could decode according to (2) simply by enumerating and decoding each $f \in \mathcal{F}(o)$; however, for any interestingly large $\mathcal{F}(o)$ this will be impractical. We assume that for many interesting cases of $\mathcal{F}(o)$, there will be identical substrings that express the same content, and therefore a lattice representation is appropriate. In Section 2, we discuss decoding with this model in general, and then show how two classes of translation models can easily be adapted for lattice translation; we achieve a unified treatment of finite-state and hierarchical phrase-based models by treating lattices as a subcase of weighted finite state automata (FSAs). In Section 3, we identify and solve issues that arise with reordering in non-linear FSAs, i.e. FSAs where every path does not pass through every node. Section 4 presents two applications of the noisier channel paradigm, demonstrating substantial performance gains in Arabic-English and Chinese-English translation. In Section 5 we discuss relevant prior work, and we conclude in Section 6.
0
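For readability, here is a hedged reconstruction of the chain of models the passage walks through; the intermediate numbered equations of the original were lost in extraction, so the middle (log-linear) step is supplied from Och and Ney (2002) rather than quoted from the paper.

```latex
% Hedged reconstruction of the derivation sketched in the passage above.
\begin{align*}
\hat{e} &= \arg\max_{e} \Pr(e \mid f) = \arg\max_{e} \Pr(f \mid e)\,\Pr(e)
  && \text{noisy channel (Brown et al., 1990)} \\
\hat{e} &= \arg\max_{e} \sum_{m=1}^{M} \lambda_m \phi_m(e, f)
  && \text{log-linear model (Och and Ney, 2002)} \\
\hat{e} &= \arg\max_{e} \; \max_{f \in \mathcal{F}(o)} \sum_{m=1}^{M} \lambda_m \phi_m(e, f, o)
  && \text{``noisier channel'': max over analyses of the observation } o
\end{align*}
```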
Recently there has been considerable interest in using active learning (AL) to reduce HLT annotation burdens. Actively sampled data can have different characteristics than passively sampled data and therefore, this paper proposes modifying algorithms used to infer models during AL. Since most AL research assumes the same learning algorithms will be used during AL as during passive learning 1 (PL), this paper opens up a new thread of AL research that accounts for the differences between passively and actively sampled data.The specific case focused on in this paper is that of AL with SVMs (AL-SVM) for imbalanced datasets 2 . Collectively, the factors: interest in AL, widespread class imbalance for many HLT tasks, interest in using SVMs, and PL research showing that SVM performance can be improved substantially by addressing imbalance, indicate the importance of the case of AL with SVMs with imbalanced data.Extensive PL research has shown that learning algorithms' performance degrades for imbalanced datasets and techniques have been developed that prevent this degradation. However, to date, relatively little work has addressed imbalance during AL (see Section 2). In contrast to previous work, this paper advocates that the AL scenario brings out the need to modify PL approaches to dealing with imbalance. In particular, a new method is developed for cost-weighted SVMs that estimates a cost model based on overall corpus imbalance rather than the imbalance in the so far labeled training data. Section 2 discusses related work, Section 3 discusses the experimental setup, Section 4 presents the new method called InitPA, Section 5 evaluates InitPA, and Section 6 concludes.
0
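A small sketch of the contrast the section sets up: a cost-weighted SVM inside an uncertainty-sampling loop, with class weights derived from an estimate of overall corpus imbalance rather than from the handful of labelled examples seen so far. This is not the InitPA procedure itself; the imbalance estimate, synthetic data, and query strategy are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Imbalanced pool of unlabelled data (roughly 10% positives).
X_pool, y_pool = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

# Seed labelled set: one example of each class plus a few random draws.
rng = np.random.RandomState(0)
labelled = [int(np.flatnonzero(y_pool == 0)[0]), int(np.flatnonzero(y_pool == 1)[0])]
labelled += [int(i) for i in rng.choice(len(X_pool), 10, replace=False)]

# Cost model from the (estimated) overall corpus imbalance, not from the
# imbalance of the small labelled set -- the point the section argues for.
estimated_pos_rate = 0.10                     # assumed prior knowledge of the corpus
class_weight = {0: 1.0, 1: (1 - estimated_pos_rate) / estimated_pos_rate}

for step in range(10):
    svm = LinearSVC(class_weight=class_weight, max_iter=5000)
    svm.fit(X_pool[labelled], y_pool[labelled])
    # Uncertainty sampling: query the unlabelled point closest to the hyperplane.
    unlabelled = np.setdiff1d(np.arange(len(X_pool)), labelled)
    margins = np.abs(svm.decision_function(X_pool[unlabelled]))
    labelled.append(int(unlabelled[np.argmin(margins)]))

print("labelled set size:", len(labelled),
      "positives found:", int(y_pool[labelled].sum()))
```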
Are gender biases present in our judicial system, and can machine learning detect them? Drawing on the idea that text can provide insight into human psychology (Jakiela and Ozier, 2019), we look at gender-stereotyped language in case law as a proxy for bias in our judicial system. Unfortunately, previous NLP work in bias detection is insufficient to robustly determine bias in our database (Zhang et al., 2019). We show that previous bias detection methods all share a common flaw: these algorithms rely on groups of words to represent a potential bias (e.g., 'salary,' 'job,' and 'boss' to represent employment as a potential bias against women) that are not standardized. This lack of standardization is problematic in three main ways. First, these word lists are built by the researchers with little explanation and are susceptible to researchers' own implicit biases. Consequently, the words within the word list might not truly describe the bias as it exists in the text. Second, the same bias theme (e.g., 'employment') often has different word lists in different papers. Inconsistent word lists lead to varied results. As we show, using two different researchers' word lists to represent a bias on a single database can produce almost opposite results. Third, there is little discussion about the method of choosing words to represent specific biases. It is therefore difficult to reproduce or extend existing research on bias detection. In order to search meaningfully for gender bias within our judicial system, we propose two methods for automatically creating word lists to represent biases in text. We find that our methods outperform existing bias detection methods, and we employ our new methods to identify gender bias in case law. We find that this bias exists. Finally, we map gender bias's progress over time and find that bias against women in case law decreases at about the same rate, and at the same time, as women entered the workforce over the last 100 years.
0
A real world event that has an associated probability of causing damage, injury, liability, loss or any other negative impact is termed as a risk (Lu et al., 2009; Slywotzky and Drzik, 2005; Beasley et al., 2005; Lu et al., 2009) . Organizations are always on the look out for information related to such events caused by internal and external vulnerabilities such that the possible negative impacts may be avoided through preemptive action. Sources of risk can be many. The difficulty of risk identification arises from the diversity of the sources. Risks can arise from uncertainty in financial markets (Leidner and Schilder, 2010; Ykhlef and Algawiaz, 2014) , industrial processes or due to project failures. Unexpected events like natural disasters, legal issues, deliberate attacks from adversaries or certain competitor moves can all lead to situations that can impact an organization and hence can be termed as risks.Generally, a risk has the following characteristics: The risk type R T or a name for the description of the risk that characterizes the nature of the adversarial potential, The cause R C or the event that may cause the specified risk and the impact R I that deals with the severity of the damage caused once it materialize.Like all expert-driven activities that involve knowledge about handling uncertainties and predictive capabilities, risk analysis is a complex task that requires expertise that is acquired with experience. It is difficult to document. Besides, experts differ in their opinions. Sifting through a large number of such analyst reports and summarizing them is a tedious activity (Kogan et al., 2009) . In this work, we present text mining techniques that can analyze large volumes of analyst reports to automatically extract risk statements, aggregate them and summarize them into risks of various categories.As mentioned earlier, experts predict risks as probable future events that can impact business outcomes. The proposed methods employ machine learning based techniques to learn linguistic features and their dependencies from labeled samples of risk statements. The learned classifiers are applied to input text, wherein every sentence in the text is subjected to binary classification as "risk" or "not a risk".The salient contributions of this demonstration are as follows: The overall architecture of the risk classification and analysis framework is depicted in Figure 1 . The proposed architecture has four primary modules: a) The Linguistic pre-processing unit b) Feature extraction unit c) Risk classifier unit and d) the Risk analysis unit. The input text is first passed to the preprocessing unit that removes html tags, and foreign language characters from the text. The preprocessed text is then passed to the Stanford parts-of-speech(POS) tagger and parser to label each word with their corresponding POS and to extract different dependency relations within the sentences. From the output of the POS tagger, root verbs are extracted and passed to an English morphological analyzer to identify the tense, aspect and modality of the root verb.1.The syntactically analyzed text is then passed to the feature extraction unit. The features considered can be broadly classified into three types a) Future timing in texts, b) Uncertainty in texts and c) traditional linguistic features.Future timing refers to the expressions that indicate (possible) upcoming events or states. 
For instance, the verb "expecting" in the sentence "Testing of OCR division is expecting an overall fall in performance in the next few months" indicates future timing. Uncertainty is mainly "concerned with the speaker's assumptions, or assessment of possibilities, and, in most cases, it indicates the speaker's confidence or lack of confidence in the truth of the proposition expected" (Coates, 1987). Various levels of uncertainty can be inferred from the expression. As a preliminary study, we have used only the presence of epistemic modal expressions, such as modal auxiliaries, epistemic lexical verbs, adverbs, adjectives and nouns, to determine uncertainty in a text (Coates, 1987). As traditional linguistic features, we have considered N-gram counts (N), POS features (POS), and dependency features (D), which include dependency length and the occurrence of adverbial clause modifier, auxiliary, negation modifier, marker, referent, open clausal complement, clausal complement, expletive, coordination, passive auxiliary, nominal subject, direct object, copula, and conjunct relations.
0
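A toy sketch of the three feature groups listed above (future timing, uncertainty via epistemic cues, and traditional n-gram-style features). The cue lexicons are illustrative stand-ins, not the resources used by the system, and the real pipeline derives its future-timing and dependency features from a POS tagger, parser, and morphological analyzer rather than from word lists.

```python
import re

# Illustrative cue lexicons -- not the ones used by the system described above.
FUTURE_CUES = {"will", "shall", "expect", "expects", "expecting", "anticipate", "upcoming"}
EPISTEMIC_CUES = {"may", "might", "could", "possibly", "probably", "likely",
                  "appears", "seems", "suggest", "suggests"}

def risk_features(sentence):
    tokens = re.findall(r"[a-z']+", sentence.lower())
    feats = {
        "future_timing": sum(t in FUTURE_CUES for t in tokens),
        "uncertainty": sum(t in EPISTEMIC_CUES for t in tokens),
        "negation": int(any(t in {"not", "no", "never"} for t in tokens)),
        "length": len(tokens),
    }
    # Traditional n-gram features (here: unigram and bigram indicator features).
    feats.update({f"uni={t}": 1 for t in tokens})
    feats.update({f"bi={a}_{b}": 1 for a, b in zip(tokens, tokens[1:])})
    return feats

sent = ("Testing of OCR division is expecting an overall fall in "
        "performance in the next few months.")
print({k: v for k, v in risk_features(sent).items()
       if not k.startswith(("uni=", "bi="))})
```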
The goal of capturing structured relational knowledge about lexical terms has been the motivating force underlying many projects in lexical acquisition, information extraction, and the construction of semantic taxonomies. Broad-coverage semantic taxonomies such as WordNet (Fellbaum, 1998) and CYC (Lenat, 1995) have been constructed by hand at great cost; while a crucial source of knowledge about the relations between words, these taxonomies still suffer from sparse coverage.Many algorithms with the potential for automatically extending lexical resources have been proposed, including work in lexical acquisition (Riloff and Shepherd, 1997; Roark and Charniak, 1998) and in discovering instances, named entities, and alternate glosses (Etzioni et al., 2005; Pasça, 2005 ). Additionally, a wide variety of relationship-specific classifiers have been proposed, including pattern-based classifiers for hyponyms (Hearst, 1992) , meronyms (Girju, 2003) , synonyms (Lin et al., 2003) , a variety of verb relations (Chklovski and Pantel, 2004) , and general purpose analogy relations (Turney et al., 2003) . Such classifiers use hand-written or automaticallyinduced patterns like Such N P y as N P x or N P y like N P x to determine, for example that N P y is a hyponym of N P x (i.e., N P y IS-A N P x ). While such classifiers have achieved some degree of success, they frequently lack the global knowledge necessary to integrate their predictions into a complex taxonomy with multiple relations.Past work on semantic taxonomy induction includes the noun hypernym hierarchy created in (Caraballo, 2001 ), the part-whole taxonomies in (Girju, 2003) , and a great deal of recent work described in (Buitelaar et al., 2005) . Such work has typically either focused on only inferring small taxonomies over a single relation, or as in (Caraballo, 2001 ), has used evidence for multiple relations independently from one another, by for example first focusing strictly on inferring clusters of coordinate terms, and then by inferring hypernyms over those clusters.Another major shortfall in previous techniques for taxonomy induction has been the inability to handle lexical ambiguity. Previous approaches have typically sidestepped the issue of polysemy altogether by making the assumption of only a single sense per word, and inferring taxonomies explicitly over words and not senses. Enforcing a false monosemy has the downside of making potentially erroneous inferences; for example, collapsing the polysemous term Bush into a single sense might lead one to infer by transitivity that a rose bush is a kind of U.S. president.Our approach simultaneously provides a solution to the problems of jointly considering evidence about multiple relationships as well as lexical ambiguity within a single probabilistic framework. The key contribution of this work is to offer a solution to two crucial problems in taxonomy in-duction and hyponym acquisition: the problem of combining heterogenous sources of evidence in a flexible way, and the problem of correctly identifying the appropriate word sense of each new word added to the taxonomy. 1
0
As the field of Natural Language Processing advances, there are increasing demands for more sophisticated applications and richer representations. Abstract Meaning Representations (AMR; Banarescu et al. 2013) , and their more recent crosslingual incarnation as Uniform Meaning Representations (UMR; Van Gysel et al. 2021) , are a response to that demand. AMR/UMRs provide an abstract, directed acyclic graph representation of a complete sentence, focusing on the underlying "who" did "what" to "whom" elements of the events being described. The more information that can be associated with those events, in terms of whether they have been completed, or whether they have achieved their intended results, the better.The increased richness of UMR Tense, Aspect and Modality annotations, as described below, can more clearly identify the completion and achievement of events in a cross-lingual context, provid-ing a firmer baseline for comparing typologically distinct languages. Automating such a complex semantic processing task provides valuable qualitative and temporal crosslingual features that applications like translation models and virtual assistants can utilize to more accurately capture the semantic nuances of events. Given the substantial amounts of English AMR annotation, the question immediately arises of how to efficiently add these new annotation features to pre-existing English AMRs.This paper describes an implementation of an automatic system that relies on VerbNet, a rich lexical resource, as the basis for categorizing event descriptions according to the Aspect guidelines discussed below. Our initial results are quite promising, and there are obvious next steps to take.Failure to detect event nominals made up 45.5% of errors. Table 3 depicts event nominals that SemParse did not detect and Table 4 depicts event nominals that SemParse did detect. Sentence C is notable because it involves a nominal found in a dialogic omission of the main verb. SemParse still fails to identify pleasure as an event in the sentence "It's been a pleasure.", citing it as an attributive argument of the main verb, the event seem-109-1-1.These examples indicate that abstracting away from syntactic cues like having a main verb remains a difficult NLP task. SemParse is trained on Unified PropBank corpora, mapping nominal and adjectival predicates to VerbNet roles. Since VerbNet roles are syntactically defined for verbs, mappings exist for linking sentences like "John has a fear of spiders" to "John fears spiders" and "John is afraid of spiders" (Gung, 2020) . Thus, an event like campaign in Sentence E will not be identified because SemParse currently only identifies event nominals/adjectivals that function as arguments of the main verb of the sentence, contemplated. A human annotator can identify another argument structure where the noun phrase headed by investigation has its own argument roles that could be identified as event nominals, but SemParse identification requires clear sentential structure as input and tends to be more limited to the main verb.Even then, event nominal identification is not guaranteed, given that survivors and genocide in Sentence B go undetected, despite being the direct object of the main verb told.One possibility is that SemParse handles more explicitly deverbal nominals better. In Table 3 , less explicitly deverbal nominals like signature and gesture are undetected. 
Table 4 shows that the deriving suffix -tion appears to make for more readily detectable event nominals in decision, opposition, and investigation, all core arguments of their main verb. In F, like gesture, the nominal visit shares an identical form with its verbal lemma, but SemParse identifies visit and not gesture. Notably, the event nominal in F also does not occur as part of the core arguments of the main verb is, and was successfully identified as its own nominal phrase. But SemParse also missed objections in D and agreement in E, both nominals that have a transparently derivative suffix that attaches to the verb lemma. Both of those event nominals occur in adjunct clauses that are not core arguments of the main verb and themselves do not contain a verb. 2. Dialogic Sentences: In addition to Sentence C, UMR annotates dialogic sentences like "One last question." as a singular event that labels the adjective: be_last. The current syntactic split of verbs and nominals does not allow AutoAspect to label predicative nominals and adjectives that lack a main verb. Additionally, multi-sentence coreference is common in dialogue, as in the sentences "Is this case likely to strain US-Russian relations? I'm afraid it might.", where an event from the previous clause (strain) becomes elided in a successive clause. 3. Present Tense Verbs: Table 5 depicts mislabeled and correctly labeled present tense verbs. Mislabeling present tense verbs as HABITUAL was the most common error made by the model aside from the main subtask of event identification, making up 22% of errors. Mislabeling verbs in the present participle form as ACTIVITY was another consistent error, occurring at Step 4. The most common gold label for these erroneous HABITUAL and ACTIVITY labels was PERFORMANCE. For example, in Sentence L, returns is mislabeled as HABITUAL. The future tense verb will spend in the first clause changes the aspect for the successive clauses of L, i.e., the clause "...before he returns home with his wife Sherry". The prevalence of gold PERFORMANCE labels indicates that Steps 3 and 4 are prematurely assigning an aspect and not letting certain present participles continue throughout the sequence and make it all the way to Step 8. However, AutoAspect also correctly labeled some present tense forms, as seen in Sentences K and N. Experimenting with using tense and aspect annotations 8 from the ClearTAC parser resulted in even more false positives for HABITUAL and ACTIVITY. Reducing the number of false positives for HABITUAL and ACTIVITY necessitates building a semantic parser that can discern between sentences like L with multi-tense contexts and sentences like M with dialogic contexts.
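To illustrate the failure mode described for present tense verbs, the following toy rule cascade (a simplification written for illustration, not the actual AutoAspect steps) shows why simple-present and present-participle forms default to HABITUAL and ACTIVITY when only local morphology is consulted, ignoring sentence-level context such as the future-tense main clause in Sentence L.

import spacy

# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def toy_aspect_label(token):
    """Assign a coarse aspect label using local verb morphology only.

    This mirrors the kind of early rule that fires before sentence-level
    context (e.g., a future-tense main clause) can be taken into account."""
    if token.pos_ not in {"VERB", "AUX"}:
        return None
    if token.tag_ == "MD":                 # modals: skip in this toy sketch
        return None
    if token.tag_ == "VBG":                # present participle -> ACTIVITY (gold is often PERFORMANCE)
        return "ACTIVITY"
    if token.tag_ in {"VBZ", "VBP"}:       # simple present -> HABITUAL (gold is often PERFORMANCE)
        return "HABITUAL"
    return "PERFORMANCE"                   # past and remaining forms

doc = nlp("He will spend a week there before he returns home with his wife.")
for tok in doc:
    label = toy_aspect_label(tok)
    if label:
        print(tok.text, tok.tag_, "->", label)

Running this labels "returns" as HABITUAL even though the future-tense main clause makes a PERFORMANCE reading more appropriate, which is exactly the premature-assignment behavior the error analysis attributes to Steps 3 and 4.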
0
Machine translation evaluation has traditionally focused on one-best translation results because many common use cases (translating a user manual, reading a news article, etc.) require only a single translation. There are, however, many scenarios in which n-best translation can be useful; examples include cross-language information retrieval, where query terms may not match in the single-best output, or language learning, where a learner is interested in whether their translation is acceptable. Optimizing translation systems for such applications might benefit from evaluation measures that focus on choosing among systems based on which produces the best list of translated sentences, what we refer to here for brevity as an n-best list. Often in these n-best scenarios, researchers first select 'good' MT systems (i.e., by BLEU) in the hope that these good systems will also produce good results beyond the top translation candidate. In this paper we test that hypothesis, using a newly available dataset to measure the quality of n-best lists directly. To look at the problem in this way we must first decide what properties of an n-best list we would consider 'good'. In this paper we explore three questions: 1. How well does an n-best list include correct translations and rank correct translations above incorrect ones? (Section 3: Head-weighted Precision) 2. How well does an n-best list rank translations in preference order, with the better (e.g., more commonly used) translations ahead of those that are valid, but less preferred? (Section 4: Preference Correlation) 3. How close are all of the translations in an n-best list to one or more reference translations? (Section 5: Unweighted Partial Match) We introduce measures for each of the three questions, using a ranking quality measure already widely used in information retrieval for question 1, correlation measures to address question 2, and variants of BLEU for question 3. In this latter study, we particularly note that n-best evaluation done in this way contrasts with a current standard used for both n-best and 1-best MT evaluation, 1-best single-reference BLEU. However, our purpose is not to argue for a single n-best evaluation measure, but rather to highlight that different measures produce different system rankings, and therefore it is crucial that researchers carefully consider what questions to ask when evaluating systems. The measures we propose are illustrative as answers to our research questions, but are not the only solutions; many others might work. We aim to provide groundwork and encourage future work on the topic. Our investigation is made possible by the recent availability of annotations created for the Duolingo Simultaneous Translation and Paraphrase for Language Education (STAPLE) shared task, which contains an extensive (although not necessarily exhaustive) set of valid translations for each of several thousand "input prompt" sentences (Mayhew et al., 2020). [Table: example n-best list of Japanese translation candidates (target) with model weights: 私は気分が良くなるだろう。 0.015; 私は気分が良くなるでしょう。 0.008; 私はいい気分になるだろう。 0.007; 気分が良くなるだろう。 0.007; 私は気分が良いだろう。 0.006]
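As one concrete, illustrative instantiation of a rank-sensitive n-best measure in the spirit of question 1 (not necessarily the exact measure proposed in the paper), the sketch below scores an n-best list by precision with rank-discounted weights, so that correct translations near the top of the list count more; the logarithmic discount and the toy data are our own assumptions.

import math

def head_weighted_precision(nbest, valid_translations):
    """Score an n-best list: position i gets weight 1/log2(i+2), and the list
    earns that weight only if the candidate is an acceptable translation."""
    if not nbest:
        return 0.0
    valid = set(valid_translations)
    gained = sum(1.0 / math.log2(i + 2) for i, cand in enumerate(nbest) if cand in valid)
    total = sum(1.0 / math.log2(i + 2) for i in range(len(nbest)))
    return gained / total

nbest = ["i will feel better .", "i would feel better .", "me feel better good ."]
valid = {"i will feel better .", "i would feel better ."}
print(round(head_weighted_precision(nbest, valid), 3))

A measure of this shape rewards systems whose acceptable translations are concentrated at the head of the list rather than scattered throughout it.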
0
A text corpus plays a crucial role in many spoken-language applications, such as speech translation and statistical natural language processing. The system's accuracy often depends on whether we can accumulate a large amount and wide variety of text data containing frequent or domain-specific linguistic expressions. However, there are fewer existing spoken-language corpora than there are written-language corpora. To make matters much more difficult, spoken-language corpora specific to the system's domain are often unlikely to even exist. For these reasons, we must make an effort to build a spoken-language corpus in the system's domain. Conventionally, a spoken-language corpus has been built using one of the following four methods: (a) text related to the system's domain is copied from existing documents, and electronic data can also be used in some cases; (b) scripted, situational dialogs are recorded and then transcribed (Hirschman, 1992; Heeman & Allen, 1995; Takezawa, 1999; Allwood et al., 2000); (c) two participants chat through their keyboard terminals according to preferences or interests, and the chat logs are stored as text data (Kikui et al., 2003); (d) given specific conversational scenarios, writers imagine the following scenes and then create sentences that are likely to be uttered (Hirasawa et al., 2004). If we can find a lot of text related to the system's domain, (a) is the most suitable method. However, most of the time very little text exists. Additionally, copyright problems can arise. To avoid these problems, method (b) or (c) is usually used. Method (b) approximates the scenes to which the system will be actually applied and produces good quality text. For example, the CALLHOME corpus from the Linguistic Data Consortium was constructed using this approach (CALLHOME, 1996). However, using method (b) the quantity is apt to be small because it requires at least two people, and it takes a large amount of labour to build a large corpus. Method (c) has the same problem. In contrast to these methods, method (d) reduces the cost of construction. We can create bigger volumes of text using method (d) or a compromise between (c) and (d), in which just one person imaginatively writes chat texts. However, it is difficult to persistently create the variety of expressions available with either method because only one person has a limited imagination. Although combining paraphrases of fragmentary linguistic expressions can create a lot of example sentences in a single sitting, such texts do not accurately reflect the statistics of linguistic phenomena. Moreover, in natural conversation we cannot prepare all scenes in advance. To overcome the problems of conventional methods, we propose a method for easily proliferating conversation texts that can reduce costs by providing writers with "germ dialogs". The germ dialogs are short scripted dialogs that enable the writers to easily imagine a follow-up dialog. This method is an improvement over the creative writing method (d). The remainder of the paper is organized as follows. Section 2 explains the method of deriving text from germ dialogs. Section 3 describes the corpus built by the proposed method. Section 4 presents evaluations of the proposed method, based on language models made from prepared corpora. Section 5 describes our conclusions.
0
Large-scale grammar development platforms are expensive and time consuming to produce. As such, a desideratum for the platforms is a broad utilization scope. A grammar development platform should be able to be used to write grammars for a wide variety of languages and a broad range of purposes. In this paper, we report on the Parallel Grammar (ParGram) project (Butt et al., 1999) which uses the XLE parser and grammar development platform (Maxwell and Kaplan, 1993) for six languages: English, French, German, Japanese, Norwegian, and Urdu. All of the grammars use the Lexical-Functional Grammar (LFG) formalism which produces c(onstituent)structures (trees) and f(unctional)-structures (AVMs) as the syntactic analysis.LFG assumes a version of Chomsky's Universal Grammar hypothesis, namely that all languages are structured by similar underlying principles. Within LFG, f-structures are meant to encode a language universal level of analysis, allowing for crosslinguistic parallelism at this level of abstraction. Although the construction of c-structures is governed by general wellformedness principles, this level of analysis encodes language particular differences in linear word order, surface morphological vs. syntactic structures, and constituency.The ParGram project aims to test the LFG formalism for its universality and coverage limitations and to see how far parallelism can be maintained across languages. Where possible, the analyses produced by the grammars for similar constructions in each language are parallel. This has the computational advantage that the grammars can be used in similar applications and that machine translation (Frank, 1999 ) can be simplified.The results of the project to date are encouraging. Despite differences between the languages involved and the aims and backgrounds of the project groups, the ParGram grammars achieve a high level of parallelism. This parallelism applies to the syntactic analyses produced, as well as to grammar development itself: the sharing of templates and feature declarations, the utilization of common techniques, and the transfer of knowledge and technology from one grammar to another. The ability to bundle grammar writing techniques, such as templates, into transferable technology means that new grammars can be bootstrapped in a relatively short amount of time.There are a number of other large-scale grammar projects in existence which we mention briefly here. The LS-GRAM project (Schmidt et al., 1996) , funded by the EU-Commission under LRE (Linguistic Research and Engineering), was concerned with the development of grammatical resources for nine European languages: Danish, Dutch, English, French, German, Greek, Italian, Portuguese, and Spanish. The project started in January 1994 and ended in July 1996. Development of grammatical resources was carried out in the framework of the Advanced Language Engineering Platform (ALEP). The coverage of the grammars implemented in LS-GRAM was, however, much smaller than the coverage of the English (Riezler et al., 2002) or German grammar in ParGram. An effort which is closer in spirit to ParGram is the implemention of grammar development platforms for HPSG. In the Verbmobil project (Wahlster, 2000) , HPSG grammars for English, German, and Japanese were developed on two platforms: LKB (Copestake, 2002) and PAGE. 
The PAGE system, developed and maintained in the Language Technology Lab of the German National Research Center on Artificial Intelligence DFKI GmbH, is an advanced NLP core engine that facilitates the development of grammatical and lexical resources, building on typed feature logics. To evaluate the HPSG platforms and to compare their merits with those of XLE and the ParGram projects, one would have to organize a special workshop, particularly as the HPSG grammars in Verbmobil were written for spoken language, characterized by short utterances, whereas the LFG grammars were developed for parsing technical manuals and/or newspaper texts. There are some indications that the German and English grammars in ParGram exceed the HPSG grammars in coverage (see (Crysmann et al., 2002) on the German HPSG grammar).This paper is organized as follows. We first provide a history of the project. Then, we discuss how parallelism is maintained in the project. Finally, we provide a summary and discussion.
0
Deep generative models have attracted a lot of attention in recent years (Hu et al., 2017b). Methods such as variational autoencoders (Kingma and Welling, 2013) or generative adversarial networks (Goodfellow et al., 2014) have been successfully applied to a variety of machine vision problems including image generation (Radford et al., 2017), learning interpretable image representations (Chen et al., 2016), and style transfer for images (Gatys et al., 2016). However, natural language generation is more challenging for many reasons, such as the discrete nature of textual information (Hu et al., 2017a), the absence of local information continuity, and non-smooth disentangled representations (Bowman et al., 2015). Due to these difficulties, text generation is mostly limited to specific narrow applications and usually works in supervised settings. Content and style are deeply fused in natural language, but style transfer for texts is often addressed in the context of disentangled latent representations (Hu et al., 2017a; Shen et al., 2017; Fu et al., 2018; Romanov et al., 2018; Tian et al., 2018). The intuitive understanding of this problem is apparent: if an input text has some attribute A, a system generates new text similar to the input on a given set of attributes with only one attribute A changed to the target attribute Ã. In the majority of previous works, style transfer is obtained through an encoder-decoder architecture with one or multiple style discriminators to learn disentangled representations. The encoder takes a sentence as an input and generates a style-independent content representation. The decoder then takes the content representation and the target style representation to generate the transformed sentence. In (Subramanian et al., 2018), the authors question the quality and usability of the disentangled representations for texts and suggest an end-to-end approach to style transfer similar to end-to-end machine translation. The contribution of this paper is three-fold: 1) we show that different style transfer architectures have varying results on test data and that reporting error margins for various training re-runs of the same model is especially important for adequate assessment of the models' accuracy (see Figure 1); 2) we show that BLEU (Papineni et al., 2002) between input and output and accuracy of style transfer measured in terms of the accuracy of a pre-trained external style classifier can be manipulated and naturally diverge from the intuitive goal of the style transfer task starting from a certain threshold; 3) new architectures that perform style transfer using improved latent representations are shown to outperform state of the art in terms of BLEU between output and human-written reformulations.
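To make the evaluation protocol behind the second contribution explicit, here is a minimal sketch of the two measures: content preservation as BLEU between input and output, and transfer accuracy under an external style classifier. The keyword-based classifier and the toy sentences are stand-ins we introduce for illustration; in practice a pre-trained sentiment classifier would be used.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smooth = SmoothingFunction().method1

def self_bleu(inputs, outputs):
    """Average sentence-level BLEU between each input and its rewritten output."""
    scores = [sentence_bleu([inp.split()], out.split(), smoothing_function=smooth)
              for inp, out in zip(inputs, outputs)]
    return sum(scores) / len(scores)

def toy_style_classifier(sentence):
    """Stand-in for a pre-trained external classifier: 'positive' vs 'negative'."""
    return "positive" if any(w in sentence for w in ("good", "great", "love")) else "negative"

def transfer_accuracy(outputs, target_style):
    hits = sum(toy_style_classifier(o) == target_style for o in outputs)
    return hits / len(outputs)

inputs = ["the food was terrible and cold", "i hate this phone"]
outputs = ["the food was great and warm", "i love this phone"]
print("BLEU(input, output):", round(self_bleu(inputs, outputs), 3))
print("Transfer accuracy:", transfer_accuracy(outputs, "positive"))

The tension the paper highlights is visible even in this sketch: copying the input verbatim maximizes the BLEU term while driving transfer accuracy down, and vice versa.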
0
Code-mixing is a frequent phenomenon in user-generated content on social media. In linguistics, code-mixing traditionally refers to the embedding of linguistic units (phrases, words, morphemes) into an utterance of another language (Myers-Scotton, 1993) . In that sense, it can be distinguished from code-switching, which refers to a "juxtaposition within the same speech exchange of passages of speech belonging to two different grammatical systems or subsystems" (Gumperz, 1982) , where the alternation usually takes the form of two subsequent sentences. In the proposed research, code-mixing is considered as a phenomenon where linguistic units in Hindi are embedded in English text, or the other way around, but this can take place both at the sentence and word level. As a consequence, we will use the term code-mixing as an umbrella term that can imply both linguistic phenomena.The phenomenon of code-mixing frequently occurs in spoken languages, such as for instance a combination of English with Spanish (so-called Spanglish) or English with Hindi (so-called Hinglish). More recently, due to the rise of the web 2.0 and the proliferation of user-generated content on the internet, it is increasingly used in written text as well. This social media content is very important to automatically analyse the public opinion on products, politics or events (task of sentiment analysis), to analyse the different emotions of the public triggered by events (task of emotion detection), to observe trends, etc. Code-mixing is, however, very challenging for standard NLP pipelines, which are usually trained on large monolingual resources (e.g. English or Hindi). As a result, these tools cannot cope with code-mixing in the data. In addition, social media language is characterized by informal language use, containing a lot of abbreviations, spelling mistakes, flooding, emojis, emoticons and wrong grammatical constructions. In the case of Hinglish, an additional challenge is added because people do not only switch between languages (e.g. English and Hindi), but also use English phonetic typing to write Hindi words, instead of using the Devanagari script.In this paper, we propose a sentiment analysis approach for Hinglish tweets, containing a mix of English and transliterated Hindi. To this end, cross-lingual word embeddings for English and transliterated Hindi are constructed. The proposed research has been carried out in preparation of experiments for the SemEval 2020 shared task on sentiment analysis in code-mixed social media text (Das et al., 2020) . This task consists of predicting the sentiment (positive, negative, neutral) of a given code-mixed tweet. Whereas the SemEval task is designed for both English-Hindi and English-Spanish, we will only investigate sentiment analysis for English-Hindi code-mixed tweets in this research.The remainder of this paper is organized as follows. In Section 2., we summarize relevant related research, whereas Section 3. gives an overview of the data set used to train and evaluate the system. Section 4. describes our approach to sentiment analysis for code-mixed Hinglish data. In section 5., we report on the results and provide an analysis of the performance, while Section 6. concludes this paper and gives directions for future research.
0
Large pre-trained language models have enabled better performance on many NLP tasks, especially in few-shot settings (Brown et al., 2020; Schick and Schütze, 2021a; Wu and Dredze, 2020). More informative representations of textual inputs often lead to much higher downstream performance on NLP applications, which explains the rapid and general adoption of models such as (Ro)BERT(a) (Devlin et al., 2019; Liu et al., 2019), GPT-2 (Radford et al., 2019), and T5 (Raffel et al., 2020). However, while these models are often used to effectively encode inputs, fewer works have attempted to give models access to informative representations of labels as well. [Figure 1: Overview of our approach, label semantic aware pre-training (LSAP). We collect utterance-intent pairs and create new pairs from unlabeled Reddit and Twitter data, convert the intents to natural language, concatenate the utterance and intent, noise the concatenated sequence, and train a sequence-to-sequence model to denoise the sequence.] Most discriminative approaches to text classification only give the model access to label indices. A recent stream of work has obtained significant improvements in structured prediction tasks by using sequence-to-sequence (seq2seq) models to generate labels (Athiwaratkun et al., 2020; Paolini et al., 2021). Yet these generative approaches make use of label semantics (the meaning of class label names) only during fine-tuning and prediction. Thus, we propose Label Semantic Aware Pre-training (LSAP) to incorporate label semantics as well as input-label associations into the pre-training step (Figure 1). Our experiments show that LSAP yields higher performance with fewer fine-tuning examples in a variety of domains. Our contributions include the following: 1. A method to incorporate label semantics into generative models during pre-training. 2. A method for creating utterance-intent pairs for label semantic aware pre-training from unlabeled noisy data. 3. State-of-the-art few-shot performance on intent and topic classification datasets. Our code is publicly available. 1
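As a sketch of how a label-semantics-aware pre-training example might be assembled from an utterance-intent pair, the snippet below verbalizes the intent, concatenates it with the utterance, and corrupts the result so a seq2seq model can be trained to denoise it. The separator string, the verbalization rule, and the token-level masking scheme are our own illustrative assumptions, not necessarily the exact choices in the paper.

import random

random.seed(0)

def verbalize_intent(intent):
    """Turn a label name like 'book_flight' into natural language."""
    return intent.replace("_", " ")

def noise_sequence(tokens, mask_prob=0.3, mask_token="<mask>"):
    """Randomly replace tokens with a mask token (a crude stand-in for span corruption)."""
    return [mask_token if random.random() < mask_prob else t for t in tokens]

def make_pretraining_pair(utterance, intent, sep=" | intent: "):
    """Concatenate utterance and verbalized intent, then create a (noisy, clean) pair
    for training a sequence-to-sequence model to reconstruct the clean sequence."""
    clean = utterance + sep + verbalize_intent(intent)
    noisy = " ".join(noise_sequence(clean.split()))
    return noisy, clean

noisy, clean = make_pretraining_pair("i need a ticket to boston tomorrow", "book_flight")
print("input :", noisy)
print("target:", clean)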
0
Most Arabic natural language processing (NLP) tools and resources are developed to serve Modern Standard Arabic (MSA), the official written language in the Arab World. Using such tools to understand and process Dialectal Arabic (DA) is a challenging task because of the phonological and morphological differences between DA and MSA. In addition, there is no standard orthography for DA, which only complicates matters more. Some DA varieties, notably Egyptian Arabic, have received some attention lately and have a growing collection of resources that include annotated corpora and morphological analyzers and taggers. Gulf Arabic (GA), broadly defined as the variety of Arabic spoken in the countries of the Gulf Cooperation Council (Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and the United Arab Emirates), however, lags behind in that respect. In this paper, we present the Gumar Corpus, 1 a large-scale corpus of GA that includes a number of sub-dialects. We also present preliminary results on GA morphological annotation. Building a morphologically annotated GA corpus is a first step towards developing NLP applications, for searching, retrieving, machine-translating, and spellchecking GA text among other applications. The importance of processing and understanding GA text (as with all DA text) is increasing due to the exponential growth of socially generated dialectal content in social media and printed works (Sarnākh, 2014) , in addition to existing materials such as folklore and local proverbs that are found scattered on the web. The rest of this paper is structured as follows. We present some related work in Dialectal Arabic NLP in Section 2. This is followed by a background discussion on GA in Section 3. We then discuss the collection of the corpus and describe its genre in Section 4. We present our preliminary annotation study and evaluate it in Section 5. Finally, we present the Gumar Corpus web interface in Section 6. 1 Gumar /gumEr/ is the word for 'moon' in Gulf Arabic.
0
In the world of globalization and internationalization being multilingual allows for more business opportunities. This drives more individuals to learn additional languages, which in turn increases the number of language exams such as TOEFL and IELTS for English, TCF, DELF and DALF for French, telc, TestDaF, and Goethe-Institut for German, taken a few times every year. The Common European Framework of Reference (CEFR) offers a generalized scoring system of language proficiency of learners that consists of 6 levels independent of the language: A1, A2, B1, B2, C1 and C2. Automated Essay Scoring (AES) represents the task of automatically assessing texts written by learners using natural language processing tools. The verification and validation of a new AES approach are part of the REPROLANG 2020 challenge 1 along with many other research topics in the area of natural language processing. The goal of our work is to reproduce the results published in the original, candidate paper (Vajjala and Rama, 2018) , that explores the possibility of a multilingual approach of classifying texts and to extend their approach with a new corpus. A multilingual model represents a model trained on multiple languages and capable of classifying texts in multiple languages. In our paper, we discuss several issues:• Would building a multilingual model instead of a monolingual one have a great impact on the prediction metrics?• Which features could improve the prediction metrics for multilingual models? What is their impact on the monolingual model?• What are the limitations of the current model and how can it be improved?The remainder of our paper is organized as follows. Section 2. gives a short overview of the State of the Art research on AES approaches. A short description of the used corpora is presented in section 3., followed by the methodology applied in this paper in section 4. Section 5. shows the results of reproducing the original paper's experiments. Furthermore, section 6. describes the cross-lingual experiments. Additionally, the data-set is augmented and experimented with in section 7. Lastly, we give conclusions relevant to our research in section 9.
0
Cognitive task analysis (CTA) is a powerful tool for training, instructional design, and development of expert systems (Woods et al., 1989; Clark and Estes, 1996), focusing on yielding the knowledge and thought processes from domain experts (Schraagen et al., 2000). Traditional CTA methods require interviews with domain experts and parsing the interview transcript (transcript) into structured text describing processes (protocol, shown in Fig. 1). [Figure 1: An example of a CTA interview transcript and the human-parsed structured text (protocol). In the protocol, split by the highlighted line numbers indicating their sources in the transcript, the phrases (called protocol phrases) are abstractive descriptions of actions in the transcript. In the transcript, the highlighted numbers are line numbers, and the bolded text spans are those matched by protocol phrases. The highlighted line numbers are provided by human parsing and constrain the mapping of protocol phrases back to the transcript, but they are noisy and point back to a large scope of sentences instead of the text span we want to extract.] However, parsing transcripts requires heavy human labor, which becomes the major hurdle of scaling up CTA. Therefore, automated approaches to extract structured knowledge from CTA interview transcripts are important for expert systems using massive procedural data. A natural realization of automated CTA is to apply relation extraction (RE) models to parse interview text. However, the key challenge here is the lack of direct sentence-level supervision data for training RE models because the only available supervision, protocols, are document-level transcript summaries. Furthermore, the information about relations between procedural actions spreads all over the transcripts, which burdens the RE model to process global information of the text. [Figure 2: The framework of Automated CTA Transcripts Parsing. Text spans are extracted via the sequence labeling model, then the relations between text spans are extracted by the text span-pair relation extraction model (span-pair RE model). In the end we assemble the results into structured knowledge (flowchart) for CTA.] One previous work (Park and Motahari Nezhad, 2018) studies extracting procedure information on well-structured text using OpenIE and sentence pair RE models. In this work, however, we focus on unstructured conversational text (i.e., CTA interview transcripts) for which OpenIE is inapplicable. To address the above challenges, we develop a novel method to effectively extract and leverage weak (indirect) supervision signals from protocols. The key observation is that these protocols are structured at the phrase level (cf. Fig. 1). We split each protocol into a set of protocol phrases. Each protocol phrase is associated with a line number that points back to one sentence in the original transcript. Then, we can map these protocol phrases back to text spans in transcript sentences and obtain useful supervision signals from three aspects. First, these matched text spans provide direct supervision labels for training the text span extraction model. Second, the procedural relations between protocol phrases are transformed into relations between text spans within sentences, which enables us to train RE models. (Code is available at: https://github.com/cnrpman/procedural-extraction)
Finally, the local contexts around text spans provide strong signals and can enhance the mention representation in all RE models. Our approach consists of the following steps: (1) parse the original protocol into a collection of protocol phrases together with their procedural relations, using a deterministic finite automaton (DFA); (2) match the protocol phrases back to the text spans in transcripts using fuzzy matching (Pennington et al., 2014; Devlin et al., 2018); (3) generate a text span extraction dataset and train a sequence labeling model (Finkel et al., 2005; Liu et al., 2017) for text span extraction; (4) generate a text span-pair relation extraction (span-pair RE) dataset and fine-tune a pre-trained context-aware span-pair RE model (Devlin et al., 2018). With the trained models, we can automatically extract text spans summarizing actions from transcripts along with the procedural relations among them. Finally, we assemble the results into protocol knowledge, which lays the foundation for CTA. We explore our approach from multiple aspects: (i) we experimented with different fuzzy matching methods, relation extraction models, and sequence labeling models; (ii) we present models for solving context-aware span-pair RE; (iii) we evaluate the approach on real-world data with human annotations, which demonstrates that the best fuzzy matching method achieves 47.1% mention-level accuracy, the best sequence labeling model achieves 38.18% token-level accuracy, and the best text span-pair relation extraction model achieves 74.4% micro F1.
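As an illustration of step (2), a minimal fuzzy-matching baseline could score candidate transcript spans against a protocol phrase by string similarity; the cited GloVe/BERT matching is embedding-based, so the difflib baseline below is only one simple option we use for illustration, and the example phrase and sentence are invented.

from difflib import SequenceMatcher

def candidate_spans(sentence_tokens, max_len=8):
    """Enumerate all token spans of the transcript sentence up to max_len tokens."""
    n = len(sentence_tokens)
    for i in range(n):
        for j in range(i + 1, min(i + max_len, n) + 1):
            yield " ".join(sentence_tokens[i:j])

def best_matching_span(protocol_phrase, transcript_sentence):
    """Return the transcript span most similar to the protocol phrase."""
    tokens = transcript_sentence.split()
    best_span, best_score = None, -1.0
    for span in candidate_spans(tokens):
        score = SequenceMatcher(None, protocol_phrase.lower(), span.lower()).ratio()
        if score > best_score:
            best_span, best_score = span, score
    return best_span, best_score

phrase = "check the patient's blood pressure"
sentence = "so then I usually go ahead and check their blood pressure before anything else"
print(best_matching_span(phrase, sentence))

The selected span and its sentence then provide the weak labels used to build the span extraction and span-pair RE datasets in steps (3) and (4).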
0
Over the last few decades, there has been a call to enable speakers of indigenous minority languages to participate in government, education, and other domains of public life in their own language. Computational resources can play an important role in such efforts (Probst et al., 2002) . For example, semantically annotated corpora for minority languages can be used for information extraction to obtain situational awareness in disaster situations (Griffitt et al., 2018) , to link unstructured text in various languages to structured knowledge bases (Zhang and Rettinger, 2014) , and as scaffolding for machine translation into these languages . As of 2019, only 1705 out of the 7795 languages in Simons and Thomas (2019) , 22%, had any digital support. Even for languages with large native speaker populations and considerable political standing such as Farsi, computational resources are often limited (Feely et al., 2014) .The limited availability of (digital) data in such minority languages is only one hurdle to the creation of computational resources. Wherever data are available, they need to be provided with semantic annotations in order to be made maximally useful for the purposes described above.Semantic annotation allows unstructured text to be linked to representations such as Abstract Meaning Representations (AMR, Banarescu et al., 2013) or Discourse Representation Structures (DRS, Kamp and Reyle, 2013; Bos et al., 2017) . Such annotation schemes have become more crosslinguistically informed over the years. The DARPA Low Resource Languages for Emerging Incidents project (LORELEI), for one, has conducted shared annotation tasks with languages such as Tagalog, Yoruba and Somali (Griffitt et al., 2018) . The Uniform Meaning Representation project, on the other hand, aims to make English-based AMR crosslinguistically applicable (Van Gysel et al., 2021) .In practice, however, current annotation workflows have little chance of being applied to truly "no-resource" languages. Semantic annotation is typically done by speakers of the target language, as it is assumed that (native) speaker intuitions are necessary to make judgments required for semantic annotation. This may be feasible for "low-resource" languages with millions of speakers such as Oromo, Tigrinya, Uyghur, and Ukrainian -the "incident languages" in Griffitt et al. (2018) . For many others, including most of the 1500 languages with fewer than 1000 speakers (Eberhard et al., 2020) , such annotators are unlikely to be available for several reasons (see section 2). This paper therefore has two main goals. Firstly, it assesses whether the structure of UMR indeed makes it scalable to languages with a different typological profile than traditionally well-represented languages in NLP such as English and Mandarin. Secondly, it assesses whether non-speakers of an indigenous language trained in typological linguistics can successfully perform UMR semantic annotation of such languages based on morpheme-level glosses, utterance-level free translations, grammars, and dictionaries.Specifically, we present quantitative results of two annotation experiments using UMR to annotate texts in Kukama (Tupían, Peru), and Arapaho (Algonquian, US), and qualitative results of initial annotation efforts with Sanapaná (Enlhet-Enenlhet, Paraguay) and Navajo (Athabaskan, US). 
These four languages were chosen because (1) they represent a range of resource availability from noresource (Sanapaná) to low-resource (Arapaho, see Sections 2.1-2.4), (2) they are typologically diverse, representing more isolating (Kukama), agglutinating (Sanapaná), and polysynthetic (Arapaho, Navajo) types, and (3) co-authors of this paper have significant expertise in them. In section 2, the advantages of a workflow using linguistically trained non-speakers as annotators are laid out, and its necessity is illustrated through sociolinguistic sketches of the four languages at hand. Section 3 introduces UMR. Section 4 presents an overview of theoretical issues relating to the UMR guidelines encountered during the annotation of these four languages. Sections 5-6 present the inter-annotator agreement and adjudication results of the Kukama and Arapaho annotation experiments. Section 7 presents an overview of difficulties with the annotation workflow used in these two experiments.
0
The annotation type known as interlinear glossing allows linguists to describe the morphosyntactic makeup of words concisely and language-independently. While glosses as a linguistic metalanguage have a long tradition, systematic standards for interlinear glossing have only developed relatively recently; cf., e.g., the Leipzig glossing rules. 1 An example of an interlinear gloss is shown in (1), which is an Ewe serial verb construction taken from (Collins, 1997) with glosses in boldface. The combination of segmentation with English metalanguage labels for both lexical and grammatical segments allows linguists to observe how exactly the Ewe serial verb construction differs from the corresponding English construction. With the development of annotated linguistic corpora of various languages, glosses are starting to be used in a new way. Traditionally, only individual sentences or small text collections were glossed to illustrate examples. Nowadays glosses are systematically added to large corpora in order to provide structural information necessary for quantitative cross-linguistic research. Despite their great value for linguistic research, glossed corpora often remain rather small. The main reason for this is the fact that glossing requires a high level of linguistic expertise and is currently performed manually by trained experts. This practice makes the creation of glossed corpora extremely time-consuming and expensive. In order to obtain glossed corpora large enough for reliable quantitative analysis, the process of glossing needs to be automatised. In this paper, we present a series of experiments performed with this precise aim. 2 We divide the traditional glossing procedure into several steps and define an automatic processing pipeline, which consists of some standard and some custom natural language processing tasks. The data we use for our experiments come from the Chintang Language Corpus (Bickel et al., 2004; Bickel et al., 2015), an exceptionally large glossed corpus, which has been developed since 2004 and is presently hosted at the Department of Comparative Linguistics at the University of Zurich. 3
0
Adversarial attacks have recently been quite successful in foiling neural text classifiers (Jia and Liang, 2017; Ebrahimi et al., 2018). Universal adversarial attacks (Wallace et al., 2019; Behjati et al., 2019) are a sub-class of these methods where the same attack perturbation can be applied to any input to the target classifier. These attacks, being input-agnostic, point to more serious shortcomings in trained models since they do not require re-generation for each input. However, the attack sequences generated by these methods are often meaningless and irregular text (e.g., "zoning tapping fiennes" from Wallace et al. (2019)). While human readers can easily identify them as unnatural, one can also use simple heuristics to spot such attacks. For instance, the words in the above attack trigger have an average frequency of 14 compared to 6700 for words in benign inputs in the Stanford Sentiment Treebank (SST) (Socher et al., 2013). In this paper, we design natural attack triggers by using an adversarially regularized autoencoder (ARAE) (Zhao et al., 2018a), which consists of an auto-encoder and a generative adversarial network (GAN). We develop a gradient-based search over the noise vector space to identify triggers with a good attack performance. Our method, Natural Universal Trigger Search (NUTS), uses projected gradient descent with l2 norm regularization to avoid using out-of-distribution noise vectors and maintain the naturalness of text generated. 2 Our attacks perform quite well on two different classification tasks: sentiment analysis and natural language inference (NLI). For instance, the phrase combined energy efficiency, generated by our approach, results in a classification accuracy of 19.96% on negative examples on the Stanford Sentiment Treebank (Socher et al., 2013). Furthermore, we show that our attack text does better than prior approaches on three different measures: average word frequency, loss under the GPT-2 language model (Radford et al., 2019), and errors identified by two online grammar checking tools (scr; che). A human judgement study shows that up to 77% of raters find our attacks more natural than the baseline and almost 44% of humans find our attack triggers concatenated with benign inputs to be natural. This demonstrates that using techniques similar to ours, adversarial attacks could be made much harder to detect than previously thought and we require the development of appropriate defenses in the long term for securing our NLP models. (Our code is available at https://github.com/Hsuan-Tung/universal_attack_natural_trigger.)
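The following is a schematic of the projected-gradient search over the generator's noise vector. The generator and classifier below are untrained placeholder modules standing in for the ARAE decoder and the victim model, and the step size, ball radius, loss, and soft-token trick are illustrative assumptions rather than the paper's exact procedure.

import torch
import torch.nn as nn

torch.manual_seed(0)

noise_dim, vocab_size, trigger_len, num_classes = 16, 50, 3, 2

# Placeholder "generator": maps a noise vector to a soft distribution over trigger tokens.
generator = nn.Sequential(nn.Linear(noise_dim, trigger_len * vocab_size))
# Placeholder "classifier": consumes averaged soft token embeddings of the trigger.
embedding = nn.Embedding(vocab_size, 8)
classifier = nn.Linear(8, num_classes)

def classify_trigger(soft_tokens):
    # soft_tokens: (trigger_len, vocab_size) rows of soft one-hot weights
    emb = soft_tokens @ embedding.weight          # (trigger_len, 8)
    return classifier(emb.mean(dim=0, keepdim=True))

target_class = torch.tensor([1])                  # class the trigger should force
z = torch.zeros(noise_dim, requires_grad=True)
loss_fn = nn.CrossEntropyLoss()
step_size, radius = 0.5, 2.0                      # l2 ball keeps z close to the noise distribution

for _ in range(50):
    logits = generator(z).view(trigger_len, vocab_size).softmax(dim=-1)
    loss = loss_fn(classify_trigger(logits), target_class)
    loss.backward()
    with torch.no_grad():
        z -= step_size * z.grad                   # gradient step toward the target class
        norm = z.norm()
        if norm > radius:                         # project back onto the l2 ball
            z *= radius / norm
    z.grad.zero_()

trigger_ids = generator(z).view(trigger_len, vocab_size).argmax(dim=-1)
print("trigger token ids:", trigger_ids.tolist())

The projection step is what keeps the optimized noise vector in a region the generator was trained on, which is the mechanism the method relies on to keep the decoded trigger natural-sounding.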
0
As shown in Figure 1, there are several variations in annotations of dependencies. A famous example is the head choice in a prepositional phrase (e.g., to a bar), which diverges in the two trees. Though various annotation schemes have been proposed so far (Hajic et al., 2001; Johansson and Nugues, 2007; de Marneffe and Manning, 2008; McDonald et al., 2013), recently the Universal Dependencies (UD) (de Marneffe et al., 2014) has gained much popularity and is becoming the annotation standard across languages. The upper tree in Figure 1 is annotated in UD. Practically, however, UD may not be the optimal choice. In UD a content word consistently dominates a function word, but past work points out that this makes some parser decisions more difficult than the conventional style centering on function words, e.g., the tree in the lower part of Figure 1 (Schwartz et al., 2012; Ivanova et al., 2013). To overcome this issue, in this paper, we show the effectiveness of a back-and-forth conversion approach where we train a model and parse sentences in an annotation format with higher parsability, and then reconvert the parser output into the UD scheme. Figure 1 shows an example of our conversion. We use the function head trees (below) as an intermediate representation. This is not the first attempt to improve dependency parsing accuracy with tree conversions. A positive result is reported in Nilsson et al. (2006) using the Prague Dependency Treebank. For the conversion of content and function heads in UD, however, the effect is still inconclusive. Using English UD data, Silveira and Manning (2015) report a negative result, which they argue is due to error propagation at backward conversions, in particular in copula constructions that often incur drastic changes of the structure. Rosa (2015) reports the advantage of function heads in the adposition construction, but the data is HamleDT (Zeman et al., 2012) rather than UD and the conversion target is conversely too restrictive. Our main contribution is to show that the back-and-forth conversion can bring consistent accuracy improvements across languages in UD, by limiting the conversion targets to simpler ones around function words while covering many linguistic phenomena. Another limitation in previous work is the parsers: MSTParser or MaltParser is often used, but they are not state-of-the-art today. We complement this by showing the effectiveness of our approach even with a modern parser with rich features. We also provide an in-depth analysis to explore when and why our conversion brings higher parsability than the original UD.
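To make the conversion concrete for the adposition case, here is a minimal sketch over a toy CoNLL-U-style token list. Only the case/ADP construction is handled, and the relabeling (e.g., the "pobj" label on the demoted noun) and the inverse step are our own simplifications of the full conversion rules.

# Each token: dict with id, form, upos, head (id of its head, 0 = root), deprel.
def to_function_head(tokens):
    """UD (content head): the ADP attaches to the noun via 'case'.
    Function head: the ADP becomes the head; the noun attaches to it."""
    out = [dict(t) for t in tokens]
    by_id = {t["id"]: t for t in out}
    for t in out:
        if t["upos"] == "ADP" and t["deprel"] == "case":
            noun = by_id[t["head"]]
            t["head"], t["deprel"] = noun["head"], noun["deprel"]  # ADP takes over the noun's attachment
            noun["head"], noun["deprel"] = t["id"], "pobj"          # noun now depends on the ADP
    return out

def to_content_head(tokens):
    """Inverse conversion back to the UD scheme."""
    out = [dict(t) for t in tokens]
    for t in out:
        if t["upos"] == "ADP":
            deps = [x for x in out if x["head"] == t["id"] and x["deprel"] == "pobj"]
            if deps:
                noun = deps[0]
                noun["head"], noun["deprel"] = t["head"], t["deprel"]  # noun regains the outer attachment
                t["head"], t["deprel"] = noun["id"], "case"            # ADP re-attaches to the noun
    return out

# "went to a bar": under UD, 'to' is a case marker of 'bar'.
sent = [
    {"id": 1, "form": "went", "upos": "VERB", "head": 0, "deprel": "root"},
    {"id": 2, "form": "to",   "upos": "ADP",  "head": 4, "deprel": "case"},
    {"id": 3, "form": "a",    "upos": "DET",  "head": 4, "deprel": "det"},
    {"id": 4, "form": "bar",  "upos": "NOUN", "head": 1, "deprel": "obl"},
]
fh = to_function_head(sent)
print([(t["form"], t["head"], t["deprel"]) for t in fh])
print([(t["form"], t["head"], t["deprel"]) for t in to_content_head(fh)])

Because the round trip is lossless for this construction, a parser can be trained and run on the function-head trees and its output mapped back to UD for evaluation.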
0
Monolingual sentence rewriting encompasses a variety of tasks for which the goal is to generate an output sentence with similar meaning to an input sentence, in the same language. The generated sentences can be called sentential paraphrases. Some tasks that generate sentential paraphrases include sentence simplification, compression, grammatical error correction, or expanding multiple reference sets for machine translation. For researchers not focused on these tasks, it can be difficult to develop a one-off system due to resource requirements.To address this need, we are releasing a black box for generating sentential paraphrases: machine translation language packs. The language packs consist of prepackaged models for the Joshua 6 decoder (Post et al., 2015 ) and a monolingual "translation" grammar derived from the Paraphrase Database (PPDB) 2.0 (Pavlick et al., 2015) . The PPDB provides tremendous coverage over English text, containing more than 200 million paraphrases extracted from 100 million sentences (Ganitkevitch et al., 2013) . For the first time, any researcher with Java 7 and Unix (there are no other dependencies) can generate sentential paraphrases without developing their own system. Additionally, the language packs include a web tool for interactively paraphrasing sentences and adjusting the parameters.The language packs contain everything needed to generate sentential paraphrases in English:• a monolingual synchronous grammar, • a language model, • a ready-to-use configuration file,• the Joshua 6 runtime, so that no compilation is necessary, • a shell script to invoke the Joshua decoder, and • a web tool for interactive decoding and parameter configuration.The system is invoked by a single command, either on a batch of sentences or as an interactive server. Users can choose which size grammar to include in the language pack, corresponding to the PPDB pack sizes (S through XXXL).In the rest of the paper, we will describe the translation model and grammar, provide examples of output, and explain how the configuration can be adjusted for specific needs.
0
Recent years have seen growing interest in the task of Semantic Role Labeling (SRL) of natural language text (sometimes called "shallow semantic parsing"). The task is usually described as the act of identifying the semantic roles, which are the set of semantic properties and relationships defined over constituents of a sentence, given a semantic context. The creation of resources that document the realization of semantic roles in natural language texts, such as FrameNet (Fillmore and Baker, 2010; Ruppenhofer et al., 2010) and PropBank (Kingsbury and Palmer, 2002) have advanced the field of semantic analysis no end and have allowed the development of learning algorithms for automatically analyzing the semantic structure of text. Shallow semantic analysis has been shown to contribute to the advancement of a wide spectrum of natural language processing tasks, ranging from information extraction (Surdeanu et al., 2003) and question answering (Shen and Lapata, 2007) , to machine translation (Wu and Fung, 2009) and abstractive summarization (Melli et al., 2005) .FrameNet (Fillmore and Baker, 2010; Ruppenhofer et al., 2010 ) is a human-annotated linguistic resource with rich semantic content based on the linguistic theory of Frame Semantics proposed by Fillmore (1982) . FrameNet defines a formal structure for semantic frames, and various relationships between and within them. Each frame contains a list of frame-evoking words which also serve as the predicates of events described by the frames. These words are called Frame Evoking Elements (FEEs) or Lexical Units (LUs). Additionally, each frame defines a list of event-participants and a list of constraints on and relationships between these participants. The participants are called Frame Elements (or FEs). Finally 1 , and perhaps most importantly, FrameNet contains human-annotated examples of realizations of frames and their structures in natural language.The original FrameNet project has been adapted and ported to multiple languages. The most active international FrameNet teams include the Swedish FrameNet (SweFN) covering close to 1,200 frames with 34K LUs (Ahlberg et al., 2014) ; the Japanese FrameNet (JFN) with 565 frames, 8,500 LUs and 60K annotated example sentences (Ohara, 2013) ; and FrameNet Brazil (FN-Br) covering 179 frames, 196 LUs and 12K annotated sentences (Torrent and Ellsworth, 2013) . Inspired by the ideas developed by the Swedish FrameNet++ project (Friberg Heppin and Voionmaa, 2012) and by Petruck (Petruck, 2005; Petruck, 2009) and Boas (Boas, 2011) , we have started the development of a Hebrew FrameNet, a semi-automatic translation of the English FrameNet. In this paper, we present this new resource, the methods we used to develop it and the specific linguistic issues we faced while addressing frame annotations in a morphologically rich language like Hebrew. In the rest of this paper, we first present the linguistic resources and supporting infrastructure used as a starting point for the project. We then discuss the process adopted to develop the Hebrew FrameNet resource, the tools developed, and how we addressed the linguistic issues we faced. Finally, we present the current state of the project and discuss future work. The project includes a collaborative web-based annotation tool 2 which supports browsing, annotating and searching the Hebrew FrameNet. We are starting to train and test automatic Hebrew SRL systems on the annotated data that we are collecting.
0
Privacy has emerged as a topic of strategic consequence across all computational fields. Differential Privacy (DP) is a mathematical definition of privacy proposed by Dwork et al. (2006). Ever since its introduction, DP has been widely adopted and as of today, it has become the de facto privacy definition in the academic world with also wide adoption in industry, e.g., (Erlingsson et al., 2014; Dajani et al., 2017; Team, 2017; Uber Security, 2017). DP provides provable protection against adversaries with arbitrary side information and computational power, allows clear quantification of privacy losses, and satisfies graceful composition over multiple accesses to the same data. In DP, two parameters ε and δ control the level of privacy. Very roughly, ε is an upper bound on the amount of influence a single data point has on the information released, and δ is the probability that this bound fails to hold, so the definition becomes more stringent as ε, δ → 0. The definition with δ = 0 is referred to as pure differential privacy, and with δ > 0 is referred to as approximate differential privacy. Within the field of Natural Language Processing (NLP), the traditional approach for privacy was to apply anonymization techniques such as k-anonymity (Sweeney, 2002) and its variants. While this offers an intuitive way of expressing privacy guarantees as a function of an aggregation parameter k, all such methods are provably non-private (Korolova et al., 2009). Given the sheer increase in data gathering occurring across a multiplicity of connected platforms, a great deal of which is done via user-generated voice conversations, text queries, or other language-based metadata (e.g., user annotations), it is imperative to advance the development of DP techniques in NLP. Vector embeddings are a popular approach for capturing the "meaning" of text and a form of unsupervised learning useful for downstream tasks. Word embeddings were popularized via embedding schemes such as WORD2VEC (Mikolov et al., 2013), GLOVE (Pennington et al., 2014), and FASTTEXT (Bojanowski et al., 2017). There is also a growing literature on creating embeddings for sentences, documents, and other textual entities, in addition to embeddings in other domains such as in computer vision (Goodfellow et al., 2016). Recent works such as (Fernandes et al., 2019; Feyisetan et al., 2019) have attempted to directly adapt the methods of DP to word embeddings by borrowing ideas from the privacy methods used for map location data. In the DP literature, one standard way of achieving privacy is by adding properly calibrated noise to the output of a function (Dwork et al., 2006). This is also the premise behind these previously proposed DP-for-text techniques, which are based on adding noise to the vector representation of words in a high dimensional embedding space and additional post-processing steps. The privacy guarantees of applying such a method are quite straightforward. However, the main issue is that the magnitude of the DP privacy noise scales with the dimensionality of the vector, which leads to a considerable degradation in utility when these techniques are applied to vectors produced through popular embedding techniques. In this paper, we seek to overcome this curse of dimensionality arising through the differential privacy requirement.
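For reference, the (ε, δ) guarantee that these two parameters quantify is the standard definition due to Dwork et al. (this is the textbook formulation, not notation introduced by the paper above): a randomized mechanism \mathcal{M} is (\varepsilon, \delta)-differentially private if for all neighboring datasets D, D' (differing in one record) and all measurable output sets S,

\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\, \Pr[\mathcal{M}(D') \in S] + \delta .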
Also unlike previous results, which were focused on word embeddings, we focus on the general problem of privately releasing vector embeddings, thus making our scheme more widely applicable. Vector representations of words, sentences, and documents have all become basic building blocks in NLP pipelines and algorithms. Hence, it is natural to consider privacy mechanisms that target these representations. The most relevant to this paper is a previously proposed privacy mechanism that works by computing the vector representation x of a word in the embedding space, applying noise N calibrated to the global metric sensitivity to obtain a perturbed vector v = x + N, and then swapping the original word with another word whose embedding is closest to v. The authors showed that this mechanism satisfies the (ε, 0)-Lipschitz privacy definition. However, the issue with this mechanism is that the magnitude (norm) of the added noise is proportional to d, which we avoid by projecting these vectors down before the noise addition step. Our focus here is also more general and not just on word embeddings. Additionally, we provide theoretical guarantees on our privatized vectors. We experimentally compare with this approach. The privacy mechanisms of (Fernandes et al., 2019; Feyisetan et al., 2019) are also based on similar noise addition ideas. However, (Fernandes et al., 2019) utilized the Earth mover metric to measure distances (instead of Euclidean), and (Feyisetan et al., 2019) perturbed vector representations of words in high dimensional hyperbolic space (instead of a real space). In this paper, we focus on the Euclidean space as it captures the most common choice of metric space with vector models. Over the past decade, a large body of work has been developed to design basic algorithms and tools for achieving DP, understanding the privacy-utility trade-offs in different data access setups, and on integrating DP with machine learning and statistical inference. We refer the reader to (Dwork and Roth, 2013) for a more comprehensive overview. Dimensionality reduction for word embeddings using PCA was explored in (Raunak et al., 2019) for computational efficiency purposes. In this paper, we use random projections for dimensionality reduction, which helps with reducing the magnitude of noise needed for privacy. Another issue with a PCA-like scheme is that there are strong lower bounds (that scale with the dimension d of the vectors) on the amount of distortion needed for achieving differentially private PCA in the local privacy model (Wang and Xu, 2020). Random projections have been used as a tool to design differentially private algorithms in other problem settings too (Blocki et al., 2012; Wang et al., 2015; Kenthapadi et al., 2013; Zhou et al., 2009; Kasiviswanathan and Jin, 2016).
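A schematic of the project-then-perturb idea follows; the projection dimension, the use of per-coordinate Laplace noise, and the sensitivity value are illustrative assumptions, since the actual calibration depends on the paper's formal sensitivity analysis.

import numpy as np

rng = np.random.default_rng(0)

def privatize_embedding(x, k=20, epsilon=1.0, sensitivity=1.0):
    """Project a d-dimensional embedding down to k dimensions with a random
    Gaussian map, then add noise whose magnitude grows with k rather than d."""
    d = x.shape[0]
    # Johnson-Lindenstrauss style random projection (approximately norm-preserving).
    proj = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))
    y = proj @ x
    # Additive Laplace noise with per-coordinate scale sensitivity / epsilon (illustrative calibration).
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=k)
    return y + noise

x = rng.normal(size=300)              # e.g., a 300-dimensional word or sentence embedding
z = privatize_embedding(x, k=20, epsilon=1.0)
print(z.shape, float(np.linalg.norm(z)))

Because the noise is added in the k-dimensional projected space, its total magnitude no longer scales with the original embedding dimension d, which is the intuition behind avoiding the curse of dimensionality described above.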
0