diff --git "a/data/validation.json" "b/data/validation.json" --- "a/data/validation.json" +++ "b/data/validation.json" @@ -1,500 +1,500 @@ -{"ID":"adriaens-1989-parallel","url":"https:\/\/aclanthology.org\/W89-0232","title":"The Parallel Expert Parser: A Meaning-Oriented, Lexically-Guided, Parallel-Interactive Model of Natural Language Understanding","abstract":"International Parsing Workshop '89","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1989,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"bandyopadhyay-etal-2021-university","url":"https:\/\/aclanthology.org\/2021.wmt-1.46","title":"The University of Maryland, College Park Submission to Large-Scale Multilingual Shared Task at WMT 2021","abstract":"This paper describes the system submitted to Large-Scale Multilingual Shared Task (Small Task #2) at WMT 2021. It is based on the massively multilingual open-source model FLO-RES101_MM100 model, with selective finetuning. Our best-performing system reported a 15.72 average BLEU score for the task.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"zerva-ananiadou-2015-event","url":"https:\/\/aclanthology.org\/W15-3804","title":"Event Extraction in pieces:Tackling the partial event identification problem on unseen corpora","abstract":"Biomedical event extraction systems have the potential to provide a reliable means of enhancing knowledge resources and mining the scientific literature. However, to achieve this goal, it is necessary that current event extraction models are improved, such that they can be applied confidently to unseen data with a minimal rate of error. Motivated by this requirement, this work targets a particular type of error, namely partial events, where an event is missing one or more arguments. Specifically, we attempt to improve the performance of a state-of-the-art event extraction tool, EventMine, when applied to a new cancer pathway curation corpus. We propose a post-processing ranking approach based on relaxed constraints, in order to reconsider the candidate arguments for each event trigger, and suggest possible new arguments. The proposed methodology, applicable to the output of any event extraction system, achieves an improvement in argument recall of 2%-4% when applied to EventMine output, and thus constitutes a promising direction for further developments.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"This work was supported by the DARPA funded Big Mechanism Project, as well as by the EPSRC funded Centre for Doctoral Training in Computer Science scholarship. We would like to thank Dr. Riza Theresa Batista-Navarro and Dr. Ioannis Korkontzelos for the useful discussions and feedback at critical points. 
Finally, we would like to thank our referees for their constructive input.","year":2015,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"clarke-2009-context","url":"https:\/\/aclanthology.org\/W09-0215","title":"Context-theoretic Semantics for Natural Language: an Overview","abstract":"We present the context-theoretic framework, which provides a set of rules for the nature of composition of meaning based on the philosophy of meaning as context. Principally, in the framework the composition of the meaning of words can be represented as multiplication of their representative vectors, where multiplication is distributive with respect to the vector space. We discuss the applicability of the framework to a range of techniques in natural language processing, including subsequence matching, the lexical entailment model of Dagan et al. (2005), vector-based representations of taxonomies, statistical parsing and the representation of uncertainty in logical semantics.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I am very grateful to my supervisor David Weir for all his help in the development of these ideas, and to Rudi Lutz and the anonymous reviewers for many useful comments and suggestions.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"yamakoshi-etal-2021-evaluation","url":"https:\/\/aclanthology.org\/2021.wat-1.12","title":"Evaluation Scheme of Focal Translation for Japanese Partially Amended Statutes","abstract":"For updating the translations of Japanese statutes based on their amendments, we need to consider the translation \"focality;\" that is, we should only modify expressions that are relevant to the amendment and retain the others to avoid misconstruing its contents. In this paper, we introduce an evaluation metric and a corpus to improve focality evaluations. Our metric is called an Inclusive Score for DIfferential Translation: (ISDIT). ISDIT consists of two factors: (1) the n-gram recall of expressions unaffected by the amendment and (2) the n-gram precision of the output compared to the reference. This metric supersedes an existing one for focality by simultaneously calculating the translation quality of the changed expressions in addition to that of the unchanged expressions. We also newly compile a corpus for Japanese partially amendment translation that secures the focality of the post-amendment translations, while an existing evaluation corpus does not. 
With the metric and the corpus, we examine the performance of existing translation methods for Japanese partially amendment translations.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Decent Work and Economic Growth","goal2":"Partnership for the goals","goal3":"Peace, Justice and Strong Institutions","acknowledgments":"This work was partly supported by JSPS KAK-ENHI Grant Number 18H03492 and 21H03772.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":1,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":1} -{"ID":"kalpakchi-boye-2021-bert","url":"https:\/\/aclanthology.org\/2021.inlg-1.43","title":"BERT-based distractor generation for Swedish reading comprehension questions using a small-scale dataset","abstract":"An important part when constructing multiplechoice questions (MCQs) for reading comprehension assessment are the distractors, the incorrect but preferably plausible answer options. In this paper, we present a new BERTbased method for automatically generating distractors using only a small-scale dataset. We also release a new such dataset of Swedish MCQs (used for training the model), and propose a methodology for assessing the generated distractors. Evaluation shows that from a student's perspective, our method generated one or more plausible distractors for more than 50% of the MCQs in our test set. From a teacher's perspective, about 50% of the generated distractors were deemed appropriate. We also do a thorough analysis of the results.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by Vinnova (Sweden's Innovation Agency) within project 2019-02997. We would like to thank the anonymous reviewers for their comments, as well as Gabriel Skantze and Bram Willemsen for their helpful feedback prior to the submission of the paper.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"lee-etal-2018-character","url":"https:\/\/aclanthology.org\/C18-1273","title":"Character-Level Feature Extraction with Densely Connected Networks","abstract":"Generating character-level features is an important step for achieving good results in various natural language processing tasks. To alleviate the need for human labor in generating hand-crafted features, methods that utilize neural architectures such as Convolutional Neural Network (CNN) or Recurrent Neural Network (RNN) to automatically extract such features have been proposed and have shown great results. However, CNN generates position-independent features, and RNN is slow since it needs to process the characters sequentially. In this paper, we propose a novel method of using a densely connected network to automatically extract character-level features. The proposed method does not require any language or task specific assumptions, and shows robustness and effectiveness while being faster than CNN-or RNN-based methods. 
Evaluating this method on three sequence labeling tasks-slot tagging, Part-of-Speech (POS) tagging, and Named-Entity Recognition (NER)-we obtain state-of-the-art performance with a 96.62 F1-score and 97.73% accuracy on slot tagging and POS tagging, respectively, and comparable performance to the state-of-the-art 91.13 F1-score on NER.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"ananthakrishnan-etal-2010-semi","url":"https:\/\/aclanthology.org\/W10-2916","title":"A Semi-Supervised Batch-Mode Active Learning Strategy for Improved Statistical Machine Translation","abstract":"The availability of substantial, in-domain parallel corpora is critical for the development of high-performance statistical machine translation (SMT) systems. Such corpora, however, are expensive to produce due to the labor intensive nature of manual translation. We propose to alleviate this problem with a novel, semisupervised, batch-mode active learning strategy that attempts to maximize indomain coverage by selecting sentences, which represent a balance between domain match, translation difficulty, and batch diversity. Simulation experiments on an English-to-Pashto translation task show that the proposed strategy not only outperforms the random selection baseline, but also traditional active learning techniques based on dissimilarity to existing training data. Our approach achieves a relative improvement of 45.9% in BLEU over the seed baseline, while the closest competitor gained only 24.8% with the same number of selected sentences.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"zhao-grishman-2005-extracting","url":"https:\/\/aclanthology.org\/P05-1052","title":"Extracting Relations with Integrated Information Using Kernel Methods","abstract":"Entity relation detection is a form of information extraction that finds predefined relations between pairs of entities in text. This paper describes a relation detection approach that combines clues from different levels of syntactic processing using kernel methods. Information from three different levels of processing is considered: tokenization, sentence parsing and deep dependency analysis. Each source of information is represented by kernel functions. Then composite kernels are developed to integrate and extend individual kernels so that processing errors occurring at one level can be overcome by information from other levels. We present an evaluation of these methods on the 2004 ACE relation detection task, using Support Vector Machines, and show that each level of syntactic processing contributes useful information for this task. When evaluated on the official test data, our approach produced very competitive ACE value scores. We also compare the SVM with KNN on different kernels.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by the Defense Advanced Research Projects Agency under Grant N66001-04-1-8920 from SPAWAR San Diego, and by the National Science Foundation under Grant ITS-0325657. 
This paper does not necessarily reflect the position of the U.S. Government. We wish to thank Adam Meyers of the NYU NLP group for his help in producing deep dependency analyses.","year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"aloraini-etal-2020-neural","url":"https:\/\/aclanthology.org\/2020.crac-1.11","title":"Neural Coreference Resolution for Arabic","abstract":"No neural coreference resolver for Arabic exists, in fact we are not aware of any learning-based coreference resolver for Arabic since (Bj\u00f6rkelund and Kuhn, 2014). In this paper, we introduce a coreference resolution system for Arabic based on Lee et al's end-to-end architecture combined with the Arabic version of bert and an external mention detector. As far as we know, this is the first neural coreference resolution system aimed specifically to Arabic, and it substantially outperforms the existing state-of-the-art on OntoNotes 5.0 with a gain of 15.2 points conll F1. We also discuss the current limitations of the task for Arabic and possible approaches that can tackle these challenges.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by the DALI project, ERC Grant 695662, in part by the Human Rights in the Era of Big Data and Technology (HRBDT) project, ESRC grant ES\/M010236\/1.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"jager-etal-2017-using","url":"https:\/\/aclanthology.org\/E17-1113","title":"Using support vector machines and state-of-the-art algorithms for phonetic alignment to identify cognates in multi-lingual wordlists","abstract":"Most current approaches in phylogenetic linguistics require as input multilingual word lists partitioned into sets of etymologically related words (cognates). Cognate identification is so far done manually by experts, which is time consuming and as of yet only available for a small number of well-studied language families. Automatizing this step will greatly expand the empirical scope of phylogenetic methods in linguistics, as raw wordlists (in phonetic transcription) are much easier to obtain than wordlists in which cognate words have been fully identified and annotated, even for under-studied languages. A couple of different methods have been proposed in the past, but they are either disappointing regarding their performance or not applicable to larger datasets. Here we present a new approach that uses support vector machines to unify different state-of-the-art methods for phonetic alignment and cognate detection within a single framework. 
Training and evaluating these method on a typologically broad collection of gold-standard data shows it to be superior to the existing state of the art.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by the ERC Advanced Grant 324246 EVOLAEMP (GJ, PS), the DFG-KFG 2237 Words, Bones, Genes, Tools (GJ),","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"chang-2020-taiwan","url":"https:\/\/aclanthology.org\/2020.rocling-1.38","title":"The Taiwan Biographical Database (TBDB): An Introduction","abstract":"In the future, we will continue to increase both the quality and quantity of the database and also develop new analysis tools.\nThis speech introduces the development of a text retrieval and mining system for Taiwanese historical people --Taiwan Biographical Database (TBDB). It describes the characteristics of personages in TBDB, highlights the system architecture and preliminary achievement of TBDB. Finally, this talk elaborates on the lessons learned through the creation of TBDB, and the future plans.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"pitenis-etal-2020-offensive","url":"https:\/\/aclanthology.org\/2020.lrec-1.629","title":"Offensive Language Identification in Greek","abstract":"As offensive language has become a rising issue for online communities and social media platforms, researchers have been investigating ways of coping with abusive content and developing systems to detect its different types: cyberbullying, hate speech, aggression, etc. With a few notable exceptions, most research on this topic so far has dealt with English. This is mostly due to the availability of language resources for English. To address this shortcoming, this paper presents the first Greek annotated dataset for offensive language identification: the Offensive Greek Tweet Dataset (OGTD). OGTD is a manually annotated dataset containing 4,779 posts from Twitter annotated as offensive and not offensive. Along with a detailed description of the dataset, we evaluate several computational models trained and tested on this data.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"We would like to acknowledge Maria, Raphael and Anastasia, the team of volunteer annotators that provided their free time and efforts to help us produce v1.0 of the dataset of Greek tweets for offensive language detection, as well as Fotini and that helped review tweets with ambivalent labels. Additionally, we would like to express our sincere gratitude to the LightTag team and especially to Tal Perry for granting us free use for their annotation platform.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} -{"ID":"xu-etal-2021-syntax","url":"https:\/\/aclanthology.org\/2021.acl-long.420","title":"Syntax-Enhanced Pre-trained Model","abstract":"We study the problem of leveraging the syntactic structure of text to enhance pre-trained models such as BERT and RoBERTa. 
Existing methods utilize syntax of text either in the pre-training stage or in the fine-tuning stage, so that they suffer from discrepancy between the two stages. Such a problem would lead to the necessity of having human-annotated syntactic information, which limits the application of existing methods to broader scenarios. To address this, we present a model that utilizes the syntax of text in both pre-training and fine-tuning stages. Our model is based on Transformer with a syntax-aware attention layer that considers the dependency tree of the text. We further introduce a new pre-training task of predicting the syntactic distance among tokens in the dependency tree. We evaluate the model on three downstream tasks, including relation classification, entity typing, and question answering. Results show that our model achieves state-of-the-art performance on six public benchmark datasets. We have two major findings. First, we demonstrate that infusing automatically produced syntax of text improves pre-trained models. Second, global syntactic distances among tokens bring larger performance gains compared to local head relations between contiguous tokens. 1 * Work is done during internship at Microsoft. \u2020 For questions, please contact D. Tang and Z. Xu.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to Yeyun Gong, Ruize Wang ","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"aloraini-etal-2020-qmul","url":"https:\/\/aclanthology.org\/2020.wanlp-1.31","title":"The QMUL\/HRBDT contribution to the NADI Arabic Dialect Identification Shared Task","abstract":"We present the Arabic dialect identification system that we used for the country-level subtask of the NADI challenge. Our model consists of three components: BiLSTM-CNN, character-level TF-IDF, and topic modeling features. We represent each tweet using these features and feed them into a deep neural network. We then add an effective heuristic that improves the overall performance. We achieved an F1-Macro score of 20.77% and an accuracy of 34.32% on the test set. The model was also evaluated on the Arabic Online Commentary dataset, achieving results better than the state-of-the-art.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research was in part supported by the UK Economic and Social Research Council (ESRC) through the Big Data Human Rights and Technology project (grant number ES\/M010236\/1).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"van-de-cruys-villada-moiron-2007-semantics","url":"https:\/\/aclanthology.org\/W07-1104","title":"Semantics-based Multiword Expression Extraction","abstract":"This paper describes a fully unsupervised and automated method for large-scale extraction of multiword expressions (MWEs) from large corpora. The method aims at capturing the non-compositionality of MWEs; the intuition is that a noun within a MWE cannot easily be replaced by a semantically similar noun. To implement this intuition, a noun clustering is automatically extracted (using distributional similarity measures), which gives us clusters of semantically related nouns. 
Next, a number of statistical measures-based on selectional preferences-is developed that formalize the intuition of non-compositionality. Our approach has been tested on Dutch, and automatically evaluated using Dutch lexical resources.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was carried out as part of the research program IRME STEVIN project. We would also like to thank Gertjan van Noord and the two anonymous reviewers for their helpful comments on an earlier version of this paper.","year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"wang-etal-2019-youmakeup","url":"https:\/\/aclanthology.org\/D19-1517","title":"YouMakeup: A Large-Scale Domain-Specific Multimodal Dataset for Fine-Grained Semantic Comprehension","abstract":"Multimodal semantic comprehension has attracted increasing research interests in recent years, such as visual question answering and caption generation. However, due to the data limitation, fine-grained semantic comprehension which requires to capture semantic details of multimodal contents has not been well investigated. In this work, we introduce \"YouMakeup\", a large-scale multimodal instructional video dataset to support finegrained semantic comprehension research in specific domain. YouMakeup contains 2,800 videos from YouTube, spanning more than 420 hours in total. Each video is annotated with a sequence of natural language descriptions for instructional steps, grounded in temporal video range and spatial facial areas. The annotated steps in a video involve subtle difference in actions, products and regions, which require fine-grained understanding and reasoning both temporally and spatially. In order to evaluate models' ability for fined-grained comprehension, we further propose two groups of tasks including generation tasks and visual question answering tasks from different aspects. We also establish a baseline of step caption generation for future comparison. The dataset will be publicly available at https:\/\/ github.com\/AIM3-RUC\/YouMakeup to support research investigation in fine-grained semantic comprehension.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by National Natural Science Foundation of China (No. 61772535), Beijing Natural Science Foundation (No. 4192028), and National Key Research and Development Plan (No. 2016YFB1001202). We would like to thank our group member Jingjun Liang for his help in building the annotation website and all the annotators for their careful annotations.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"pettersson-etal-2013-normalisation","url":"https:\/\/aclanthology.org\/W13-5617","title":"Normalisation of Historical Text Using Context-Sensitive Weighted Levenshtein Distance and Compound Splitting","abstract":"Natural language processing for historical text imposes a variety of challenges, such as to deal with a high degree of spelling variation. Furthermore, there is often not enough linguistically annotated data available for training part-of-speech taggers and other tools aimed at handling this specific kind of text. 
In this paper we present a Levenshtein-based approach to normalisation of historical text to a modern spelling. This enables us to apply standard NLP tools trained on contemporary corpora on the normalised version of the historical input text. In its basic version, no annotated historical data is needed, since the only data used for the Levenshtein comparisons are a contemporary dictionary or corpus. In addition, a (small) corpus of manually normalised historical text can optionally be included to learn normalisation for frequent words and weights for edit operations in a supervised fashion, which improves precision. We show that this method is successful both in terms of normalisation accuracy, and by the performance of a standard modern tagger applied to the historical text. We also compare our method to a previously implemented approach using a set of handwritten normalisation rules, and we see that the Levenshtein-based approach clearly outperforms the hand-crafted rules. Furthermore, the experiments were carried out on Swedish data with promising results and we believe that our method could be successfully applicable to analyse historical text for other languages, including those with less resources.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"farajian-etal-2017-multi","url":"https:\/\/aclanthology.org\/W17-4713","title":"Multi-Domain Neural Machine Translation through Unsupervised Adaptation","abstract":"We investigate the application of Neural Machine Translation (NMT) under the following three conditions posed by realworld application scenarios. First, we operate with an input stream of sentences coming from many different domains and with no predefined order. Second, the sentences are presented without domain information. Third, the input stream should be processed by a single generic NMT model. To tackle the weaknesses of current NMT technology in this unsupervised multi-domain setting, we explore an efficient instance-based adaptation method that, by exploiting the similarity between the training instances and each test sentence, dynamically sets the hyperparameters of the learning algorithm and updates the generic model on-the-fly. The results of our experiments with multi-domain data show that local adaptation outperforms not only the original generic NMT system, but also a strong phrase-based system and even single-domain NMT models specifically optimized on each domain and applicable only by violating two of our aforementioned assumptions.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been partially supported by the ECfunded H2020 projects QT21 (grant no. 645452) and ModernMT (grant no. 645487).","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"lin-2008-stochastic","url":"https:\/\/aclanthology.org\/I08-4007","title":"Stochastic Dependency Parsing Based on A* Admissible Search","abstract":"Dependency parsing has gained attention in natural language understanding because the representation of dependency tree is simple, compact and direct such that robust partial understanding and task portability can be achieved more easily. 
However, many dependency parsers make hard decisions with local information while selecting among the next parse states. As a consequence, though the obtained dependency trees are good in some sense, the N-best output is not guaranteed to be globally optimal in general. In this paper, a stochastic dependency parsing scheme based on A* admissible search is formally presented. By well representing the parse state and appropriately designing the cost and heuristic functions, dependency parsing can be modeled as an A* search problem, and solved with a generic algorithm of state space search. When evaluated on the Chinese Tree Bank, this parser can obtain 85.99% dependency accuracy at 68.39% sentence accuracy, and 14.62% node ratio for dynamic heuristic. This parser can output N-best dependency trees, and integrate the semantic processing into the search process easily.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"deoskar-etal-2011-learning","url":"https:\/\/aclanthology.org\/W11-2911","title":"Learning Structural Dependencies of Words in the Zipfian Tail","abstract":"Using semi-supervised EM, we learn finegrained but sparse lexical parameters of a generative parsing model (a PCFG) initially estimated over the Penn Treebank. Our lexical parameters employ supertags, which encode complex structural information at the pre-terminal level, and are particularly sparse in labeled data-our goal is to learn these for words that are unseen or rare in the labeled data. In order to guide estimation from unlabeled data, we incorporate both structural and lexical priors from the labeled data. We get a large error reduction in parsing ambiguous structures associated with unseen verbs, the most important case of learning lexico-structural dependencies. We also obtain a statistically significant improvement in labeled bracketing score of the treebank PCFG, the first successful improvement via semi-supervised EM of a generative structured model already trained over large labeled data.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Alexandra Birch, Mark Steedman, and three anonymous reviewers for detailed comments and suggestions. This research was supported by the VIDI grant 639.022.604 from The Netherlands Organisation for Scientific Research (NWO). The first author was further supported by the ERC Advanced Fellowship 249520 GRAMPLUS.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"basili-etal-1992-computational","url":"https:\/\/aclanthology.org\/A92-1013","title":"Computational Lexicons: the Neat Examples and the Odd Exemplars","abstract":"When implementing computational lexicons it is important to keep in mind the texts that a NLP system must deal with. Words relate to each other in many different, often queer, ways: this information is rarely found in dictionaries, and it is quite hard to be invented a priori, despite the imagination that linguists exhibit at inventing esoteric examples. In this paper we present the results of an experiment in learning from corpora the frequent selectional restrictions holding between content words. 
The method is based on the analysis of word associations augmented with syntactic markers and semantic tags. Word pairs are extracted by a morphosyntactic analyzer and clustered according to their semantic tags. A statistical measure is applied to the data to evaluate the significance of a detected relation. Clustered association data render the study of word associations more interesting with several respects: data are more reliable even for smaller corpora, more easy to interpret, and have many practical applications in NLP.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"yeh-lee-1992-lexicon","url":"https:\/\/aclanthology.org\/O92-1006","title":"A Lexicon-Driven Analysis Of Chinese Serial Verb Constructions","abstract":null,"label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"lee-1995-unified","url":"https:\/\/aclanthology.org\/Y95-1037","title":"A Unified Account of Polarity Phenomena","abstract":"This paper argues, in an attempt at a unified account of negative polarity and free choice phenomena expressed by amu \/any or wh-indefinites in Korean, English, Chinese, and Japanese that the notion of concession by arbitrary or d isjunctive choice (based on indefiniteness) is crucial. With this central notion all the apparently diverse polarityrelated phenomena can be explained consistently, not just described in terms of distribution. With strong negatives and affective licensors, their negative force is so substantial that concessive force need not be reinforced and the licensed NPIs reveal existential force. With free choice and generic-like items, licensed by modals, weakly negative in their natrue of uncertainty\/irrealis, concessive force is reinforced and emphasized and the whole category denoted by the given Noun is reached in the process of concession by arbitrariy choice of its members on quantificational scale, giving the impression of universal force. The logical consequences of monotone decreasingness are transparent with strong negatives but less so with weaker ones.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1995,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"radev-2000-common","url":"https:\/\/aclanthology.org\/W00-1009","title":"A Common Theory of Information Fusion from Multiple Text Sources Step One: Cross-Document Structure","abstract":"We introduce CST (cross-document slructure theory), a paradigm for multidocument analysis. CST takes into aceount the rhetorical structure of clusters of related textual documents. We present a taxonomy of cross-document relationships. 
We argue that CST can be the basis for multidocument summarization guided by user preferences for summary length, information provenance, cross-source agreement, and chronological ordering of facts.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"velupillai-2014-temporal","url":"https:\/\/aclanthology.org\/W14-3413","title":"Temporal Expressions in Swedish Medical Text -- A Pilot Study","abstract":"One of the most important features of health care is to be able to follow a patient's progress over time and identify events in a temporal order. We describe initial steps in creating resources for automatic temporal reasoning of Swedish medical text. As a first step, we focus on the identification of temporal expressions by exploiting existing resources and systems available for English. We adapt the HeidelTime system and manually evaluate its performance on a small subset of Swedish intensive care unit documents. On this subset, the adapted version of Hei-delTime achieves a precision of 92% and a recall of 66%. We also extract the most frequent temporal expressions from a separate, larger subset, and note that most expressions concern parts of days or specific times. We intend to further develop resources for temporal reasoning of Swedish medical text by creating a gold standard corpus also annotated with events and temporal links, in addition to temporal expressions and their normalised values.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"The author wishes to thank the anonymous reviewers for invaluable comments on this manuscript. Thanks also to Danielle Mowery and Dr. Wendy Chapman for all their support. This work was partially funded by Swedish Research Council (350-2012-6658) and Swedish Fulbright Commission.","year":2014,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"herzog-1969-computational","url":"https:\/\/aclanthology.org\/C69-6215","title":"Computational Studies in Terminology","abstract":"(Abstract of a Paper to be Presented at the 1969 International Congress on Computational Linguistics, SAnga S~by, Sweden) Terminology, as a field of applied linguistics, is gaining increasing importance, since in recent years striking new developments of technology and the sciences have taken place. Terminologists have their own international congresses; linguists and standard associations try to build up and control the specific vocabularies of all different fields, in order to have them compiled and printed in up-to-date dictionaries. Industry also shows remarkable interest in this work, because those great international companies heavily depend on the means of a fixed and standardized vocabulary in order to achieve the necessary communication (to go along with its products), either by publication or by translation.\nFor various reasons, the task of documenting and controlling the growth and structure of terminological vocabularies cannot satisfactorily be accomplished without the application of computers. Insight into the structure of terminologies has been gained by functional~ computer prepared statistics of vocabularies and validations of texts. 
Linguists, for their part, have programmed computers in order to isolate relevant lexical items from terminological texts~ as well as to determine the various meanings and shades of meaning of specific terms~ by means of special procedures.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1969,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"bommadi-etal-2021-automatic","url":"https:\/\/aclanthology.org\/2021.dialdoc-1.4","title":"Automatic Learning Assistant in Telugu","abstract":"This paper presents a learning assistant that tests one's knowledge and gives feedback that helps a person learn at a faster pace. A learning assistant (based on an automated question generation) has extensive uses in education, information websites, self-assessment, FAQs, testing ML agents, research, etc. Multiple researchers, and companies have worked on Virtual Assistance, but majorly in English. We built our learning assistant for Telugu language to help with teaching in the mother tongue, which is the most efficient way of learning 1. Our system is built primarily based on Question Generation in Telugu.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"soler-wanner-2016-semi","url":"https:\/\/aclanthology.org\/L16-1204","title":"A Semi-Supervised Approach for Gender Identification","abstract":"In most of the research studies on Author Profiling, large quantities of correctly labeled data are used to train the models. However, this does not reflect the reality in forensic scenarios: in practical linguistic forensic investigations, the resources that are available to profile the author of a text are usually scarce. To pay tribute to this fact, we implemented a Semi-Supervised Learning variant of the k nearest neighbors algorithm that uses small sets of labeled data and a larger amount of unlabeled data to classify the authors of texts by gender (man vs woman). We describe the enriched KNN algorithm and show that the use of unlabeled instances improves the accuracy of our gender identification model. We also present a feature set that facilitates the use of a very small number of instances, reaching accuracies higher than 70% with only 113 instances to train the model. 
It is also shown that the algorithm performs equally well using publicly available data.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The presentation of this work was partially supported by the ICT PhD program of Universitat Pompeu Fabra through a travel grant.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"kokkinakis-thurin-2007-identification","url":"https:\/\/aclanthology.org\/W07-2452","title":"Identification of Entity References in Hospital Discharge Letters","abstract":"In the era of the Electronic Health Record the release of medical narrative textual data for research, for health care statistics, for monitoring of new diagnostic tests and for tracking disease outbreak alerts imposes tough restrictions by various public authority bodies for the protection of (patient) privacy. In this paper we present a system for automatic identification of named entities in Swedish clinical free text, in the form of discharge letters, by applying generic named entity recognition technology with minor adaptations.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"This work has been partially supported by the \"Semantic Interoperability and Data Mining in Biomedicine\" -NoE, under EU's Framework 6.","year":2007,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"bernardy-chatzikyriakidis-2021-applied","url":"https:\/\/aclanthology.org\/2021.iwcs-1.2","title":"Applied Temporal Analysis: A Complete Run of the FraCaS Test Suite","abstract":"In this paper, we propose an implementation of temporal semantics that translates syntax trees to logical formulas, suitable for consumption by the Coq proof assistant. The analysis supports a wide range of phenomena including: temporal references, temporal adverbs, aspectual classes and progressives. The new semantics are built on top of a previous system handling all sections of the FraCaS test suite except the temporal reference section, and we obtain an accuracy of 81 percent overall and 73 percent for the problems explicitly marked as related to temporal reference. To the best of our knowledge, this is the best performance of a logical system on the whole of the FraCaS.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research reported in this paper was supported by grant 2014-39 from the Swedish Research Council, which funds the Centre for Linguistic Theory and Studies in Probability (CLASP) in the Department of Philosophy, Linguistics, and Theory of Science at the University of Gothenburg. We are grateful to our colleagues in CLASP for helpful discussion of some of the ideas presented here. 
We also thank anonymous reviewers for their useful comments on an earlier draft of the paper.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"dwi-prasetyo-etal-2015-impact","url":"https:\/\/aclanthology.org\/W15-2607","title":"On the Impact of Twitter-based Health Campaigns: A Cross-Country Analysis of Movember","abstract":"Health campaigns that aim to raise awareness and subsequently raise funds for research and treatment are commonplace. While many local campaigns exist, very few attract the attention of a global audience. One of those global campaigns is Movember, an annual campaign during the month of November, that is directed at men's health with special foci on cancer & mental health. Health campaigns routinely use social media portals to capture people's attention. Recently, researchers began to consider to what extent social media is effective in raising the awareness of health campaigns. In this paper we expand on those works by conducting an investigation across four different countries, while not only restricting ourselves to the impact on awareness but also on fund-raising. To that end, we analyze the 2013 Movember Twitter campaigns in Canada, Australia, the United Kingdom and the United States.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"This research was funded in part by the 3TU Federation and the Dutch national projects COMMIT and FACT. We are grateful to Twitter and Movember for providing the data.","year":2015,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"kim-etal-2020-multi","url":"https:\/\/aclanthology.org\/2020.coling-main.153","title":"Multi-Task Learning for Knowledge Graph Completion with Pre-trained Language Models","abstract":"As research on utilizing human knowledge in natural language processing has attracted considerable attention in recent years, knowledge graph (KG) completion has come into the spotlight. Recently, a new knowledge graph completion method using a pre-trained language model, such as KG-BERT, was presented and showed high performance. However, its scores in ranking metrics such as Hits@k are still behind state-of-the-art models. We claim that there are two main reasons: 1) failure in sufficiently learning relational information in knowledge graphs, and 2) difficulty in picking out the correct answer from lexically similar candidates. In this paper, we propose an effective multi-task learning method to overcome the limitations of previous works. By combining relation prediction and relevance ranking tasks with our target link prediction, the proposed model can learn more relational properties in KGs and properly perform even when lexical similarity occurs. 
Experimental results show that we not only largely improve the ranking performances compared to KG-BERT but also achieve the state-of-the-art performances in Mean Rank and Hits@10 on the WN18RR dataset.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"ahrendt-demberg-2016-improving","url":"https:\/\/aclanthology.org\/N16-1067","title":"Improving event prediction by representing script participants","abstract":"Automatically learning script knowledge has proved difficult, with previous work not or just barely beating a most-frequent baseline. Script knowledge is a type of world knowledge which can however be useful for various task in NLP and psycholinguistic modelling. We here propose a model that includes participant information (i.e., knowledge about which participants are relevant for a script) and show, on the Dinners from Hell corpus as well as the InScript corpus, that this knowledge helps us to significantly improve prediction performance on the narrative cloze task.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was funded by the German Research Foundation (DFG) as part of SFB 1102 'Information Density and Linguistic Encoding' and the Cluster of Excellence 'Multimodal Computing and Interaction' (EXC 284).","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"isonuma-etal-2020-tree","url":"https:\/\/aclanthology.org\/2020.acl-main.73","title":"Tree-Structured Neural Topic Model","abstract":"This paper presents a tree-structured neural topic model, which has a topic distribution over a tree with an infinite number of branches. Our model parameterizes an unbounded ancestral and fraternal topic distribution by applying doubly-recurrent neural networks. With the help of autoencoding variational Bayes, our model improves data scalability and achieves competitive performance when inducing latent topics and tree structures, as compared to a prior tree-structured topic model (Blei et al., 2010). This work extends the tree-structured topic model such that it can be incorporated with neural models for downstream tasks.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank anonymous reviewers for their valuable feedback. This work was supported by JST ACT-X Grant Number JPMJAX1904 and CREST Grant Number JPMJCR1513, Japan.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"rich-etal-2018-modeling","url":"https:\/\/aclanthology.org\/W18-0526","title":"Modeling Second-Language Learning from a Psychological Perspective","abstract":"Psychological research on learning and memory has tended to emphasize small-scale laboratory studies. However, large datasets of people using educational software provide opportunities to explore these issues from a new perspective. In this paper we describe our approach to the Duolingo Second Language Acquisition Modeling (SLAM) competition which was run in early 2018. 
We used a well-known class of algorithms (gradient boosted decision trees), with features partially informed by theories from the psychological literature. After detailing our modeling approach and a number of supplementary simulations, we reflect on the degree to which psychological theory aided the model, and the potential for cognitive science and predictive modeling competitions to gain from each other.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":"This research was supported by NSF grant DRL-1631436 and BCS-1255538, and the John S. Mc-Donnell Foundation Scholar Award to TMG. We thank Shannon Tubridy and Tal Yarkoni for helpful suggestions in the development of this work.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"stoyanova-etal-2013-wordnet","url":"https:\/\/aclanthology.org\/W13-2417","title":"Wordnet-Based Cross-Language Identification of Semantic Relations","abstract":"We propose a method for cross-language identification of semantic relations based on word similarity measurement and morphosemantic relations in WordNet. We transfer these relations to pairs of derivationally unrelated words and train a model for automatic classification of new instances of (morpho)semantic relations in context based on the existing ones and the general semantic classes of collocated verb and noun senses. Our experiments are based on Bulgarian-English parallel and comparable texts but the method is to a great extent language-independent and particularly suited to less-resourced languages, since it does not need parsed or semantically annotated data. The application of the method leads to an increase in the number of discovered semantic relations by 58.35% and performs relatively consistently, with a small decrease in precision between the baseline (based on morphosemantic relations identified in wordnet)-0.774, and the extended method (based on the data obtained through machine learning)-0.721.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"jin-de-marneffe-2015-overall","url":"https:\/\/aclanthology.org\/D15-1132","title":"The Overall Markedness of Discourse Relations","abstract":"Discourse relations can be categorized as continuous or discontinuous in the hypothesis of continuity (Murray, 1997), with continuous relations expressing normal succession of events in discourse such as temporal, spatial or causal. Asr and Demberg (2013) propose a markedness measure to test the prediction that discontinuous relations may have more unambiguous connectives, but restrict the markedness calculation to relations with explicit connectives only. 
This paper extends their measure to explicit and implicit relations and shows that results from this extension better fit the continuity hypothesis predictions both for the English Penn Discourse (Prasad et al., 2008) and the Chinese Discourse (Zhou and Xue, 2015) Treebanks.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank William Schuler for productive discussions of the work presented here as well as our anonymous reviewers for their helpful comments.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"ma-etal-2019-essentia","url":"https:\/\/aclanthology.org\/D19-5307","title":"Essentia: Mining Domain-specific Paraphrases with Word-Alignment Graphs","abstract":"Paraphrases are important linguistic resources for a wide variety of NLP applications. Many techniques for automatic paraphrase mining from general corpora have been proposed. While these techniques are successful at discovering generic paraphrases, they often fail to identify domain-specific paraphrases (e.g., \"staff \", \"concierge\" in the hospitality domain). This is because current techniques are often based on statistical methods, while domain-specific corpora are too small to fit statistical methods. In this paper, we present an unsupervised graph-based technique to mine paraphrases from a small set of sentences that roughly share the same topic or intent. Our system, ESSENTIA, relies on word-alignment techniques to create a word-alignment graph that merges and organizes tokens from input sentences. The resulting graph is then used to generate candidate paraphrases. We demonstrate that our system obtains high quality paraphrases, as evaluated by crowd workers. We further show that the majority of the identified paraphrases are domain-specific and thus complement existing paraphrase databases.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"tanvir-etal-2021-estbert","url":"https:\/\/aclanthology.org\/2021.nodalida-main.2","title":"EstBERT: A Pretrained Language-Specific BERT for Estonian","abstract":"This paper presents EstBERT, a large pretrained transformer-based language-specific BERT model for Estonian. Recent work has evaluated multilingual BERT models on Estonian tasks and found them to outperform the baselines. Still, based on existing studies on other languages, a language-specific BERT model is expected to improve over the multilingual ones. We first describe the EstBERT pretraining process and then present the models' results based on the finetuned EstBERT for multiple NLP tasks, including POS and morphological tagging, dependency parsing, named entity recognition and text classification. The evaluation results show that the models based on EstBERT outperform multilingual BERT models on five tasks out of seven, providing further evidence towards a view that training language-specific BERT models are still useful, even when multilingual models are available. 
1","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"kallmeyer-yoon-2004-tree","url":"https:\/\/aclanthology.org\/2004.jeptalnrecital-long.24","title":"Tree-local MCTAG with Shared Nodes: An Analysis ofWord Order Variation in German and Korean","abstract":"Lexicalized Tree Adjoining Grammars (LTAG, (Joshi & Schabes, 1997) ) is a tree-rewriting formalism. An LTAG consists of a finite set of trees (elementary trees) associated with lexical items. Larger trees are derived by substitution (replacing a leaf with a new tree) and adjunction (replacing an internal node with a new tree). In case of an adjunction, the new elementary tree has a special leaf node, the foot node (marked with an asterisk). When adjoining such a tree (a so-called auxiliary tree) to a node \u00b5, in the resulting tree, the subtree with root node \u00b5 from the old tree is put below the foot node of the new auxiliary tree. Non-auxiliary elementary trees are called initial trees. LTAG elementary trees represent extended projections of lexical items and encapsulate all syntactic arguments of the lexical anchor. They are minimal in the sense that only the arguments of the anchor are encapsulated, all recursion is factored away.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"imamura-sumita-2020-transformer","url":"https:\/\/aclanthology.org\/2020.wat-1.3","title":"Transformer-based Double-token Bidirectional Autoregressive Decoding in Neural Machine Translation","abstract":"This paper presents a simple method that extends a standard Transformer-based autoregressive decoder, to speed up decoding. The proposed method generates a token from the head and tail of a sentence (two tokens in total) in each step. By simultaneously generating multiple tokens that rarely depend on each other, the decoding speed is increased while the degradation in translation quality is minimized. In our experiments, the proposed method increased the translation speed by around 113%-155% in comparison with a standard autoregressive decoder, while degrading the BLEU scores by no more than 1.03. It was faster than an iterative nonautoregressive decoder in many conditions.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"brugman-etal-2004-collaborative","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/473.pdf","title":"Collaborative Annotation of Sign Language Data with Peer-to-Peer Technology","abstract":"Collaboration on annotation projects is in practice mostly done by people sharing the same room. However, several models for online cooperative annotation over the internet are possible. 
This paper explores and evaluates these, and reports on the use of peer-to-peer technology to extend a multimedia annotation tool (ELAN) with functions that support collaborative annotation.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Reduced Inequalities","goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":1,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"bhat-etal-2017-joining","url":"https:\/\/aclanthology.org\/E17-2052","title":"Joining Hands: Exploiting Monolingual Treebanks for Parsing of Code-mixing Data","abstract":"In this paper, we propose efficient and less resource-intensive strategies for parsing of code-mixed data. These strategies are not constrained by in-domain annotations, rather they leverage pre-existing monolingual annotated resources for training. We show that these methods can produce significantly better results as compared to an informed baseline. Besides, we also present a data set of 450 Hindi and English code-mixed tweets of Hindi multilingual speakers for evaluation. The data set is manually annotated with Universal Dependencies.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"chen-huang-2009-step","url":"https:\/\/aclanthology.org\/Y09-1001","title":"A Step toward Compositional Semantics: E-HowNet a Lexical Semantic Representation System","abstract":"The purpose of designing the lexical semantic representation model E-HowNet is for natural language understanding. E-HowNet is a frame-based entity-relation model extended from HowNet to define lexical senses and achieving compositional semantics. The followings are major extension features of E-HowNet to achieve the goal. a) Word senses (concepts) are defined by either primitives or any well-defined concepts and conceptual relations; b) A uniform sense representation model for content words, function words and phrases; c) Semantic relations are explicitly expressed; and d) Near-canonical representations for lexical senses and phrasal senses. We demonstrate the above features and show how coarse-grained semantic composition can be carried out under the framework of E-HowNet. Possible applications of E-HowNet are also suggested. We hope that the ultimate goal of natural language understanding will be accomplished after future improvement and evolution of the current E-HowNet.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"sasano-korhonen-2020-investigating","url":"https:\/\/aclanthology.org\/2020.acl-main.337","title":"Investigating Word-Class Distributions in Word Vector Spaces","abstract":"This paper presents an investigation on the distribution of word vectors belonging to a certain word class in a pre-trained word vector space. To this end, we made several assumptions about the distribution, modeled the distribution accordingly, and validated each assumption by comparing the goodness of each model. 
Specifically, we considered two types of word classes-the semantic class of direct objects of a verb and the semantic class in a thesaurus-and tried to build models that properly estimate how likely it is that a word in the vector space is a member of a given word class. Our results on selectional preference and WordNet datasets show that the centroid-based model will fail to achieve good enough performance, the geometry of the distribution and the existence of subgroups will have limited impact, and also the negative instances need to be considered for adequate modeling of the distribution. We further investigated the relationship between the scores calculated by each model and the degree of membership and found that discriminative learning-based models are best in finding the boundaries of a class, while models based on the offset between positive and negative instances perform best in determining the degree of membership.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by JSPS KAKENHI Grant Number 16K16110 and 18H03286.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"ogiso-etal-2012-unidic","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/906_Paper.pdf","title":"UniDic for Early Middle Japanese: a Dictionary for Morphological Analysis of Classical Japanese","abstract":"In order to construct an annotated diachronic corpus of Japanese, we propose to create a new dictionary for morphological analysis of Early Middle Japanese (Classical Japanese) based on UniDic, a dictionary for Contemporary Japanese. Differences between the Early Middle Japanese and Contemporary Japanese, which prevent a na\u00efve adaptation of UniDic to Early Middle Japanese, are found at the levels of lexicon, morphology, grammar, orthography and pronunciation. In order to overcome these problems, we extended dictionary entries and created a training corpus of Early Middle Japanese to adapt UniDic for Contemporary Japanese to Early Middle Japanese. Experimental results show that the proposed UniDic-EMJ, a new dictionary for Early Middle Japanese, achieves as high accuracy (97%) as needed for the linguistic research on lexicon and grammar in Japanese classical text analysis.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is partially supported by the collaborative research project \"Study of the history of the Japanese language using statistics and machine-learning\" carried out at the National Institute for Japanese Language and Linguistics.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"mcclelland-1987-parallel","url":"https:\/\/aclanthology.org\/T87-1016","title":"Parallel Distributed Processing and Role Assignment Constraints","abstract":"My work in natural language processing is based on the premise that it is not in general possible to recover the underlying representations of sentences without considering semantic constraints on their possible case structures. It seems clear that people use these constraints to do several things: To assign constituents to the proper case roles and attach them to the proper other constituents.
To assign the appropriate reading to a word or larger constituent when it occurs in context. To assign default values to missing constituents. To instantiate the concepts referenced by the words in a sentence so that they fit the context. I believe that parallel-distributed processing models (i.e., connectionist models which make use of distributed representations) provide the mechanisms that are needed for these tasks. Argument attachments and role assignments seem to require a consideration of the relative merits of competing possibilities (Marcus, 1980; Bates and MacWhinney, 1987; MacWhinney, 1987), as does lexical disambiguation. Connectionist models provide a very natural substrate for these kinds of competition processes (Cottrell, 1985; Waltz and Pollack, 1985).","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1987,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"lucy-bamman-2021-gender","url":"https:\/\/aclanthology.org\/2021.nuse-1.5","title":"Gender and Representation Bias in GPT-3 Generated Stories","abstract":"Using topic modeling and lexicon-based word similarity, we find that stories generated by GPT-3 exhibit many known gender stereotypes. Generated stories depict different topics and descriptions depending on GPT-3's perceived gender of the character in a prompt, with feminine characters more likely to be associated with family and appearance, and described as less powerful than masculine characters, even when associated with high power verbs in a prompt. Our study raises questions on how one can avoid unintended social biases when using large language models for storytelling.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Gender Equality","goal2":"Reduced Inequalities","goal3":null,"acknowledgments":"We thank Nicholas Tomlin, Julia Mendelsohn, and Emma Lurie for their helpful feedback on earlier versions of this paper. This work was supported by funding from the National Science Foundation (Graduate Research Fellowship DGE-1752814 and grant IIS-1942591).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":1,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":1,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"zhang-etal-2016-learning","url":"https:\/\/aclanthology.org\/P16-1169","title":"Learning Concept Taxonomies from Multi-modal Data","abstract":"We study the problem of automatically building hypernym taxonomies from textual and visual data. Previous works in taxonomy induction generally ignore the increasingly prominent visual data, which encode important perceptual semantics. Instead, we propose a probabilistic model for taxonomy induction by jointly leveraging text and images. To avoid hand-crafted feature engineering, we design end-to-end features based on distributed representations of images and words. The model is discriminatively trained given a small set of existing ontologies and is capable of building full taxonomies from scratch for a collection of unseen conceptual label items with associated images. We evaluate our model and features on the WordNet hierarchies, where our system outperforms previous approaches by a large gap.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank anonymous reviewers for their valuable feedback.
We would also like to thank Mohit Bansal for helpful suggestions. We thank NVIDIA for GPU donations. The work is supported by NSF Big Data IIS1447676.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"bergsma-etal-2020-creating","url":"https:\/\/aclanthology.org\/2020.gamnlp-1.1","title":"Creating a Sentiment Lexicon with Game-Specific Words for Analyzing NPC Dialogue in The Elder Scrolls V: Skyrim","abstract":"A weak point of rule-based sentiment analysis systems is that the underlying sentiment lexicons are often not adapted to the domain of the text we want to analyze. We created a game-specific sentiment lexicon for video game Skyrim based on the E-ANEW word list and a dataset of Skyrim's in-game documents. We calculated sentiment ratings for NPC dialogue using both our lexicon and E-ANEW and compared the resulting sentiment ratings to those of human raters. Both lexicons perform comparably well on our evaluation dialogues, but the game-specific extension performs slightly better on the dominance dimension for dialogue segments and the arousal dimension for full dialogues. To our knowledge, this is the first time that a sentiment analysis lexicon has been adapted to the video game domain.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is partially supported by the Netherlands Organisation for Scientific Research (NWO) via the DATA2GAME project (project number 055.16.114).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"hoffman-1993-formal","url":"https:\/\/aclanthology.org\/P93-1045","title":"The Formal Consequences of Using Variables in CCG Categories","abstract":"Combinatory Categorial Grammars, CCGs, (Steedman 1985) have been shown by Weir and Joshi (1988) to generate the same class of languages as Tree-Adjoining Grammars (TAG), Head Grammars (HG), and Linear Indexed Grammars (LIG). In this paper, I will discuss the effect of using variables in lexical category assignments in CCGs. It will be shown that using variables in lexical categories can increase the weak generative capacity of CCGs beyond the class of grammars listed above.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1993,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"hahn-wermter-2004-pumping","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/641.pdf","title":"Pumping Documents Through a Domain and Genre Classification Pipeline","abstract":"We propose a simple, yet effective, pipeline architecture for document classification. The task we intend to solve is to classify large and content-wise heterogeneous document streams on a layered nine-category system, which distinguishes medical from non-medical texts and sorts medical texts into various subgenres.
While the document classification problem is often dealt with using computationally powerful and, hence, costly classifiers (e.g., Bayesian ones), we have gathered empirical evidence that a much simpler approach based on n-gram-statistics achieves a comparable level of classification performance.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by Deutsche Forschungsgemeinschaft (DFG), grant KL 640\/5-1, and by the Faculty of Medicine at Freiburg University, grant KLA231\/03.","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"litvinova-etal-2017-deception","url":"https:\/\/aclanthology.org\/E17-4005","title":"Deception detection in Russian texts","abstract":"Psychology studies show that people detect deception no more accurately than by chance, and it is therefore important to develop tools to enable the detection of deception. The problem of deception detection has been studied for a significant amount of time, however in the last 10-15 years we have seen methods of computational linguistics being employed with greater frequency. Texts are processed using different NLP tools and then classified as deceptive\/truthful using modern machine learning methods. While most of this research has been performed for the English language, Slavic languages have never been the focus of deception detection studies. This paper deals with deception detection in Russian narratives related to the theme \"How I Spent Yesterday\". It employs a specially designed corpus of truthful and deceptive texts on the same topic from each respondent, such that N = 113. The texts were processed using Linguistic Inquiry and Word Count software that is used in most studies of text-based deception detection. The average amount of parameters, a majority of which were related to Part-of-Speech, lexical-semantic group, and other frequencies. Using standard statistical analysis, statistically significant differences between false and truthful Russian texts were uncovered. On the basis of the chosen parameters our classifier reached an accuracy of 68.3%.
The accuracy of the model was found to depend on the author's gender.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"This research is supported by a grant from the Russian Foundation for Basic Research, N 15-34-01221 Lie Detection in a Written Text: A Corpus Study.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} -{"ID":"minow-1969-metaprint","url":"https:\/\/aclanthology.org\/C69-7602","title":"Metaprint 3 (Metaprint 1) Responses to ``Computerized Linguistics: Half a Commentary''","abstract":"Responses to \"COMPUTERIZED LINGUISTICS: HALF A COMMENTARY\" -Martin Minow -Rather than attempt a summary of the replies to \"metaprint\" 1 included here, I feel it would be more useful for me to discuss one of my programs.\nThe program generates sentences from a generative (context-sensitive, transformational) grammar.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1969,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"berglund-etal-2006-machine","url":"https:\/\/aclanthology.org\/E06-1049","title":"A Machine Learning Approach to Extract Temporal Information from Texts in Swedish and Generate Animated 3D Scenes","abstract":"Carsim is a program that automatically converts narratives into 3D scenes. Carsim considers authentic texts describing road accidents, generally collected from web sites of Swedish newspapers or transcribed from handwritten accounts by victims of accidents.
One of the program's key features is that it animates the generated scene to visualize events.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"zetzsche-2014-invited","url":"https:\/\/aclanthology.org\/2014.eamt-1.1","title":"Invited Talk: Encountering the Unknown, Part 2","abstract":"The tasks that the translators were \"charged\" with were to look back at previous responses to technology, put into perspective what MT is in relation to other technologies, differentiate between different forms of MT, employ MT where appropriate, and embrace their whole identity.\nThe MT community was asked to acknowledge the origin of data and linguistic expertise it uses, communicate in terms that are down to earth and truthful, engage the translation community in meaningful ways, listen to the translation community, and embrace their whole identity.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"obrien-etal-2009-postediting","url":"https:\/\/aclanthology.org\/2009.mtsummit-tutorials.5","title":"Postediting Machine Translation Output Guidelines","abstract":null,"label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"dalrymple-etal-1990-modeling","url":"https:\/\/aclanthology.org\/C90-2013","title":"Modeling syntactic constraints on anaphoric binding","abstract":"Syntactic constraints on antecedent-anaphor relations can be stated within the theory of Lexical Functional Grammar (henceforth LFG) through the use of functional uncertainty (Kaplan and Maxwell 1988; Halvorsen and Kaplan 1988; Kaplan and Zaenen 1989). In the following, we summarize the general characteristics of syntactic constraints on anaphoric binding. Next, we describe a variation of functional uncertainty called inside-out functional uncertainty and show how it can be used to model anaphoric binding. Finally, we discuss some binding constraints claimed to hold in natural language to exemplify the mechanism. We limit our attention throughout to coreference possibilities between definite antecedents and anaphoric elements and ignore interactions with quantifiers. We also limit our discussion to intrasentential relations.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1990,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"ogrodniczuk-lenart-2012-web","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/648_Paper.pdf","title":"Web Service integration platform for Polish linguistic resources","abstract":"This paper presents a robust linguistic Web service framework for Polish, combining several mature offline linguistic tools in a common online platform.
The toolset comprises paragraph-, sentence- and token-level segmenter, morphological analyser, disambiguating tagger, shallow and deep parser, named entity recognizer and coreference resolver. Uniform access to processing results is provided by means of a stand-off packaged adaptation of National Corpus of Polish TEI P5-based representation and interchange format. A concept of asynchronous handling of requests sent to the implemented Web service (Multiservice) is introduced to enable processing large amounts of text by setting up language processing chains of desired complexity. Apart from a dedicated API, a simple Web interface to the service is presented, allowing to compose a chain of annotation services, run it and periodically check for execution results, made available as plain XML or in a simple visualization. Usage examples and results from performance and scalability tests are also included.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work reported here was carried out within the Common Language Resources and Technology Infrastructure (CLARIN) project co-funded by the European Commission under the Seventh Framework Programme -Capacities Specific Programme Research Infrastructures (Grant Agreement No 212230).","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"novello-callaway-2003-porting","url":"https:\/\/aclanthology.org\/W03-2310","title":"Porting to an Italian Surface Realizer: A Case Study","abstract":null,"label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"makarov-clematide-2020-cluzh","url":"https:\/\/aclanthology.org\/2020.sigmorphon-1.19","title":"CLUZH at SIGMORPHON 2020 Shared Task on Multilingual Grapheme-to-Phoneme Conversion","abstract":"This paper describes the submission by the team from the Institute of Computational Linguistics, Zurich University, to the Multilingual Grapheme-to-Phoneme Conversion (G2P) Task of the SIGMORPHON 2020 challenge. The submission adapts our system from the 2018 edition of the SIGMORPHON shared task. Our system is a neural transducer that operates over explicit edit actions and is trained with imitation learning. It is well-suited for morphological string transduction partly because it exploits the fact that the input and output character alphabets overlap. The challenge posed by G2P has been to adapt the model and the training procedure to work with disjoint alphabets. We adapt the model to use substitution edits and train it with a weighted finite-state transducer acting as the expert policy. An ensemble of such models produces competitive results on G2P. Our submission ranks second out of 23 submissions by a total of nine teams.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the organizers for their great effort in these turbulent times. We thank Kyle Gorman for taking the time to help us with our Unicode normalization problem.
This work has been supported by the Swiss National Science Foundation under grant CR-SII5 173719.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"bergmanis-goldwater-2017-segmentation","url":"https:\/\/aclanthology.org\/E17-1032","title":"From Segmentation to Analyses: a Probabilistic Model for Unsupervised Morphology Induction","abstract":"A major motivation for unsupervised morphological analysis is to reduce the sparse data problem in under-resourced languages. Most previous work focuses on segmenting surface forms into their constituent morphs (e.g., taking: tak +ing), but surface form segmentation does not solve the sparse data problem as the analyses of take and taking are not connected to each other. We extend the MorphoChains system (Narasimhan et al., 2015) to provide morphological analyses that can abstract over spelling differences in functionally similar morphs. These analyses are not required to use all the orthographic material of a word (stopping: stop +ing), nor are they limited to only that material (acidified: acid +ify +ed). On average across six typologically varied languages our system has a similar or better F-score on EMMA (a measure of underlying morpheme accuracy) than three strong baselines; moreover, the total number of distinct morphemes identified by our system is on average 12.8% lower than for Morfessor (Virpioja et al., 2013), a state-of-the-art surface segmentation system.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"boguraev-pustejovsky-1990-lexical","url":"https:\/\/aclanthology.org\/C90-2007","title":"Lexical Ambiguity and The Role of Knowledge Representation in Lexicon Design","abstract":"The traditional framework for ambiguity resolution employs only 'static' knowledge, expressed generally as selectional restrictions or domain specific constraints, and makes no use of any specific knowledge manipulation mechanisms apart from the simple ability to match valences of structurally related words. In contrast, this paper suggests how a theory of lexical semantics making use of a knowledge representation framework offers a richer, more expressive vocabulary for lexical information. In particular, by performing specialized inference over the ways in which aspects of knowledge structures of words in context can be composed, mutually compatible and contextually relevant lexical components of words and phrases are highlighted.
In the view presented here, lexical ambiguity resolution is an integral part of the same procedure that creates the semantic interpretation of a sentence itself.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1990,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"toral-etal-2014-extrinsic","url":"https:\/\/aclanthology.org\/2014.eamt-1.45","title":"Extrinsic evaluation of web-crawlers in machine translation: a study on Croatian-English for the tourism domain","abstract":"We present an extrinsic evaluation of crawlers of parallel corpora from multilingual web sites in machine translation (MT). Our case study is on Croatian to English translation in the tourism domain. Given two crawlers, we build phrase-based statistical MT systems on the datasets produced by each crawler using different settings. We also combine the best datasets produced by each crawler (union and intersection) to build additional MT systems. Finally we combine the best of the previous systems (union) with general-domain data. This last system outperforms all the previous systems built on crawled data as well as two baselines (a system built on general-domain data and a well known online MT system). * The research leading to these results has received funding from the European Union Seventh Framework Programme FP7\/2007-2013 under grant agreement PIAP-GA-2012-324414 (Abu-MaTran).","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"virginie-etal-2014-database","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/741_Paper.pdf","title":"A Database of Full Body Virtual Interactions Annotated with Expressivity Scores","abstract":"Recent technologies enable the exploitation of full body expressions in applications such as interactive arts but are still limited in terms of dyadic subtle interaction patterns. Our project aims at full body expressive interactions between a user and an autonomous virtual agent. The currently available databases do not contain full body expressivity and interaction patterns via avatars. In this paper, we describe a protocol defined to collect a database to study expressive full-body dyadic interactions. We detail the coding scheme for manually annotating the collected videos. 
Reliability measures for global annotations of expressivity and interaction are also provided.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Part of the work described in this paper was funded by the Agence Nationale de la Recherche (ANR): project INGREDIBLE, by the French Image and Networks Cluster (http:\/\/www.images-et-reseaux.com\/en), and by the Cap Digital Cluster (http:\/\/www.capdigital.com\/en\/)","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"bella-etal-2020-major","url":"https:\/\/aclanthology.org\/2020.lrec-1.342","title":"A Major Wordnet for a Minority Language: Scottish Gaelic","abstract":"We present a new wordnet resource for Scottish Gaelic, a Celtic minority language spoken by about 60,000 speakers, most of whom live in Northwestern Scotland. The wordnet contains over 15 thousand word senses and was constructed by merging ten thousand new, high-quality translations, provided and validated by language experts, with an existing wordnet derived from Wiktionary. This new, considerably extended wordnet-currently among the 30 largest in the world-targets multiple communities: language speakers and learners; linguists; computer scientists solving problems related to natural language processing. By publishing it as a freely downloadable resource, we hope to contribute to the long-term preservation of Scottish Gaelic as a living language, both offline and on the Web.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was funded by the University of Edinburgh through the DReaM Group EPSRC Platform Grant EP\/N014758\/1, as well as by the University of Trento through the InteropEHRate project. InteropEHRate is funded by the European Union's Horizon2020 Research and Innovation programme under grant agreement number 826106.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"mcconnaughey-etal-2017-labeled","url":"https:\/\/aclanthology.org\/D17-1077","title":"The Labeled Segmentation of Printed Books","abstract":"We introduce the task of book structure labeling: segmenting and assigning a fixed category (such as TABLE OF CONTENTS, PREFACE, INDEX) to the document structure of printed books. We manually annotate the page-level structural categories for a large dataset totaling 294,816 pages in 1,055 books evenly sampled from 1750-1922, and present empirical results comparing the performance of several classes of models. The best-performing model, a bidirectional LSTM with rich features, achieves an overall accuracy of 95.8 and a class-balanced macro F-score of 71.4.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Many thanks to the anonymous reviewers and Hannah Alpert-Abrams for their valuable feedback, and to the HathiTrust Research Center for their assistance in enabling this work.
The research reported in this article was supported by a grant from the Digital Humanities at Berkeley initiative and resources provided by NVIDIA.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"andy-etal-2021-understanding","url":"https:\/\/aclanthology.org\/2021.louhi-1.3","title":"Understanding Social Support Expressed in a COVID-19 Online Forum","abstract":"In online forums focused on health and wellbeing, individuals tend to seek and give the following social support: emotional and informational support. Understanding the expressions of these social supports in an online COVID-19 forum is important for: (a) the forum and its members to provide the right type of support to individuals and (b) determining the long term effects of the COVID-19 pandemic on the well-being of the public, thereby informing interventions. In this work, we build four machine learning models to measure the extent of the following social supports expressed in each post in a COVID-19 online forum: (a) emotional support given (b) emotional support sought (c) informational support given, and (d) informational support sought. Using these models, we aim to: (i) determine if there is a correlation between the different social supports expressed in posts e.g. when members of the forum give emotional support in posts, do they also tend to give or seek informational support in the same post? (ii) determine how these social supports sought and given changes over time in published posts. We find that (i) there is a positive correlation between the informational support given in posts and the emotional support given and emotional support sought, respectively, in these posts and (ii) over time, users tended to seek more emotional support and give less emotional support.\nGlobally, millions of individuals have contracted COVID-19 and more than 2 million people have died from the pandemic as of January 2021 (https:\/\/coronavirus.jhu.edu\/map.html). Individuals are turning to online forums focused on discussions around COVID-19 to seek and give support. In online health and well-being forums, individuals tend to seek and give two forms of social support: emotional and informational support (Wang et al., 2012; Yang et al., 2017); where: (a) emotional support sought seeks understanding, affirmation and encouragement, (b) emotional support given includes providing encouragement, (c) informational support sought seeks advice or information, and (d) informational support given provides advice and information.
Below are examples (rephrased) of posts that express these social supports in a COVID-19 related online forum:","label_nlp4sg":1,"task":null,"method":null,"goal1":"Good Health and Well-Being","goal2":"Peace, Justice and Strong Institutions","goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} -{"ID":"mamani-sanchez-etal-2010-exploiting","url":"https:\/\/aclanthology.org\/W10-3018","title":"Exploiting CCG Structures with Tree Kernels for Speculation Detection","abstract":"Our CoNLL-2010 speculative sentence detector disambiguates putative keywords based on the following considerations: a speculative keyword may be composed of one or more word tokens; a speculative sentence may have one or more speculative keywords; and if a sentence contains at least one real speculative keyword, it is deemed speculative. A tree kernel classifier is used to assess whether a potential speculative keyword conveys speculation. We exploit information implicit in tree structures. For prediction efficiency, only a segment of the whole tree around a speculation keyword is considered, along with morphological features inside the segment and information about the containing document. A maximum entropy classifier is used for sentences not covered by the tree kernel classifier. Experiments on the Wikipedia data set show that our system achieves 0.55 F-measure (in-domain).","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is supported by the Trinity College Research Scholarship Program and the Science Foundation Ireland (Grant 07\/CE\/I1142) as part of the Centre for Next Generation Localisation (www.cngl.ie) at Trinity College of Dublin.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"gardent-etal-1989-efficient","url":"https:\/\/aclanthology.org\/P89-1034","title":"Efficient Parsing for French","abstract":"Parsing with categorial grammars often leads to problems such as proliferating lexical ambiguity, spurious parses and overgeneration. This paper presents a parser for French developed on an unification based categorial grammar (FG) which avoids these problems. This parser is a bottom-up chart parser augmented with a heuristic eliminating spurious parses. The unicity and completeness of parsing are proved.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1989,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"baum-etal-2010-disco","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/355_Paper.pdf","title":"DiSCo - A German Evaluation Corpus for Challenging Problems in the Broadcast Domain","abstract":"Typical broadcast material contains not only studio-recorded texts read by trained speakers, but also spontaneous and dialect speech, debates with cross-talk, voice-overs, and on-site reports with difficult acoustic environments. Standard approaches to speech and speaker recognition usually deteriorate under such conditions.
This paper reports on the design, construction, and experimental analysis of DiSCo, a German corpus for the evaluation of speech and speaker recognition on challenging material from the broadcast domain. One of the key requirements for the design of this corpus was a good coverage of different types of serious programmes beyond clean speech and planned speech broadcast news. Corpus annotation encompasses manual segmentation, an orthographic transcription, and labelling with speech mode, dialect, and noise type. We indicate typical use cases for the corpus by reporting results from ASR, speech search, and speaker recognition on the new corpus, thereby obtaining insights into the difficulty of audio recognition on the various classes.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"sproat-1990-application","url":"https:\/\/aclanthology.org\/O90-1010","title":"An application of statistical optimization with dynamic programming to phonemic-input-to-character conversion for Chinese","abstract":null,"label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1990,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"makhija-etal-2020-hinglishnorm","url":"https:\/\/aclanthology.org\/2020.coling-industry.13","title":"hinglishNorm - A Corpus of Hindi-English Code Mixed Sentences for Text Normalization","abstract":"We present hinglishNorm-a human annotated corpus of Hindi-English code-mixed sentences for text normalization task. Each sentence in the corpus is aligned to its corresponding human annotated normalized form. To the best of our knowledge, there is no corpus of Hindi-English code-mixed sentences for text normalization task that is publicly available. Our work is the first attempt in this direction. The corpus contains 13494 segments annotated for text normalization. Further, we present baseline normalization results on this corpus. We obtain a Word Error Rate (WER) of 15.55, BiLingual Evaluation Understudy (BLEU) score of 71.2, and Metric for Evaluation of Translation with Explicit ORdering (METEOR) score of 0.50.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"akula-etal-2021-mind","url":"https:\/\/aclanthology.org\/2021.emnlp-main.516","title":"Mind the Context: The Impact of Contextualization in Neural Module Networks for Grounding Visual Referring Expressions","abstract":"Neural module networks (NMN) are a popular approach for grounding visual referring expressions. Prior implementations of NMN use pre-defined and fixed textual inputs in their module instantiation. This necessitates a large number of modules as they lack the ability to share weights and exploit associations between similar textual contexts (e.g. \"dark cube on the left\" vs. \"black cube on the left\"). In this work, we address these limitations and evaluate the impact of contextual clues in improving the performance of NMN models. 
First, we address the problem of fixed textual inputs by parameterizing the module arguments. This substantially reduces the number of modules in NMN by up to 75% without any loss in performance. Next we propose a method to contextualize our parameterized model to enhance the module's capacity in exploiting the visiolinguistic associations. Our model outperforms the state-of-the-art NMN model on CLEVR-Ref+ dataset with +8.1% improvement in accuracy on the single-referent test set and +4.3% on the full test set. Additionally, we demonstrate that contextualization provides +11.2% and +1.7% improvements in accuracy over prior NMN models on CLOSURE and NLVR2. We further evaluate the impact of our contextualization by constructing a contrast set for CLEVR-Ref+, which we call CC-Ref+. We significantly outperform the baselines by as much as +10.4% absolute accuracy on CC-Ref+, illustrating the generalization skills of our approach. Our dataset is publicly available at https:\/\/github.com\/McGill-NLP\/contextual-nmn.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Joyce Chai, Runtao Liu, Chenxi Liu and Yutong Bai for helpful discussions. We are grateful to the anonymous reviewers for their useful feedback.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"kishimoto-etal-2020-adapting","url":"https:\/\/aclanthology.org\/2020.lrec-1.145","title":"Adapting BERT to Implicit Discourse Relation Classification with a Focus on Discourse Connectives","abstract":"BERT, a neural network-based language model pre-trained on large corpora, is a breakthrough in natural language processing, significantly outperforming previous state-of-the-art models in numerous tasks. However, there have been few reports on its application to implicit discourse relation classification, and it is not clear how BERT is best adapted to the task. In this paper, we test three methods of adaptation. (1) We perform additional pre-training on text tailored to discourse classification. (2) In expectation of knowledge transfer from explicit discourse relations to implicit discourse relations, we add a task named explicit connective prediction at the additional pre-training step. (3) To exploit implicit connectives given by treebank annotators, we add a task named implicit connective prediction at the fine-tuning step. We demonstrate that these three techniques can be combined straightforwardly in a single training pipeline. Through comprehensive experiments, we found that the first and second techniques provide additional gain while the last one did not.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"schmidt-etal-1996-lean","url":"https:\/\/aclanthology.org\/C96-1049","title":"Lean Formalisms, Linguistic Theory and Applications. Grammar Development in ALEP.","abstract":"This paper describes results achieved in a project which addresses the issue of how the gap between unification-based grammars as a scientific concept and real world applications can be narrowed down.
Application-oriented grammar development has to take into account the following parameters: Efficiency: The project chose a so called 'lean' formalism, a term-encodable language providing efficient term unification, ALEP. Coverage: The project adopted a corpus-based approach. Completeness: All modules needed from text handling to semantics must be there. The paper reports on a text handling component, Two Level morphology, word structure, phrase structure, semantics and the interfaces between these components. Mainstream approach: The approach claims to be mainstream, very much indebted to HPSG, thus based on the currently most prominent and recent linguistic theory. The relation (and tension) between these parameters are described in this paper.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"liu-ng-2012-character","url":"https:\/\/aclanthology.org\/P12-1097","title":"Character-Level Machine Translation Evaluation for Languages with Ambiguous Word Boundaries","abstract":"In this work, we introduce the TESLA-CELAB metric (Translation Evaluation of Sentences with Linear-programming-based Analysis-Character-level Evaluation for Languages with Ambiguous word Boundaries) for automatic machine translation evaluation. For languages such as Chinese where words usually have meaningful internal structure and word boundaries are often fuzzy, TESLA-CELAB acknowledges the advantage of character-level evaluation over word-level evaluation. By reformulating the problem in the linear programming framework, TESLA-CELAB addresses several drawbacks of the character-level metrics, in particular the modeling of synonyms spanning multiple characters. We show empirically that TESLA-CELAB significantly outperforms character-level BLEU in the English-Chinese translation evaluation tasks.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is supported by the Singapore National Research Foundation under its International Research Centre @ Singapore Funding Initiative and administered by the IDM Programme Office.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"kapustin-kapustin-2019-modeling","url":"https:\/\/aclanthology.org\/W19-0604","title":"Modeling language constructs with fuzzy sets: some approaches, examples and interpretations","abstract":"We present and discuss a couple of approaches, including different types of projections, and some examples, discussing the use of fuzzy sets for modeling meaning of certain types of language constructs. We are mostly focusing on words other than adjectives and linguistic hedges as these categories are the most studied from before. We discuss logical and linguistic interpretations of membership functions.
We argue that using fuzzy sets for modeling meaning of words and other natural language constructs, along with situations described with natural language is interesting both from purely linguistic perspective, and also as a meaning representation for problems of computational linguistics and natural language processing.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Vadim Kimmelman and Csaba Veres for helpful discussions and comments. We thank anonymous reviewers for helpful feedback.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"whitehead-etal-2018-incorporating","url":"https:\/\/aclanthology.org\/D18-1433","title":"Incorporating Background Knowledge into Video Description Generation","abstract":"Most previous efforts toward video captioning focus on generating generic descriptions, such as, \"A man is talking.\" We collect a news video dataset to generate enriched descriptions that include important background knowledge, such as named entities and related events, which allows the user to fully understand the video content. We develop an approach that uses video meta-data to retrieve topically related news documents for a video and extracts the events and named entities from these documents. Then, given the video as well as the extracted events and entities, we generate a description using a Knowledge-aware Video Description network. The model learns to incorporate entities found in the topically related documents into the description via an entity pointer network and the generation procedure is guided by the event and entity types from the topically related documents through a knowledge gate, which is a gating mechanism added to the model's decoder that takes a one-hot vector of these types. We evaluate our approach on the new dataset of news videos we have collected, establishing the first benchmark for this dataset as well as proposing a new metric to evaluate these descriptions.\nVideo captioning is a challenging task that seeks to automatically generate a natural language description of the content of a video. Many video captioning efforts focus on learning video representations that model the spatial and temporal dynamics of the videos (Venugopalan et al., 2016; Yu et al., 2017). Although the language generation component within this task is of great importance, less work has been done to enhance the contextual knowledge conveyed by the descriptions. The descriptions generated by previous methods tend to be \"generic\", describing only what is evidently visible and lacking specific knowledge, like named entities and event participants, as shown in Figure 1a. In many situations, however, generic descriptions are uninformative as they do not provide contextual knowledge. For example, in Figure 1b, details such as who is speaking or why they are speaking are imperative to truly understanding the video, since contextual knowledge gives the surrounding circumstances or cause of the depicted events. To address this problem, we collect a news video dataset, where each video is accompanied by meta-data (e.g., tags and date) and a natural language description of the content in, and\/or context around, the video. We create an approach to this task that is motivated by two observations. First, the video content alone is insufficient to generate the description.
Named entities or specific events are necessary to identify the participants, location, and\/or cause of the video content. Although knowledge could potentially be mined from visual evidence (e.g., recognizing the location), training such a system is exceedingly difficult (Tran et al., 2016). Further, not all the knowledge necessary for the description may appear in the video. In Figure 2a, the video depicts much of the description content, but knowledge of the speaker (\"Carles Puigdemont\") is unavailable if limited to the visual evidence because the speaker never appears in the video, making it intractable to incorporate this knowledge into the description.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the U.S. DARPA AIDA Program No. FA8750-18-2-0014 and U.S. ARL NS-CTA No. W911NF-09-2-0053. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"fam-lepage-2018-tools","url":"https:\/\/aclanthology.org\/L18-1171","title":"Tools for The Production of Analogical Grids and a Resource of N-gram Analogical Grids in 11 Languages","abstract":"We release a Python module containing several tools to build analogical grids from words contained in a corpus. The module implements several previously presented algorithms. The tools are language-independent. This permits their use with any language and any writing system. We hope that the tools will ease research in morphology by allowing researchers to automatically obtain structured representations of the vocabulary contained in corpora or linguistic data. We also release analogical grids built on the vocabularies contained in 1,000 corresponding lines of the 11 different language versions of the Europarl corpus v.3. The grids were built on N-grams of different lengths, from words to 6-grams. We hope that the use of structured parallel data will foster research in comparative linguistics.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"almeida-costa-etal-2020-building","url":"https:\/\/aclanthology.org\/2020.coling-main.533","title":"Building The First English-Brazilian Portuguese Corpus for Automatic Post-Editing","abstract":"This paper introduces the first corpus for Automatic Post-Editing of English and a low-resource language, Brazilian Portuguese. The source English texts were extracted from the WebNLG corpus and automatically translated into Portuguese using a state-of-the-art industrial neural machine translator. Post-edits were then obtained in an experiment with native speakers of Brazilian Portuguese. To assess the quality of the corpus, we performed error analysis and computed complexity indicators measuring how difficult the APE task would be. We report preliminary results of Phrase-Based and Neural Machine Translation Models on this new corpus.
Data and code publicly available in our repository.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was partially funded by the agencies CNPq, CAPES, and FAPEMIG. In particular, the researchers were supported by CNPQ grant No. 310630\/2017-7, CAPES Post doctoral grant No. 88887.508597\/2020-00, and FAPEMIG grant APQ-01.461-14. This work was also supported by projects MASWeb, EUBra-BIGSEA, INCT-CYBER, and ATMOSPHERE. The authors also wish to express their gratitude to Deepl for kindly granting a license to translate our corpus, and to the students at UFMG who took part in the post-editing experiment.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"banik-etal-2012-natural","url":"https:\/\/aclanthology.org\/W12-1521","title":"Natural Language Generation for a Smart Biology Textbook","abstract":"In this demo paper we describe the natural language generation component of an electronic textbook application, called Inquire. Inquire interacts with a knowledge base which encodes information from a biology textbook. The application includes a question-understanding module which allows students to ask questions about the contents of the book, and a question-answering module which retrieves the corresponding answer from the knowledge base. The task of the natural language generation module is to present specific parts of the answer in English. Our current generation pipeline handles inputs that describe the biological functions of entities, the steps of biological processes, and the spatial relations between parts of entities. Our ultimate goal is to generate paragraph-length texts from arbitrary paths in the knowledge base. We describe here the natural language generation pipeline and demonstrate the inputs and generated texts. In the demo presentation we will show the textbook application and the knowledge base authoring environment, and provide an opportunity to interact with the system.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"blodgett-schneider-2019-improved","url":"https:\/\/aclanthology.org\/W19-0405","title":"An Improved Approach for Semantic Graph Composition with CCG","abstract":"This paper builds on previous work using Combinatory Categorial Grammar (CCG) to derive a transparent syntax-semantics interface for Abstract Meaning Representation (AMR) parsing. We define new semantics for the CCG combinators that is better suited to deriving AMR graphs. In particular, we define relation-wise alternatives for the application and composition combinators: these require that the two constituents being combined overlap in one AMR relation. We also provide a new semantics for type raising, which is necessary for certain constructions. Using these mechanisms, we suggest an analysis of eventive nouns, which present a challenge for deriving AMR graphs.
Our theoretical analysis will facilitate future work on robust and transparent AMR parsing using CCG.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We want to thank Paul Portner, Adam Lopez, members of the NERT lab at Georgetown, and anonymous reviewers for their helpful feedback on this research, as well as Matthew Honnibal, Siva Reddy, and Mark Steedman for early discussions about light verbs in CCG.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"hirschman-etal-2001-integrated","url":"https:\/\/aclanthology.org\/H01-1038","title":"Integrated Feasibility Experiment for Bio-Security: IFE-Bio, A TIDES Demonstration","abstract":"As part of MITRE's work under the DARPA TIDES (Translingual Information Detection, Extraction and Summarization) program, we are preparing a series of demonstrations to showcase the TIDES Integrated Feasibility Experiment on Bio-Security (IFE-Bio). The current demonstration illustrates some of the resources that can be made available to analysts tasked with monitoring infectious disease outbreaks and other biological threats.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2001,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"yang-etal-2019-exploiting","url":"https:\/\/aclanthology.org\/N19-1325","title":"Exploiting Noisy Data in Distant Supervision Relation Classification","abstract":"Distant supervision has obtained great progress on relation classification task. However, it still suffers from noisy labeling problem. Different from previous works that underutilize noisy data which inherently characterize the property of classification, in this paper, we propose RCEND, a novel framework to enhance Relation Classification by Exploiting Noisy Data. First, an instance discriminator with reinforcement learning is designed to split the noisy data into correctly labeled data and incorrectly labeled data. Second, we learn a robust relation classifier in semi-supervised learning way, whereby the correctly and incorrectly labeled data are treated as labeled and unlabeled data respectively. The experimental results show that our method outperforms the state-of-the-art models.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to express gratitude to Robert Ridley and the anonymous reviewers for their valuable feedback on the paper. This work is supported by the National Natural Science Foundation of China (No. 61672277, U1836221) , the Jiangsu Provincial Research Foundation for Basic Research (No. BK20170074).","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"fang-etal-2018-sounding","url":"https:\/\/aclanthology.org\/N18-5020","title":"Sounding Board: A User-Centric and Content-Driven Social Chatbot","abstract":"We present Sounding Board, a social chatbot that won the 2017 Amazon Alexa Prize. 
The system architecture consists of several components including spoken language processing, dialogue management, language generation, and content management, with emphasis on user-centric and content-driven design. We also share insights gained from large-scale online logs based on 160,000 conversations with real-world users.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"In addition to the Alexa Prize financial and cloud computing support, this work was supported in part by NSF Graduate Research Fellowship (awarded to E. Clark), NSF (IIS-1524371), and DARPA CwC program through ARO (W911NF-15-1-0543). The conclusions and findings are those of the authors and do not necessarily reflect the views of sponsors.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"lai-etal-2019-cuhk","url":"https:\/\/aclanthology.org\/K19-2010","title":"CUHK at MRP 2019: Transition-Based Parser with Cross-Framework Variable-Arity Resolve Action","abstract":"This paper describes our system (RE-SOLVER) submitted to the CoNLL 2019 shared task on Cross-Framework Meaning Representation Parsing (MRP). Our system implements a transition-based parser with a directed acyclic graph (DAG) to tree preprocessor and a novel cross-framework variable-arity resolve action that generalizes over five different representations. Although we ranked low in the competition, we have shown the current limitations and potentials of including variable-arity action in MRP and concluded with directions for improvements in the future.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"sproat-etal-2014-database","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/47_Paper.pdf","title":"A Database for Measuring Linguistic Information Content","abstract":"Which languages convey the most information in a given amount of space? This is a question often asked of linguists, especially by engineers who often have some information theoretic measure of \"information\" in mind, but rarely define exactly how they would measure that information. The question is, in fact remarkably hard to answer, and many linguists consider it unanswerable. But it is a question that seems as if it ought to have an answer. If one had a database of close translations between a set of typologically diverse languages, with detailed marking of morphosyntactic and morphosemantic features, one could hope to quantify the differences between how these different languages convey information. Since no appropriate database exists we decided to construct one. The purpose of this paper is to present our work on the database, along with some preliminary results. We plan to release the dataset once complete.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We wish to thank the language experts who helped us with designing language-particular feature sets and annotating the data: Costanza Asnaghi, Elixabete Murguia Gomez, Zainab Hossainzadeh, Josie Li, Thomas Meyer, Fayeq Oweis, Tanya Scott. 
Thanks also to Daniel van Esch for helping arrange for some of the annotation work.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"karan-etal-2013-frequently","url":"https:\/\/aclanthology.org\/W13-2405","title":"Frequently Asked Questions Retrieval for Croatian Based on Semantic Textual Similarity","abstract":"Frequently asked questions (FAQ) are an efficient way of communicating domainspecific information to the users. Unlike general purpose retrieval engines, FAQ retrieval engines have to address the lexical gap between the query and the usually short answer. In this paper we describe the design and evaluation of a FAQ retrieval engine for Croatian. We frame the task as a binary classification problem, and train a model to classify each FAQ as either relevant or not relevant for a given query. We use a variety of semantic textual similarity features, including term overlap and vector space features. We train and evaluate on a FAQ test collection built specifically for this purpose. Our best-performing model reaches 0.47 of mean reciprocal rank, i.e., on average ranks the relevant answer among the top two returned answers.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been supported by the Ministry of Science, Education and Sports, Republic of Croatia under the Grant 036-1300646-1986. We thank the reviewers for their constructive comments.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"marsh-1998-tipster","url":"https:\/\/aclanthology.org\/X98-1029","title":"TIPSTER Information Extraction Evaluation: The MUC-7 Workshop","abstract":"The last of the \"Message Understanding Conferences\", which were designed to evaluate text extraction systems, was held in April 1998 in Fairfax, Virginia. The workshop was co-chaired by Elaine Marsh and Ralph Grishman. A group of 18 organizations, both from the United States and abroad, participated in the evaluation.\nMUC-7 introduced a wider set of tasks with larger sets of training and formal data than previous MUCs. Results showed that while performance on the named entity and template elements task remains relatively high, additional research is still necessary for improved performance on more difficult tasks such as coreference resolution and domain-specific template generation from textual sources.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"yang-etal-2016-chinese","url":"https:\/\/aclanthology.org\/W16-4920","title":"Chinese Grammatical Error Diagnosis Using Single Word Embedding","abstract":"Automatic grammatical error detection for Chinese has been a big challenge for NLP researchers. Due to the formal and strict grammar rules in Chinese, it is hard for foreign students to master Chinese. A computer-assisted learning tool which can automatically detect and correct Chinese grammatical errors is necessary for those foreign students. Some of the previous works have sought to identify Chinese grammatical errors using template-and learning-based methods. 
In contrast, this study introduced convolutional neural network (CNN) and long-short term memory (LSTM) for the shared task of Chinese Grammatical Error Diagnosis (CGED). Different from traditional word-based embedding, single word embedding was used as input of CNN and LSTM. The proposed single word embedding can capture both semantic and syntactic information to detect those four type grammatical error. In experimental evaluation, the recall and f1-score of our submitted results Run1 of the TOCFL testing data ranked the fourth place in all submissions in detection-level.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by The Natural Science Foundation of Yunnan Province (Nos. 2013FB010).","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"raunak-etal-2020-dimensional","url":"https:\/\/aclanthology.org\/2020.repl4nlp-1.19","title":"On Dimensional Linguistic Properties of the Word Embedding Space","abstract":"Word embeddings have become a staple of several natural language processing tasks, yet much remains to be understood about their properties. In this work, we analyze word embeddings in terms of their principal components and arrive at a number of novel and counterintuitive observations. In particular, we characterize the utility of variance explained by the principal components as a proxy for downstream performance. Furthermore, through syntactic probing of the principal embedding space, we show that the syntactic information captured by a principal component does not correlate with the amount of variance it explains. Consequently, we investigate the limitations of variance based embedding post-processing, used in a few algorithms such as (Mu and Viswanath, 2018; Raunak et al., 2019) and demonstrate that such postprocessing is counter-productive in sentence classification and machine translation tasks. Finally, we offer a few precautionary guidelines on applying variance based embedding post-processing and explain why non-isotropic geometry might be integral to word embedding performance.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"ferret-2021-using","url":"https:\/\/aclanthology.org\/2021.paclic-1.20","title":"Using Distributional Principles for the Semantic Study of Contextual Language Models","abstract":"Many studies were recently done for investigating the properties of contextual language models but surprisingly, only a few of them consider the properties of these models in terms of semantic similarity. In this article, we first focus on these properties for English by exploiting the distributional principle of substitution as a probing mechanism in the controlled context of SemCor and WordNet paradigmatic relations. 
Then, we propose to adapt the same method to a more open setting for characterizing the differences between static and contextual language models.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been partially funded by French National Research Agency (ANR) under project ADDICTE (ANR-17-CE23-0001).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"supnithi-etal-2010-autotagtcg","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/868_Paper.pdf","title":"AutoTagTCG : A Framework for Automatic Thai CG Tagging","abstract":"Recently, categorical grammar has been focused as a powerful grammar. This paper aims to develop a framework for automatic CG tagging for Thai. We investigated two main algorithms, CRF and Statistical alignment model based on information theory (SAM). We found that SAM gives the best results both in word level and sentence level. We got the accuracy 89.25% in word level and 82.49% in sentence level. SAM is better than CRF in known word. On the other hand, CRF is better than SAM when we applied for unknown word. Combining both methods can be suited for both known and unknown word.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"sukhareva-etal-2017-distantly","url":"https:\/\/aclanthology.org\/W17-2213","title":"Distantly Supervised POS Tagging of Low-Resource Languages under Extreme Data Sparsity: The Case of Hittite","abstract":"This paper presents a statistical approach to automatic morphosyntactic annotation of Hittite transcripts. Hittite is an extinct Indo-European language using the cuneiform script. There are currently no morphosyntactic annotations available for Hittite, so we explored methods of distant supervision. The annotations were projected from parallel German translations of the Hittite texts. In order to reduce data sparsity, we applied stemming of German and Hittite texts. As there is no off-the-shelf Hittite stemmer, a stemmer for Hittite was developed for this purpose. The resulting annotation projections were used to train a POS tagger, achieving an accuracy of 69% on a test sample. To our knowledge, this is the first attempt of statistical POS tagging of a cuneiform language.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The first and third author were supported by the German Federal Ministry of Education and Research (BMBF) under the promotional reference 01UG1416B (CEDIFOR).","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"ren-etal-2021-rocketqav2","url":"https:\/\/aclanthology.org\/2021.emnlp-main.224","title":"RocketQAv2: A Joint Training Method for Dense Passage Retrieval and Passage Re-ranking","abstract":"In various natural language processing tasks, passage retrieval and passage re-ranking are two key procedures in finding and ranking relevant information. Since both the two procedures contribute to the final performance, it is important to jointly optimize them in order to achieve mutual improvement.
In this paper, we propose a novel joint training approach for dense passage retrieval and passage re-ranking. A major contribution is that we introduce the dynamic listwise distillation, where we design a unified listwise training approach for both the retriever and the re-ranker. During the dynamic distillation, the retriever and the re-ranker can be adaptively improved according to each other's relevance information. We also propose a hybrid data augmentation strategy to construct diverse training instances for listwise training approach. Extensive experiments show the effectiveness of our approach on both MSMARCO and Natural Questions datasets. Our code is available at https:\/\/github.com\/PaddlePaddle\/RocketQA. * Equal contribution. The work was done when Ruiyang Ren was doing internship at Baidu.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"lee-etal-2016-call","url":"https:\/\/aclanthology.org\/P16-1093","title":"A CALL System for Learning Preposition Usage","abstract":"Fill-in-the-blank items are commonly featured in computer-assisted language learning (CALL) systems. An item displays a sentence with a blank, and often proposes a number of choices for filling it. These choices should include one correct answer and several plausible distractors. We describe a system that, given an English corpus, automatically generates distractors to produce items for preposition usage. We report a comprehensive evaluation on this system, involving both experts and learners. First, we analyze the difficulty levels of machine-generated carrier sentences and distractors, comparing several methods that exploit learner error and learner revision patterns. We show that the quality of machine-generated items approaches that of human-crafted ones. Further, we investigate the extent to which mismatched L1 between the user and the learner corpora affects the quality of distractors. Finally, we measure the system's impact on the user's language proficiency in both the short and the long term.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank NetDragon Websoft Holding Limited for their assistance with system evaluation, and the reviewers for their very helpful comments. This work was partially supported by an Applied Research Grant (Project no. 9667115) from City University of Hong Kong.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"allen-frisch-1982-whats","url":"https:\/\/aclanthology.org\/P82-1004","title":"What's in a Semantic Network?","abstract":"Ever since Woods's \"What's in a Link\" paper, there has been a growing concern for formalization in the study of knowledge representation. Several arguments have been made that frame representation languages and semantic-network languages are syntactic variants of the first-order predicate calculus (FOPC). The typical argument proceeds by showing how any given frame or network representation can be mapped to a logically isomorphic FOPC representation. For the past two years we have been studying the formalization of knowledge retrievers as well as the representation languages that they operate on.
This paper presents a representation language in the notation of FOPC whose form facilitates the design of a semantic-network-like retriever.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by the National Science Foundation under Grant IST-80-12418, and in part by the Office of Naval Research under Grant N00014-80-C-0197.","year":1982,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"priban-steinberger-2021-multilingual","url":"https:\/\/aclanthology.org\/2021.ranlp-1.128","title":"Are the Multilingual Models Better? Improving Czech Sentiment with Transformers","abstract":"In this paper, we aim at improving Czech sentiment with transformer-based models and their multilingual versions. More concretely, we study the task of polarity detection for the Czech language on three sentiment polarity datasets. We fine-tune and perform experiments with five multilingual and three monolingual models. We compare the monolingual and multilingual models' performance, including comparison with the older approach based on recurrent neural networks. Furthermore, we test the multilingual models and their ability to transfer knowledge from English to Czech (and vice versa) with zero-shot cross-lingual classification. Our experiments show that the huge multilingual models can overcome the performance of the monolingual models. They are also able to detect polarity in another language without any training data, with performance not worse than 4.4 % compared to state-of-the-art monolingual trained models. Moreover, we achieved new state-of-the-art results on all three datasets.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been partly supported by ERDF \"Research and Development of Intelligent Components of Advanced Technologies for the Pilsen Metropolitan Area (InteCom)\" (no.: CZ.02.1.01\/0.0\/0.0\/17 048\/0007267); and by Grant No. SGS-2019-018 Processing of heterogeneous data and its specialized applications. Computational resources were supplied by the project \"e-Infrastruktura CZ\" (e-","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"hajicova-2014-three","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/39_Paper.pdf","title":"Three dimensions of the so-called ``interoperability'' of annotation schemes''","abstract":"Interoperability\" of annotation schemes is one of the key words in the discussions about annotation of corpora. In the present contribution, we propose to look at the so-called interoperability from (at least) three angles, namely (i) as a relation (and possible interaction or cooperation) of different annotation schemes for different layers or phenomena of a single language, (ii) the possibility to annotate different languages by a single (modified or not) annotation scheme, and (iii) the relation between different annotation schemes for a single language, or for a single phenomenon or layer of the same language. The pros and cons of each of these aspects are discussed as well as their contribution to linguistic studies and natural language processing.
It is stressed that a communication and collaboration between different annotation schemes requires an explicit specification and consistency of each of the schemes.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"zavrel-daelemans-1997-memory","url":"https:\/\/aclanthology.org\/P97-1056","title":"Memory-Based Learning: Using Similarity for Smoothing","abstract":"This paper analyses the relation between the use of similarity in Memory-Based Learning and the notion of backed-off smoothing in statistical language modeling. We show that the two approaches are closely related, and we argue that feature weighting methods in the Memory-Based paradigm can offer the advantage of automatically specifying a suitable domain-specific hierarchy between most specific and most general conditioning information without the need for a large number of parameters. We report two applications of this approach: PP-attachment and POS tagging. Our method achieves state-of-the-art performance in both domains, and allows the easy integration of diverse information sources, such as rich lexical representations.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was done in the context of the \"Induction of Linguistic Knowledge\" research programme, partially supported by the Foundation for Language Speech and Logic (TSL), which is funded by the Netherlands Organization for Scientific Research (NWO). We would like to thank Peter Berck and Anders Green for their help with software for the experiments.","year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"koehn-etal-2009-462","url":"https:\/\/aclanthology.org\/2009.mtsummit-papers.7","title":"462 Machine Translation Systems for Europe","abstract":"We built 462 machine translation systems for all language pairs of the Acquis Communautaire corpus. We report and analyse the performance of these systems, and compare them against pivot translation and a number of system combination methods (multi-pivot, multi-source) that are possible due to the available systems.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"lekhtman-etal-2021-dilbert","url":"https:\/\/aclanthology.org\/2021.emnlp-main.20","title":"DILBERT: Customized Pre-Training for Domain Adaptation with Category Shift, with an Application to Aspect Extraction","abstract":"The rise of pre-trained language models has yielded substantial progress in the vast majority of Natural Language Processing (NLP) tasks. However, a generic approach towards the pre-training procedure can naturally be sub-optimal in some cases. Particularly, fine-tuning a pre-trained language model on a source domain and then applying it to a different target domain, results in a sharp performance decline of the eventual classifier for many source-target domain pairs. Moreover, in some NLP tasks, the output categories substantially differ between domains, making adaptation even more challenging.
This, for example, happens in the task of aspect extraction, where the aspects of interest of reviews of, e.g., restaurants or electronic devices may be very different. This paper presents a new fine-tuning scheme for BERT, which aims to address the above challenges. We name this scheme DILBERT: Domain Invariant Learning with BERT, and customize it for aspect extraction in the unsupervised domain adaptation setting. DILBERT harnesses the categorical information of both the source and the target domains to guide the pre-training process towards a more domain and category invariant representation, thus closing the gap between the domains. We show that DILBERT yields substantial improvements over state-of-the-art baselines while using a fraction of the unlabeled data, particularly in more challenging domain adaptation setups.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the members of the IE@Technion NLP group for their valuable feedback and advice. This research was partially funded by an ISF personal grant No. 1625\/18.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"habash-2008-four","url":"https:\/\/aclanthology.org\/P08-2015","title":"Four Techniques for Online Handling of Out-of-Vocabulary Words in Arabic-English Statistical Machine Translation","abstract":"We present four techniques for online handling of Out-of-Vocabulary words in Phrase-based Statistical Machine Translation. The techniques use spelling expansion, morphological expansion, dictionary term expansion and proper name transliteration to reuse or extend a phrase table. We compare the performance of these techniques and combine them. Our results show a consistent improvement over a state-of-the-art baseline in terms of BLEU and a manual error analysis.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"passban-etal-2018-improving","url":"https:\/\/aclanthology.org\/N18-1006","title":"Improving Character-Based Decoding Using Target-Side Morphological Information for Neural Machine Translation","abstract":"Recently, neural machine translation (NMT) has emerged as a powerful alternative to conventional statistical approaches. However, its performance drops considerably in the presence of morphologically rich languages (MRLs). Neural engines usually fail to tackle the large vocabulary and high out-of-vocabulary (OOV) word rate of MRLs. Therefore, it is not suitable to exploit existing word-based models to translate this set of languages. In this paper, we propose an extension to the state-of-the-art model of Chung et al. (2016), which works at the character level and boosts the decoder with target-side morphological information. In our architecture, an additional morphology table is plugged into the model. Each time the decoder samples from a target vocabulary, the table sends auxiliary signals from the most relevant affixes in order to enrich the decoder's current state and constrain it to provide better predictions.
We evaluated our model to translate English into German, Russian, and Turkish as three MRLs and observed significant improvements.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank our anonymous reviewers for their valuable feedback, as well as the Irish centre for high-end computing (www.ichec.ie) for providing computational infrastructures. This work has been supported by the ADAPT Centre for Digital Content Technology which is funded under the SFI Research Centres Programme (Grant 13\/RC\/2106) and is co-funded under the European Regional Development Fund.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"ninomiya-etal-2002-indexing","url":"https:\/\/aclanthology.org\/C02-2024","title":"An Indexing Scheme for Typed Feature Structures","abstract":"This paper describes an indexing substrate for typed feature structures (ISTFS), which is an efficient retrieval engine for typed feature structures. Given a set of typed feature structures, the ISTFS efficiently retrieves its subset whose elements are unifiable or in a subsumption relation with a query feature structure. The efficiency of the ISTFS is achieved by calculating a unifiability checking table prior to retrieval and finding the best index paths dynamically. * This research is partially funded by JSPS Research Fellowship for Young Scientists.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"becquin-2020-end","url":"https:\/\/aclanthology.org\/2020.nlposs-1.4","title":"End-to-end NLP Pipelines in Rust","abstract":"The recent progress in natural language processing research has been supported by the development of a rich open source ecosystem in Python. Libraries allowing NLP practitioners but also non-specialists to leverage state-of-the-art models have been instrumental in the democratization of this technology. The maturity of the open-source NLP ecosystem however varies between languages. This work proposes a new open-source library aimed at bringing state-of-the-art NLP to Rust. Rust is a systems programming language for which the foundations required to build machine learning applications are available but still lacks ready-to-use, end-to-end NLP libraries. The proposed library, rust-bert, implements modern language models and ready-to-use pipelines (for example translation or summarization). This allows further development by the Rust community from both NLP experts and non-specialists. It is hoped that this library will accelerate the development of the NLP ecosystem in Rust. The library is under active development and available at https:\/\/github.
com\/guillaume-be\/rust-bert.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"agrawal-etal-2021-assessing","url":"https:\/\/aclanthology.org\/2021.naacl-main.91","title":"Assessing Reference-Free Peer Evaluation for Machine Translation","abstract":"Reference-free evaluation has the potential to make machine translation evaluation substantially more scalable, allowing us to pivot easily to new languages or domains. It has been recently shown that the probabilities given by a large, multilingual model can achieve state of the art results when used as a reference-free metric. We experiment with various modifications to this model, and demonstrate that by scaling it up we can match the performance of BLEU. We analyze various potential weaknesses of the approach, and find that it is surprisingly robust and likely to offer reasonable performance across a broad spectrum of domains and different system qualities.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Julia Kreutzer, Ciprian Chelba, Aditya Siddhant, and the anonymous reviewers for their helpful and constructive comments.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"kumar-etal-2020-nurse","url":"https:\/\/aclanthology.org\/2020.tacl-1.32","title":"Nurse is Closer to Woman than Surgeon? Mitigating Gender-Biased Proximities in Word Embeddings","abstract":"Word embeddings are the standard model for semantic and syntactic representations of words. Unfortunately, these models have been shown to exhibit undesirable word associations resulting from gender, racial, and religious biases. Existing post-processing methods for debiasing word embeddings are unable to mitigate gender bias hidden in the spatial arrangement of word vectors. In this paper, we propose RAN-Debias, a novel gender debiasing methodology that not only eliminates the bias present in a word vector but also alters the spatial distribution of its neighboring vectors, achieving a bias-free setting while maintaining minimal semantic offset. We also propose a new bias evaluation metric, Gender-based Illicit Proximity Estimate (GIPE), which measures the extent of undue proximity in word vectors resulting from the presence of gender-based predilections. Experiments based on a suite of evaluation metrics show that RAN-Debias significantly outperforms the state-of-the-art in reducing proximity bias (GIPE) by at least 42.02%. It also reduces direct bias, adding minimal semantic disturbance, and achieves the best performance in a downstream application task (coreference resolution).","label_nlp4sg":1,"task":null,"method":null,"goal1":"Gender Equality","goal2":null,"goal3":null,"acknowledgments":"The work was partially supported by the Ramanujan Fellowship, DST (ECR\/2017\/00l691). T. 
Chakraborty would like to acknowledge the support of the Infosys Center for AI, IIIT-Delhi.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":1,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"saharia-etal-2009-part","url":"https:\/\/aclanthology.org\/P09-2009","title":"Part of Speech Tagger for Assamese Text","abstract":"Assamese is a morphologically rich, agglutinative and relatively free word order Indic language. Although spoken by nearly 30 million people, very little computational linguistic work has been done for this language. In this paper, we present our work on part of speech (POS) tagging for Assamese using the well-known Hidden Markov Model. Since no well-defined suitable tagset was available, we develop a tagset of 172 tags in consultation with experts in linguistics. For successful tagging, we examine relevant linguistic issues in Assamese. For unknown words, we perform simple morphological analysis to determine probable tags. Using a manually tagged corpus of about 10000 words for training, we obtain a tagging accuracy of nearly 87% for test inputs.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"xianwei-etal-2021-emotion","url":"https:\/\/aclanthology.org\/2021.ccl-1.82","title":"Emotion Classification of COVID-19 Chinese Microblogs based on the Emotion Category Description","abstract":"Emotion classification of COVID-19 Chinese microblogs helps analyze the public opinion triggered by COVID-19. Existing methods only consider the features of the microblog itself, without combining the semantics of emotion categories for modeling. Emotion classification of microblogs is a process of reading the content of microblogs and combining the semantics of emotion categories to understand whether it contains a certain emotion. Inspired by this, we propose an emotion classification model based on the emotion category description for COVID-19 Chinese microblogs. Firstly, we expand all emotion categories into formalized category descriptions. Secondly, based on the idea of question answering, we construct a question for each microblog in the form of 'What is the emotion expressed in the text X?' and regard all category descriptions as candidate answers. Finally, we construct a question-and-answer pair and use it as the input of the BERT model to complete emotion classification. By integrating rich contextual and category semantics, the model can better understand the emotion of microblogs. 
Experiments on the COVID-19 Chinese microblog dataset show that our approach outperforms many existing emotion classification methods, including the BERT baseline.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} -{"ID":"gilbert-carl-2021-word","url":"https:\/\/aclanthology.org\/2021.motra-1.8","title":"Word Alignment Dissimilarity Indicator: Alignment Links as Conceptualizations of a Focused Bilingual Lexicon","abstract":"Starting from the assumption that different word alignments of translations represent differing conceptualizations of crosslingual equivalence, we assess the variation of six different alignment methods for English-to-Spanish translated and post-edited texts. We develop a word alignment dissimilarity indicator (WADI) and compare it to traditional segment-based alignment error rate (AER). We average the WADI scores over the possible 15 different pairings of the six alignment methods for each source token and correlate the averaged WADI scores with translation process and product measures, including production duration, number of insertions, and word translation entropy. Results reveal modest correlations between WADI and production duration and insertions, as well as a moderate correlation between WADI and word translation entropy. This shows that differences in alignment decisions reflect on variation in translation decisions and demonstrates that aggregate WADI score could be used as a word-level feature to estimate post-editing difficulty.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"zhong-etal-2021-useradapter","url":"https:\/\/aclanthology.org\/2021.findings-acl.129","title":"UserAdapter: Few-Shot User Learning in Sentiment Analysis","abstract":"Adapting a model to a handful of personalized data is challenging, especially when it has gigantic parameters, such as a Transformer-based pretrained model. The standard way of fine-tuning all the parameters necessitates storing a huge model for each user. In this work, we introduce a lightweight approach dubbed UserAdapter, which clamps hundred millions of parameters of the Transformer model and optimizes a tiny user-specific vector. We take sentiment analysis as a test bed, and collect datasets of reviews from Yelp and IMDB respectively. Results show that, on both datasets, UserAdapter achieves better accuracy than the standard fine-tuned Transformer-based pre-trained model. More importantly, UserAdapter offers an efficient way to produce a personalized Transformer model with less than 0.5% parameters added for each user.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Wanjun Zhong, Jiahai Wang and Jian Yin are supported by the National Natural Science Foundation of China (U1711262, U1711261, U1811264, U1811261, U1911203, U2001211), Guangdong Basic and Applied Basic Research Foundation (2019B1515130001), Key R&D Program of Guangdong Province (2018B010107005).
The corresponding author is Jian Yin.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"goerz-beckstein-1983-parse","url":"https:\/\/aclanthology.org\/E83-1019","title":"How to Parse Gaps in Spoken Utterances","abstract":"We describe GLP, a chart parser that will be used as a SYNTAX module of the Erlangen Speech Understanding System. GLP realizes an agenda-based multiprocessing scheme, which allows easily to apply various parsing strategies in a transparent way. We discuss which features have been incorporated into the parser in order to process speech data, in particular the ability to perform direction independent island parsing, to handle gaps in the utterance and its hypothesis scoring scheme.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1983,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"adiga-etal-2021-automatic","url":"https:\/\/aclanthology.org\/2021.findings-acl.447","title":"Automatic Speech Recognition in Sanskrit: A New Speech Corpus and Modelling Insights","abstract":"Automatic speech recognition (ASR) in Sanskrit is interesting, owing to the various linguistic peculiarities present in the language. The Sanskrit language is lexically productive, undergoes euphonic assimilation of phones at the word boundaries and exhibits variations in spelling conventions and in pronunciations. In this work, we propose the first large scale study of automatic speech recognition (ASR) in Sanskrit, with an emphasis on the impact of unit selection in Sanskrit ASR. In this work, we release a 78 hour ASR dataset for Sanskrit, which faithfully captures several of the linguistic characteristics expressed by the language. We investigate the role of different acoustic model and language model units in ASR systems for Sanskrit. We also propose a new modelling unit, inspired by the syllable level unit selection, that captures character sequences from one vowel in the word to the next vowel. We also highlight the importance of choosing graphemic representations for Sanskrit and show the impact of this choice on word error rates (WER). Finally, we extend these insights from Sanskrit ASR for building ASR systems in two other Indic languages, Gujarati and Telugu. For both these languages, our experimental results show that the use of phonetic based graphemic representations in ASR results in performance improvements as compared to ASR systems that use native scripts. * Joint first author. Dataset and code can be accessed from www.cse.iitb.ac.in\/~asr and https:\/\/github.com\/cyfer0618\/Vaksanca.git.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Prof. K. Ramasubramanian, IIT Bombay, for supporting the creation of Sanskrit speech corpus.
We express our gratitude to the volunteers who have participated in recording readings of classical Sanskrit texts and helping make this resource available for the purpose of research.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"antunes-mendes-2014-evaluation","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/1197_Paper.pdf","title":"An evaluation of the role of statistical measures and frequency for MWE identification","abstract":"We report on an experiment to evaluate the role of statistical association measures and frequency for the identification of MWE. We base our evaluation on a lexicon of 14.000 MWE comprising different types of word combinations: collocations, nominal compounds, light verbs + predicate, idioms, etc. These MWE were manually validated from a list of n-grams extracted from a 50 million word corpus of Portuguese (a subcorpus of the Reference Corpus of Contemporary Portuguese), using several criteria: syntactic fixedness, idiomaticity, frequency and Mutual Information measure, although no threshold was established, either in terms of group frequency or MI. We report on MWE that were selected on the basis of their syntactic and semantics properties while the MI or both the MI and the frequency show low values, which would constitute difficult cases to establish a cutting point. We analyze the MI values of the MWE selected in our gold dataset and, for some specific cases, compare these values with two other statistical measures.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially supported by national funds through FCT -Funda\u00e7\u00e3o para a Ci\u00eancia e Technologia, under project PEst-OE\/LIN\/UI0214\/2013. We would like to thank the anonymous reviewers for their helpful comments and suggestions.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"caglayan-etal-2016-multimodality","url":"https:\/\/aclanthology.org\/W16-2358","title":"Does Multimodality Help Human and Machine for Translation and Image Captioning?","abstract":"This paper presents the systems developed by LIUM and CVC for the WMT16 Multimodal Machine Translation challenge. We explored various comparative methods, namely phrase-based systems and attentional recurrent neural networks models trained using monomodal or multimodal data. We also performed a human evaluation in order to estimate the usefulness of multimodal data for human machine translation and image description generation. Our systems obtained the best results for both tasks according to the automatic evaluation metrics BLEU and METEOR .","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the Chist-ERA project M2CR 4 . 
We kindly thank KyungHyun Cho and Orhan Firat for providing the DL4MT tutorial as open source and Kelvin Xu for the arcticcaptions 5 system.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"acs-2018-bme","url":"https:\/\/aclanthology.org\/K18-3016","title":"BME-HAS System for CoNLL--SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection","abstract":"This paper presents an encoder-decoder neural network based solution for both subtasks of the CoNLL-SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection. All of our models are sequence-to-sequence neural networks with multiple encoders and a single decoder.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"fissaha-haller-2003-application","url":"https:\/\/aclanthology.org\/2003.mtsummit-semit.7","title":"Application of corpus-based techniques to Amharic texts","abstract":"A number of corpus-based techniques have been used in the development of natural language processing application. One area in which these techniques have extensively been applied is lexical development. The current work is being undertaken in the context of a machine translation project in which lexical development activities constitute a significant portion of the overall task. In the first part, we applied corpus-based techniques to the extraction of collocations from Amharic text corpus. Analysis of the output reveals important collocations that can usefully be incorporated in the lexicon. This is especially true for the extraction of idiomatic expressions. The patterns of idiom formation which are observed in a small manually collected data enabled extraction of large set of idioms which otherwise may be difficult or impossible to recognize. Furthermore, preliminary results of other corpus-based techniques, that is, clustering and classification, that are currently being under investigation are presented. The results show that clustering performed no better than the frequency base line whereas classification showed a clear performance improvement over the frequency base line. This in turn suggests the need to carry out further experiments using large sets of data and more contextual information.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"yue-zhou-2020-phicon","url":"https:\/\/aclanthology.org\/2020.clinicalnlp-1.23","title":"PHICON: Improving Generalization of Clinical Text De-identification Models via Data Augmentation","abstract":"De-identification is the task of identifying protected health information (PHI) in the clinical text. Existing neural de-identification models often fail to generalize to a new dataset. We propose a simple yet effective data augmentation method PHICON to alleviate the generalization issue. 
PHICON consists of PHI augmentation and Context augmentation, which creates augmented training corpora by replacing PHI entities with named-entities sampled from external sources, and by changing background context with synonym replacement or random word insertion, respectively. Experimental results on the i2b2 2006 and 2014 deidentification challenge datasets show that PHICON can help three selected de-identification models boost F1-score (by at most 8.6%) on cross-dataset test. We also discuss how much augmentation to use and how each augmentation method influences the performance. https:\/\/portal.dbmi.hms.harvard.edu\/projects\/n2c2-nlp\/","label_nlp4sg":1,"task":null,"method":null,"goal1":"Good Health and Well-Being","goal2":"Peace, Justice and Strong Institutions","goal3":null,"acknowledgments":"We thank Prof. Kwong-Sak LEUNG and Sunny Lai in The Chinese University of Hong Kong as well as anonymous reviewers for their helpful comments.","year":2020,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} -{"ID":"lin-etal-2019-kcat","url":"https:\/\/aclanthology.org\/P19-3017","title":"KCAT: A Knowledge-Constraint Typing Annotation Tool","abstract":"Fine-grained Entity Typing is a tough task which suffers from noise samples extracted from distant supervision. Thousands of manually annotated samples can achieve greater performance than millions of samples generated by the previous distant supervision method. Whereas, it's hard for human beings to differentiate and memorize thousands of types, thus making large-scale human labeling hardly possible. In this paper, we introduce a Knowledge-Constraint Typing Annotation Tool (KCAT), which is efficient for fine-grained entity typing annotation. KCAT reduces the size of candidate types to an acceptable range for human beings through entity linking and provides a Multi-step Typing scheme to revise the entity linking result. Moreover, KCAT provides an efficient Annotator Client to accelerate the annotation process and a comprehensive Manager Module to analyse crowdsourcing annotations. Experiment shows that KCAT can significantly improve annotation efficiency, the time consumption increases slowly as the size of type set expands.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"reckman-etal-2011-extracting","url":"https:\/\/aclanthology.org\/W11-0126","title":"Extracting aspects of determiner meaning from dialogue in a virtual world environment","abstract":"We use data from a virtual world game for automated learning of words and grammatical constructions and their meanings. The language data are an integral part of the social interaction in the game and consist of chat dialogue, which is only constrained by the cultural context, as set by the nature of the provided virtual environment. Building on previous work, where we extracted a vocabulary for concrete objects in the game by making use of the non-linguistic context, we now target NP\/DP grammar, in particular determiners. We assume that we have captured the meanings of a set of determiners if we can predict which determiner will be used in a particular context.
To this end we train a classifier that predicts the choice of a determiner on the basis of features from the linguistic and non-linguistic context. [Table of referring expressions for food, drink, and item types omitted.]","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was funded by a Rubicon grant from the Netherlands Organisation for Scientific Research (NWO), project nr. 446-09-011.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"rao-etal-2021-stanker","url":"https:\/\/aclanthology.org\/2021.emnlp-main.269","title":"STANKER: Stacking Network based on Level-grained Attention-masked BERT for Rumor Detection on Social Media","abstract":"Rumor detection on social media puts pretrained language models (LMs), such as BERT, and auxiliary features, such as comments, into use. However, on the one hand, rumor detection datasets in Chinese companies with comments are rare; on the other hand, intensive interaction of attention on Transformer-based models like BERT may hinder performance improvement. To alleviate these problems, we build a new Chinese microblog dataset named Weibo20 by collecting posts and associated comments from Sina Weibo and propose a new ensemble named STANKER (Stacking neTwork bAsed-on atteNtion-masKed BERT). STANKER adopts two level-grained attention-masked BERT (LGAM-BERT) models as base encoders. Unlike the original BERT, our new LGAM-BERT model takes comments as important auxiliary features and masks coattention between posts and comments on lower-layers. Experiments on Weibo20 and three existing social media datasets showed that STANKER outperformed all compared models, especially beating the old state-of-the-art on the Weibo dataset.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"This paper is supported by Guangdong Basic and Applied Basic Research Foundation, China (Grant No. 2021A1515012556).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} -{"ID":"nieto-pina-johansson-2016-embedding","url":"https:\/\/aclanthology.org\/W16-1401","title":"Embedding Senses for Efficient Graph-based Word Sense Disambiguation","abstract":"We propose a simple graph-based method for word sense disambiguation (WSD) where sense and context embeddings are constructed by applying the Skip-gram method to random walks over the sense graph. We used this method to build a WSD system for Swedish using the SALDO lexicon, and evaluated it on six different annotated test sets.
In all cases, our system was several orders of magnitude faster than a state-of-the-art PageRank-based system, while outperforming a random baseline soundly.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was funded by the Swedish Research Council under grant 2013-4944.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"kabbach-ribeyre-2016-valencer","url":"https:\/\/aclanthology.org\/C16-2033","title":"Valencer: an API to Query Valence Patterns in FrameNet","abstract":"This paper introduces Valencer: a RESTful API to search for annotated sentences matching a given combination of syntactic realizations of the arguments of a predicate-also called valence pattern-in the FrameNet database. The API takes as input an HTTP GET request specifying a valence pattern and outputs a list of exemplifying annotated sentences in JSON format. The API is designed to be modular and language-independent, and can therefore be easily integrated to other (NLP) server-side or client-side applications, as well as non-English FrameNet projects.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"li-etal-2021-future","url":"https:\/\/aclanthology.org\/2021.emnlp-main.422","title":"The Future is not One-dimensional: Complex Event Schema Induction by Graph Modeling for Event Prediction","abstract":"Event schemas encode knowledge of stereotypical structures of events and their connections. As events unfold, schemas are crucial to act as a scaffolding. Previous work on event schema induction focuses either on atomic events or linear temporal event sequences, ignoring the interplay between events via arguments and argument relations. We introduce a new concept of Temporal Complex Event Schema: a graph-based schema representation that encompasses events, arguments, temporal connections and argument relations. In addition, we propose a Temporal Event Graph Model that predicts event instances following the temporal complex event schema. To build and evaluate such schemas, we release a new schema learning corpus containing 6,399 documents accompanied with event graphs, and we have manually constructed gold-standard schemas. Intrinsic evaluations by schema matching and instance graph perplexity prove the superior quality of our probabilistic graph schema library compared to linear representations. Extrinsic evaluation on schema-guided future event prediction further demonstrates the predictive power of our event graph model, significantly outperforming human schemas and baselines by more than 23.8% on","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is based upon work supported by U.S. DARPA KAIROS Program Nos. FA8750-19-2-1004 and Air Force No. FA8650-17-C-7715. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government. The U.S.
Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"havrylov-etal-2019-cooperative","url":"https:\/\/aclanthology.org\/N19-1115","title":"Cooperative Learning of Disjoint Syntax and Semantics","abstract":"There has been considerable attention devoted to models that learn to jointly infer an expression's syntactic structure and its semantics. Yet, Nangia and Bowman (2018) has recently shown that the current best systems fail to learn the correct parsing strategy on mathematical expressions generated from a simple context-free grammar. In this work, we present a recursive model inspired by Choi et al. (2018) that reaches near perfect accuracy on this task. Our model is composed of two separated modules for syntax and semantics. They are cooperatively trained with standard continuous and discrete optimisation schemes. Our model does not require any linguistic structure for supervision, and its recursive nature allows for out-of-domain generalisation. Additionally, our approach performs competitively on several natural language tasks, such as Natural Language Inference and Sentiment Analysis.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Alexander Koller, Ivan Titov, Wilker Aziz and anonymous reviewers for their helpful suggestions and comments.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"bouma-1992-feature","url":"https:\/\/aclanthology.org\/J92-2003","title":"Feature Structures and Nonmonotonicity","abstract":"Unification-based grammar formalisms use feature structures to represent linguistic knowledge. The only operation defined on feature structures, unification, is information-combining and monotonic. Several authors have proposed nonmonotonic extensions of this formalism, as for a linguistically adequate description of certain natural language phenomena some kind of default reasoning seems essential. We argue that the effect of these proposals can be captured by means of one general, nonmonotonic, operation on feature structures, called default unification. We provide a formal semantics of the operation and demonstrate how some of the phenomena used to motivate nonmonotonic extensions of unification-based formalisms can be handled.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"A syntactic approach to default unification is presented in Bouma (1990) . The reactions on that paper made it clear to me that default unification should be defined not only for feature structure descriptions, but also for feature structures themselves. 
For helpful questions, suggestions, and comments on the material presented here, I would like to thank Bob Carpenter, John Nerbonne, audiences in Tilburg, Groningen, Tübingen, and Düsseldorf, and three anonymous CL reviewers.","year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"etchegoyhen-gete-2020-handle","url":"https:\/\/aclanthology.org\/2020.lrec-1.469","title":"Handle with Care: A Case Study in Comparable Corpora Exploitation for Neural Machine Translation","abstract":"We present the results of a case study in the exploitation of comparable corpora for Neural Machine Translation. A large comparable corpus for Basque-Spanish was prepared, on the basis of independently-produced news by the Basque public broadcaster, and we discuss the impact of various techniques to exploit the original data in order to determine optimal variants of the corpus. In particular, we show that filtering in terms of alignment thresholds and length-difference outliers has a significant impact on translation quality. The impact of tags identifying comparable data in the training datasets is also evaluated, with results indicating that this technique might be useful to help the models discriminate noisy information, in the form of informational imbalance between aligned sentences. The final corpus was prepared according to the experimental results and is made available to the scientific community for research purposes.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the Department of Economic Development and Competitiveness of the Basque Government, via the and projects. We wish to thank the Basque public broadcasting organisation for their support and their willingness to share the corpus with the community.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"lewis-etal-2017-integrating","url":"https:\/\/aclanthology.org\/W17-1607","title":"Integrating the Management of Personal Data Protection and Open Science with Research Ethics","abstract":"This paper examines the impact of the EU General Data Protection Regulation, in the context of the requirement from many research funders to provide open access research data, on current practices in Language Technology Research. We analyse the challenges that arise and the opportunities to address many of them through the use of existing open data practices for sharing language research data.
We discuss the impact of this also on current practice in academic and industrial research ethics.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"Supported by the ADAPT Centre for Digital Content Technology which is funded under the SFI Research Centres Programme (Grant 13\/RC\/2106) and is co-funded under the European Regional Development Fund.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} -{"ID":"bird-klein-1994-phonological","url":"https:\/\/aclanthology.org\/J94-3010","title":"Phonological Analysis in Typed Feature Systems","abstract":"Research on constraint-based grammar frameworks has focused on syntax and semantics largely to the exclusion of phonology. Likewise, current developments in phonology have generally ignored the technical and linguistic innovations available in these frameworks. In this paper we suggest some strategies for reuniting phonology and the rest of grammar in the context of a uniform constraint formalism. We explain why this is a desirable goal, and we present some conservative extensions to current practice in computational linguistics and in nonlinear phonology that we believe are necessary and sufficient for achieving this goal. We begin by exploring the application of typed feature logic to phonology and propose a system of prosodic types. Next, taking HPSG as an exemplar of the grammar frameworks we have in mind, we show how the phonology attribute can be enriched so that it can encode multi-tiered, hierarchical phonological representations. Finally, we exemplify the approach in some detail for the nonconcatenative morphology of Sierra Miwok and for schwa alternation in French. The approach taken in this paper lends itself particularly well to capturing phonological generalizations in terms of high-level prosodic constraints.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is funded by the U.K. Science and Engineering Research Council, under grant GR\/G-22084 Computational Phonology: A Constraint-Based Approach, and has been carried out as part of the research program","year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"brodda-1994-automatic","url":"https:\/\/aclanthology.org\/W93-0404","title":"Automatic Tagging of Turns in the London-Lund Corpus with Respect to Type of Turn","abstract":"In this paper a fully automatic tagging system for the dialogue texts in the London-Lund corpus, LLC, will be presented. The units that receive tags are \"turns\"; a collection of (not necessarily connected) tone units-the basic record in the corpus-that one speaker produces while being either the \"floor holder\" or the \"listener\"; the quoted concepts are defined below. The tags constitute a classification of each turn according to \"type of turn\". A little sample of tagged text appears in Appendix 1, and is commented on in the text.
The texts to be tagged will in the end comprise all the texts in the three subcorpora of LLC appearing in Svartvik & Quirk, \"A Corpus of English Conversation\" (=CEC); so far, about half of these texts have been tagged; now that the programs are working properly, the rest will hopefully be tagged before the end of this year.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"islamaj-dogan-etal-2017-biocreative","url":"https:\/\/aclanthology.org\/W17-2321","title":"BioCreative VI Precision Medicine Track: creating a training corpus for mining protein-protein interactions affected by mutations","abstract":"The Precision Medicine Track in BioCreative VI aims to bring together the BioNLP community for a novel challenge focused on mining the biomedical literature in search of mutations and protein-protein interactions (PPI). In order to support this track with an effective training dataset with limited curator time, the track organizers carefully reviewed PubMed articles from two different sources: curated public PPI databases, and the results of state-of-the-art public text mining tools. We detail here the data collection, manual review and annotation process and describe the characteristics of this training corpus. We also describe a corpus performance baseline. This analysis will provide useful information to developers and researchers for comparing and developing innovative text mining approaches for the BioCreative VI challenge and other Precision Medicine related applications.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"thomas-etal-1998-extracting","url":"https:\/\/aclanthology.org\/W98-1222","title":"Extracting Phoneme Pronunciation Information from Corpora","abstract":"We present a procedure that determines a set of phonemes possibly intended by a speaker from a recognized or uttered phone. This information will be used to allow a speech recognizer to take pronunciation into account or to consider input from a noisy source during lexical access. We investigate the hypothesis that different pronunciations of a phone occur within groups of sounds physically produced the same way, and use the Minimum Message Length principle to consider the effect of a phoneme's context on its pronunciation.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors thank Jon Oliver and Chris Wallace for their advice on MML encoding.","year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"mellish-1989-chart","url":"https:\/\/aclanthology.org\/P89-1013","title":"Some Chart-Based Techniques for Parsing Ill-Formed Input","abstract":"We argue for the usefulness of an active chart as the basis of a system that searches for the globally most plausible explanation of failure to syntactically parse a given input.
We suggest semantics-free, grammar-independent techniques for parsing inputs displaying simple kinds of ill-formedness and discuss the search issues involved.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was done in conjunction with the SERC-supported project GR\/D\/16130. I am currently supported by an SERC Advanced Fellowship.","year":1989,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"graca-2018-unbabel","url":"https:\/\/aclanthology.org\/W18-2103","title":"Unbabel: How to combine AI with the crowd to scale professional-quality translation","abstract":"Proceedings for AMTA 2018 Workshop: Translation Quality Estimation and Automatic Post-Editing, Boston, March 21, 2018","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"stenger-etal-2020-incomslav","url":"https:\/\/aclanthology.org\/2020.cllrd-1.6","title":"The INCOMSLAV Platform: Experimental Website with Integrated Methods for Measuring Linguistic Distances and Asymmetries in Receptive Multilingualism","abstract":"We report on a web-based resource for conducting intercomprehension experiments with native speakers of Slavic languages and present our methods for measuring linguistic distances and asymmetries in receptive multilingualism. Through a website which serves as a platform for online testing, a large number of participants with different linguistic backgrounds can be targeted. A statistical language model is used to measure information density and to gauge how language users master various degrees of (un)intelligibility. The key idea is that intercomprehension should be better when the model adapted for understanding the unknown language exhibits relatively low average distance and surprisal. All obtained intelligibility scores together with distance and asymmetry measures for the different language pairs and processing directions are made available as an integrated online resource in the form of a Slavic intercomprehension matrix (SlavMatrix).","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We wish to thank Hasan Alam for his support in the implementation of the SlavMatrix. This work has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -Project-ID 232722074 -SFB 1102.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"kaji-kitsuregawa-2007-building","url":"https:\/\/aclanthology.org\/D07-1115","title":"Building Lexicon for Sentiment Analysis from Massive Collection of HTML Documents","abstract":"Recognizing polarity requires a list of polar words and phrases. For the purpose of building such a lexicon automatically, a lot of studies have investigated (semi-)unsupervised methods of learning polarity of words and phrases.
In this paper, we explore the use of structural clues that can extract polar sentences from Japanese HTML documents, and build a lexicon from the extracted polar sentences. The key idea is to develop the structural clues so that they achieve extremely high precision at the cost of recall. In order to compensate for the low recall, we used a massive collection of HTML documents. Thus, we could prepare a sufficiently large corpus of polar sentences.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"bouscarrat-etal-2021-amu","url":"https:\/\/aclanthology.org\/2021.case-1.21","title":"AMU-EURANOVA at CASE 2021 Task 1: Assessing the stability of multilingual BERT","abstract":"This paper explains our participation in task 1 of the CASE 2021 shared task. This task is about multilingual event extraction from news. We focused on sub-task 4, event information extraction. This sub-task has a small training dataset and we fine-tuned a multilingual BERT to solve this sub-task. We studied the instability problem on the dataset and tried to mitigate it.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Damien Fourrure, Arnaud Jacques, Guillaume Stempfel and our anonymous reviewers for their helpful comments.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"zhu-etal-2020-crosswoz","url":"https:\/\/aclanthology.org\/2020.tacl-1.19","title":"CrossWOZ: A Large-Scale Chinese Cross-Domain Task-Oriented Dialogue Dataset","abstract":"To advance multi-domain (cross-domain) dialogue modeling as well as alleviate the shortage of Chinese task-oriented datasets, we propose CrossWOZ, the first large-scale Chinese Cross-Domain Wizard-of-Oz task-oriented dataset. It contains 6K dialogue sessions and 102K utterances for 5 domains, including hotel, restaurant, attraction, metro, and taxi. Moreover, the corpus contains rich annotation of dialogue states and dialogue acts on both user and system sides. About 60% of the dialogues have cross-domain user goals that favor inter-domain dependency and encourage natural transition across domains in conversation. We also provide a user simulator and several benchmark models for pipelined task-oriented dialogue systems, which will enable researchers to compare and evaluate their models on this corpus. The large size and rich annotation of CrossWOZ make it suitable to investigate a variety of tasks in cross-domain dialogue modeling, such as dialogue state tracking, policy learning, user simulation, etc.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the National Science Foundation of China (grant no. 61936010\/61876096) and the National Key R&D Program of China (grant no. 2018YFC0830200). We would like to thank THUNUS NExT JointLab for the support. We would also like to thank Ryuichi Takanobu and Fei Mi for their constructive comments.
We are grateful to our action editor, Bonnie Webber, and the anonymous reviewers for their valuable suggestions and feedback.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"liu-etal-2010-improving-statistical","url":"https:\/\/aclanthology.org\/P10-1085","title":"Improving Statistical Machine Translation with Monolingual Collocation","abstract":"This paper proposes to use monolingual collocations to improve Statistical Machine Translation (SMT). We make use of the collocation probabilities, which are estimated from monolingual corpora, in two aspects, namely improving word alignment for various kinds of SMT systems and improving phrase table for phrase-based SMT. The experimental results show that our method improves the performance of both word alignment and translation quality significantly. As compared to baseline systems, we achieve absolute improvements of 2.40 BLEU score on a phrase-based SMT system and 1.76 BLEU score on a parsing-based SMT system.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"nn-1983-center","url":"https:\/\/aclanthology.org\/J83-1006","title":"Center for the Study of Language and Information","abstract":"It's a pleasure to assume the editorship of The FINITE STRING, since it is such an important resource for our discipline and its community of researchers.\nThe success of The FINITE STRING depends on two factors:","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1983,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"mckeown-2005-text","url":"https:\/\/aclanthology.org\/U05-1002","title":"Text Summarization: News and Beyond","abstract":"Redundancy in large text collections, such as the web, creates both problems and opportunities for natural language systems. On the one hand, the presence of numerous sources conveying the same information causes difficulties for end users of search engines and news providers; they must read the same information over and over again. On the other hand, redundancy can be exploited to identify important and accurate information for applications such as summarization and question answering.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"nn-1990-coling","url":"https:\/\/aclanthology.org\/C90-1026","title":"COLING 90: Contents in Volumes 1-3","abstract":"The papers in each category are sorted alphabetically according to the name of the first author. The subdivision into volumes has no deep interpretation. Its sole purpose was to free Coling participants from carrying all three volumes around at all times. For convenient overview and retrieval, the titles of some papers listed below have been abridged by the editor. When quoted, each paper should preferably be cited with the heading given at the top of the paper.
No attempts have been made to normalize the name forms of the authors. Spelling and transcription have been retained as used by the authors.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1990,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"sager-1981-types","url":"https:\/\/aclanthology.org\/1981.tc-1.2","title":"Types of translation and text forms in the environment of machine translation (MT)","abstract":"Human translation consists of a number of separate steps which begin with the identification of the text type, the purpose and intention of the text, the subject area, etc. As there are types of texts there are also types of translation, which do not necessarily match directly. Since the human and machine translation processes differ so must the criteria which determine translatability. What criteria are relevant for MT and can they be derived from observations of the human effort?","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1981,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"molla-etal-2007-named","url":"https:\/\/aclanthology.org\/U07-1010","title":"Named Entity Recognition in Question Answering of Speech Data","abstract":"Question answering on speech transcripts (QAst) is a pilot track of the CLEF competition. In this paper we present our contribution to QAst, which is centred on a study of Named Entity (NE) recognition on speech transcripts, and how it impacts on the accuracy of the final question answering system. We have ported AFNER, the NE recogniser of the AnswerFinder question-answering project, to the set of answer types expected in the QAst track. AFNER uses a combination of regular expressions, lists of names (gazetteers) and machine learning to find NEs in the data. The machine learning component was trained on a development set of the AMI corpus. In the process we identified various problems with scalability of the system and the existence of errors of the extracted annotation, which led to relatively poor performance in general. Performance was nevertheless comparable with the state of the art, and the system was second (out of three participants) in one of the QAst subtasks.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"vitorio-etal-2017-investigating","url":"https:\/\/aclanthology.org\/W17-6607","title":"Investigating Opinion Mining through Language Varieties: a Case Study of Brazilian and European Portuguese tweets","abstract":"Portuguese is a pluricentric language comprising variants that differ from each other in different linguistic levels. It is generally agreed that applying text mining resources developed for one specific variant may produce a different result in another variant, but very little research has been done to measure this difference. This study presents an analysis of opinion mining application when dealing with the two main Portuguese language variants: Brazilian and European.
According to the experiments, it was observed that the differences between the Portuguese variants reflect on the application results. The use of a variant for training and another for testing brings a substantial performance drop, but the separation of the variants may not be recommended.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"xiao-etal-2007-empirical","url":"https:\/\/aclanthology.org\/O07-4002","title":"An Empirical Study of Non-Stationary Ngram Model and its Smoothing Techniques","abstract":"Recently many new techniques have been proposed for language modeling, such as ME, MEMM and CRF. However, the ngram model is still a staple in practical applications. It is well worth studying how to improve the performance of the ngram model. This paper enhances the traditional ngram model by relaxing the stationary hypothesis on the Markov chain and exploiting the word positional information. Such an assumption is made that the probability of the current word is determined not only by history words but also by the words' positions in the sentence. The non-stationary ngram model (NS ngram model) is proposed. Several related issues are discussed in detail, including the definition of the NS ngram model, the representation of the word positional information and the estimation of the conditional probability. In addition, three smoothing approaches are proposed to solve the data sparseness problem of the NS ngram model. Several smoothing algorithms are presented in each approach. In the experiments, the NS ngram model is evaluated on the pinyin-to-character conversion task which is the core technique of the Chinese text input method. Experimental results show that the NS ngram model outperforms the traditional ngram model significantly by the exploitation of the word positional information. In addition, the proposed smoothing techniques solve the data sparseness problem of the NS ngram model effectively with great error rate reduction.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This investigation was supported by the key project of the National Natural Science We especially thank the anonymous reviewers for their valuable suggestions and comments.","year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"zhang-etal-2012-learning","url":"https:\/\/aclanthology.org\/D12-1125","title":"Learning to Map into a Universal POS Tagset","abstract":"We present an automatic method for mapping language-specific part-of-speech tags to a set of universal tags. This unified representation plays a crucial role in cross-lingual syntactic transfer of multilingual dependency parsers. Until now, however, such conversion schemes have been created manually. Our central hypothesis is that a valid mapping yields POS annotations with coherent linguistic properties which are consistent across source and target languages. We encode this intuition in an objective function that captures a range of distributional and typological characteristics of the derived mapping.
Given the exponential size of the mapping space, we propose a novel method for optimizing over soft mappings, and use entropy regularization to drive those towards hard mappings. Our results demonstrate that automatically induced mappings rival the quality of their manually designed counterparts when evaluated in the context of multilingual parsing.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors acknowledge the support of the NSF (IIS-0835445), the MURI program (W911NF-10-1-0533) and the DARPA BOLT program. We thank Tommi Jaakkola, the members of the MIT NLP group and the ACL reviewers for their suggestions and comments. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"chronopoulou-etal-2020-lmu","url":"https:\/\/aclanthology.org\/2020.wmt-1.128","title":"The LMU Munich System for the WMT 2020 Unsupervised Machine Translation Shared Task","abstract":"This paper describes the submission of LMU Munich to the WMT 2020 unsupervised shared task, in two language directions, German\u2194Upper Sorbian. Our core unsupervised neural machine translation (UNMT) system follows the strategy of Chronopoulou et al. (2020), using a monolingual pretrained language generation model (on German) and finetuning it on both German and Upper Sorbian, before initializing a UNMT model, which is trained with online backtranslation. Pseudoparallel data obtained from an unsupervised statistical machine translation (USMT) system is used to fine-tune the UNMT model. We also apply BPE-Dropout to the low-resource (Upper Sorbian) data to obtain a more robust system. We additionally experiment with residual adapters and find them useful in the Upper Sorbian\u2192German direction. We explore sampling during backtranslation and curriculum learning to use SMT translations in a more principled way. Finally, we ensemble our best-performing systems and reach a BLEU score of 32.4 on German\u2192Upper Sorbian and 35.2 on Upper Sorbian\u2192German.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 640550) and by the German Research Foundation (DFG; grant FR 2829\/4-1). We would like to thank Jind\u0159ich Libovick\u00fd for fruitful discussions regarding the use of BPE-Dropout as a data augmentation technique.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"yoon-etal-2017-adullam","url":"https:\/\/aclanthology.org\/S17-2123","title":"Adullam at SemEval-2017 Task 4: Sentiment Analyzer Using Lexicon Integrated Convolutional Neural Networks with Attention","abstract":"We propose a sentiment analyzer for the prediction of document-level sentiments of English micro-blog messages from Twitter. The proposed method is based on lexicon integrated convolutional neural networks with attention (LCA). Its performance was evaluated using the datasets provided by SemEval competition (Task 4).
The proposed sentiment analyzer obtained an average F1 of 55.2%, an average recall of 58.9% and an accuracy of 61.4%.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2017R1A2B4003558).","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"xu-etal-2021-temporal","url":"https:\/\/aclanthology.org\/2021.naacl-main.202","title":"Temporal Knowledge Graph Completion using a Linear Temporal Regularizer and Multivector Embeddings","abstract":"Representation learning approaches for knowledge graphs have been mostly designed for static data. However, many knowledge graphs involve evolving data, e.g., the fact (The President of the United States is Barack Obama) is valid only from 2009 to 2017. This introduces important challenges for knowledge representation learning since the knowledge graphs change over time. In this paper, we present a novel time-aware knowledge graph embedding approach, TeLM, which performs 4th-order tensor factorization of a Temporal knowledge graph using a Linear temporal regularizer and Multivector embeddings. Moreover, we investigate the effect of the temporal dataset's time granularity on temporal knowledge graph completion. Experimental results demonstrate that our proposed models trained with the linear temporal regularizer achieve the state-of-the-art performances on link prediction over four well-established temporal knowledge graph completion benchmarks.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by the EC Horizon 2020 grant LAMBDA (GA no. 809965), the CLEOPATRA project (GA no. 812997) and the China Scholarship Council (CSC).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"bawden-etal-2020-findings","url":"https:\/\/aclanthology.org\/2020.wmt-1.76","title":"Findings of the WMT 2020 Biomedical Translation Shared Task: Basque, Italian and Russian as New Additional Languages","abstract":"Machine translation of scientific abstracts and terminologies has the potential to support health professionals and biomedical researchers in some of their activities. In the fifth edition of the WMT Biomedical Task, we addressed a total of eight language pairs. Five language pairs were previously addressed in past editions of the shared task, namely","label_nlp4sg":1,"task":null,"method":null,"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We would like to thank all participants in the challenges, and especially those who supported us for the manual evaluation. As a reference, one of the participating systems (UTS_NLP) was able to re-run their system over the real test set.
The performance drop was 0.08 for accuracy (from 0.73 to 0.65), and 0.05 for BLEU (from 0.71 to 0.66).","year":2020,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"sun-etal-2009-prediction","url":"https:\/\/aclanthology.org\/P09-2064","title":"Prediction of Thematic Rank for Structured Semantic Role Labeling","abstract":"In Semantic Role Labeling (SRL), it is reasonable to globally assign semantic roles due to strong dependencies among arguments. Some relations between arguments significantly characterize the structural information of argument structure. In this paper, we concentrate on thematic hierarchy that is a rank relation restricting syntactic realization of arguments. A loglinear model is proposed to accurately identify thematic rank between two arguments. To import structural information, we employ re-ranking technique to incorporate thematic rank relations into local semantic role classification results. Experimental results show that automatic prediction of thematic hierarchy can help semantic role classification.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by NSFC Project 60873156, 863 High Technology Project of China 2006AA01Z144 and the project of Toshiba (China) Co., Ltd. R&D Center.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"akbik-vollgraf-2018-zap","url":"https:\/\/aclanthology.org\/L18-1344","title":"ZAP: An Open-Source Multilingual Annotation Projection Framework","abstract":"Previous work leveraged annotation projection as a convenient method to automatically generate linguistic resources such as treebanks or propbanks for new languages. This approach automatically transfers linguistic annotation from a resource-rich source language (SL) to translations in a target language (TL). However, to the best of our knowledge, no publicly available framework for this approach currently exists, limiting researchers' ability to reproduce and compare experiments. In this paper, we present ZAP, the first open-source framework for annotation projection in parallel corpora. Our framework is Java-based and includes methods for preprocessing corpora, computing word-alignments between sentence pairs, transferring different layers of linguistic annotation, and visualization. The framework was designed for ease-of-use with lightweight APIs. We give an overview of ZAP and illustrate its usage.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their helpful comments. 
This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no 732328 (\"FashionBrain\").","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"gustafson-capkova-2001-interaction","url":"https:\/\/aclanthology.org\/W01-1704","title":"The interaction between local focusing structure and global intentions in spoken discourse","abstract":"The purpose of the study reported in this paper is to investigate how local focusing structure, analysed in terms of Centering Theory (Grosz, Joshi & Weinstein, 1995), and global discourse structure, analysed in terms of discourse segments and discourse segment purposes (Grosz & Sidner, 1986), interact. Swedish dialogue was analysed according to Centering Theory and Grosz and Sidner's (1986) discourse theory. The results indicate an interaction between locally implicit elements and global intentions. Indications concerning the varying intonation of discourse markers were also found.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2001,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"chen-etal-2020-ferryman","url":"https:\/\/aclanthology.org\/2020.semeval-1.35","title":"Ferryman at SemEval-2020 Task 3: Bert with TFIDF-Weighting for Predicting the Effect of Context in Word Similarity","abstract":"Word similarity is widely used in machine learning applications like search engines and recommendation. Measuring the changing meaning of the same word between two different sentences is not only a way to handle complex features in word usage (such as sentence syntax and semantics), but also an important method for different word polysemy modeling. In this paper, we present the methodology proposed by team Ferryman. Our system is based on the Bidirectional Encoder Representations from Transformers (BERT) model combined with term frequency-inverse document frequency (TF-IDF), applying the method on the provided datasets called CoSimLex, which covers four different languages including English, Croatian, Slovene, and Finnish. Our team Ferryman wins the first position for the English task and the second position for Finnish in subtask 1.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"davoodi-etal-2022-modeling","url":"https:\/\/aclanthology.org\/2022.acl-long.22","title":"Modeling U.S. State-Level Policies by Extracting Winners and Losers from Legislative Texts","abstract":"Decisions on state-level policies have a deep effect on many aspects of our everyday life, such as health-care and education access. However, there is little understanding of how these policies and decisions are being formed in the legislative process. We take a data-driven approach by decoding the impact of legislation on relevant stakeholders (e.g., teachers in education bills) to understand legislators' decision-making process and votes. We build a new dataset for multiple US states that interconnects multiple sources of data including bills, stakeholders, legislators, and money donors.
Next, we develop a textual graph-based model to embed and analyze state bills. Our model predicts winners\/losers of bills and then utilizes them to better determine the legislative body's vote breakdown according to demographic\/ideological criteria, e.g., gender.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"We would like to acknowledge the members of the PurdueNLP lab. We also thank the reviewers for their constructive feedback. The funding for the use of mTurk was part of the Purdue University Integrative Data Science Initiative: Data Science for Ethics, Society, and Policy Focus Area. This work was partially supported by an NSF CAREER award IIS-2048001.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} -{"ID":"egan-2012-machine","url":"https:\/\/aclanthology.org\/2012.amta-government.5","title":"Machine Translation Revisited: An Operational Reality Check","abstract":null,"label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"banchs-li-2012-iris","url":"https:\/\/aclanthology.org\/P12-3007","title":"IRIS: a Chat-oriented Dialogue System based on the Vector Space Model","abstract":"This system demonstration paper presents IRIS (Informal Response Interactive System), a chat-oriented dialogue system based on the vector space model framework. The system belongs to the class of example-based dialogue systems and builds its chat capabilities on a dual search strategy over a large collection of dialogue samples. Additional strategies allowing for system adaptation and learning implemented over the same vector model space framework are also described and discussed.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank the Institute for Infocomm Research for its support and permission to publish this work.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"habernal-gurevych-2016-argument","url":"https:\/\/aclanthology.org\/P16-1150","title":"Which argument is more convincing? Analyzing and predicting convincingness of Web arguments using bidirectional LSTM","abstract":"We propose a new task in the field of computational argumentation in which we investigate qualitative properties of Web arguments, namely their convincingness. We cast the problem as relation classification, where a pair of arguments having the same stance to the same prompt is judged. We annotate a large dataset of 16k pairs of arguments over 32 topics and investigate whether the relation \"A is more convincing than B\" exhibits properties of total ordering; these findings are used as global constraints for cleaning the crowdsourced data. We propose two tasks: (1) predicting which argument from an argument pair is more convincing and (2) ranking all arguments to the topic based on their convincingness. We experiment with feature-rich SVM and bidirectional LSTM and obtain 0.76-0.78 accuracy and 0.35-0.40 Spearman's correlation in a cross-topic evaluation.
We release the newly created corpus UKPConvArg1 and the experimental software under open licenses.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been supported by the Volkswagen Foundation as part of the Lichtenberg-Professorship Program under grant No. I\/82806, by the German Institute for Educational Research (DIPF), by the German Research Foundation (DFG) via the German-Israeli Project Cooperation (DIP, grant DA 1600\/1-1), by the GRK 1994 AIPHES (DFG), and by Amazon Web Services in Education Grant award. Lastly, we would like to thank the anonymous reviewers for their valuable feedback.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"zhou-etal-2021-commonsense","url":"https:\/\/aclanthology.org\/2021.sigdial-1.13","title":"Commonsense-Focused Dialogues for Response Generation: An Empirical Study","abstract":"Smooth and effective communication requires the ability to perform latent or explicit commonsense inference. Prior commonsense reasoning benchmarks (such as SocialIQA and CommonsenseQA) mainly focus on the discriminative task of choosing the right answer from a set of candidates, and do not involve interactive language generation as in dialogue. Moreover, existing dialogue datasets do not explicitly focus on exhibiting commonsense as a facet. In this paper, we present an empirical study of commonsense in dialogue response generation. We first auto-extract commonsensical dialogues from existing dialogue datasets by leveraging ConceptNet, a commonsense knowledge graph. Furthermore, building on social contexts\/situations in SocialIQA, we collect a new dialogue dataset with 25K dialogues aimed at exhibiting social commonsense in an interactive setting. We evaluate response generation models trained using these datasets and find that models trained on both extracted and our collected data produce responses that consistently exhibit more commonsense than baselines. Finally we propose an approach for automatic evaluation of commonsense that relies on features derived from ConceptNet and pretrained language and dialog models, and show reasonable correlation with human evaluation of responses' commonsense quality.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"perera-etal-2018-building","url":"https:\/\/aclanthology.org\/W18-1402","title":"Building and Learning Structures in a Situated Blocks World Through Deep Language Understanding","abstract":"We demonstrate a system for understanding natural language utterances for structure description and placement in a situated blocks world context. By relying on a rich, domain-specific adaptation of a generic ontology and a logical form structure produced by a semantic parser, we obviate the need for an intermediate, domain-specific representation and can produce a reasoner that grounds and reasons over concepts and constraints with real-valued data. This linguistic base enables more flexibility in interpreting natural language expressions invoking intrinsic concepts and features of structures and space.
We demonstrate some of the capabilities of a system grounded in deep language understanding and present initial results in a structure learning task.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by the DARPA CwC program and the DARPA Big Mechanism program under ARO contract W911NF-14-1-0391. Special thanks to SRI for their work in developing the physical apparatus, including block detection and avatar software.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"mclaughlin-schwall-1998-horses","url":"https:\/\/aclanthology.org\/1998.tc-1.10","title":"Horses for Courses: Changing User Acceptance of Machine Translation","abstract":"The key to Machine Translation becoming a commonplace technology is user acceptance. Unfortunately, the decision whether or not to use Machine Translation is often made on the basis of output quality alone. As we all know, Machine Translation output is far from perfect, and its quality depends on a wide range of factors related to individual users, the environment in which they work, and the text types they work with-factors which are difficult and arduous to evaluate. Although output quality obviously plays an important role, it is not the only factor in user acceptance-and for some potential users it may not even be the most important one. User perception of Machine Translation is a decisive issue, and MT must be seen-not as a universal translation solution, but as one of several potential tools-not in isolation, but within the context of the user's work processes. This has important implications for Machine Translation vendors. It means that Machine Translation shouldn't be offered in isolation. Depending on the product\/target group, it must be combined with other tools and\/or combined with other services (postediting\/human translation). Products must also be scaled to the user's purse and environment, the entry threshold must be low and products must be upgradeable as the user's needs change. It must be easy to access and use Machine Translation: complicated access to Machine Translation and arduous preprocessing activities will make it a non-starter for many people. What's more, Machine Translation must be available when and where the user needs it, whatever the application.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"rosset-etal-2013-automatic","url":"https:\/\/aclanthology.org\/W13-2321","title":"Automatic Named Entity Pre-annotation for Out-of-domain Human Annotation","abstract":"Automatic pre-annotation is often used to improve human annotation speed and accuracy. We address here out-of-domain named entity annotation, and examine whether automatic pre-annotation is still beneficial in this setting. Our study design includes two different corpora, three pre-annotation schemes linked to two annotation levels, both expert and novice annotators, a questionnaire-based subjective assessment and a corpus-based quantitative assessment. 
We observe that pre-annotation helps in all cases, both for speed and for accuracy, and that the subjective assessment of the annotators does not always match the actual benefits measured in the annotation outcome.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been partially funded by OSEO under the Quaero program and by the French ANR VERA project.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"jain-lapata-2021-memory","url":"https:\/\/aclanthology.org\/2021.tacl-1.71","title":"Memory-Based Semantic Parsing","abstract":"We present a memory-based model for context-dependent semantic parsing. Previous approaches focus on enabling the decoder to copy or modify the parse from the previous utterance, assuming there is a dependency between the current and previous parses. In this work, we propose to represent contextual information using an external memory. We learn a context memory controller that manages the memory by maintaining the cumulative meaning of sequential user utterances. We evaluate our approach on three semantic parsing benchmarks. Experimental results show that our model can better process context-dependent information and demonstrates improved performance without using task-specific decoders.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Mike Lewis, Miguel Ballesteros, and our anonymous reviewers for their feedback. We are grateful to Alex Lascarides and Ivan Titov for their comments on the paper. This work was supported in part by Huawei and the UKRI Centre for Doctoral Training in Natural Language Processing (grant EP\/S022481\/1). Lapata acknowledges the support of the European Research Council (award number 681760, ''Translating Multiple Modalities into Text'').","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"de-marneffe-etal-2010-good","url":"https:\/\/aclanthology.org\/P10-1018","title":"``Was It Good? It Was Provocative.'' Learning the Meaning of Scalar Adjectives","abstract":"Texts and dialogues often express information indirectly. For instance, speakers' answers to yes\/no questions do not always straightforwardly convey a 'yes' or 'no' answer. The intended reply is clear in some cases (Was it good? It was great!) but uncertain in others (Was it acceptable? It was unprecedented.). In this paper, we present methods for interpreting the answers to questions like these which involve scalar modifiers. We show how to ground scalar modifier meaning based on data collected from the Web. We learn scales between modifiers and infer the extent to which a given answer conveys 'yes' or 'no'. To evaluate the methods, we collected examples of question-answer pairs involving scalar modifiers from CNN transcripts and the Dialog Act corpus and use response distributions from Mechanical Turk workers to assess the degree to which each answer conveys 'yes' or 'no'.
Our experimental results closely match the Turkers' response data, demonstrating that meanings can be learned from Web data and that such meanings can drive pragmatic inference.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This paper is based on work funded in part by ONR award N00014-10-1-0109 and ARO MURI award 548106, as well as by the Air Force Research Laboratory (AFRL) under prime contract no. FA8750-09-C-0181. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the Air Force Research Laboratory (AFRL), ARO or ONR.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"miyazawa-etal-1999-study","url":"https:\/\/aclanthology.org\/1999.mtsummit-1.43","title":"Study on evaluation of WWW MT systems","abstract":"Compared with off-line machine translation (MT), MT for the WWW has more evaluation factors such as translation accuracy of text, interpretation of HTML tags, consistency with various protocols and browsers, and translation speed for net surfing. Moreover, the speed of technical innovation and its practical application is fast, including the appearance of new protocols. Improvement of MT software for the WWW will enable the sharing of information from around the world and make a great deal of contribution to mankind. Despite the importance of general evaluation studies on MT software for the WWW, it appears that such studies have not yet been conducted. Since MT for the WWW will be a critical factor for future international communication, its study and evaluation is an important theme. This study aims at standardized evaluation of MT for the WWW, and suggests an evaluation method focusing on unique aspects of the WWW independent of text. This evaluation method has a wide range of aptitude without depending on specific languages. Twenty-four items specific to the WWW were actually evaluated with regard to six MT software systems for the WWW. This study clarified various issues which should be improved in the future regarding MT software for the WWW and issues on evaluation technology of MT on the Internet.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1999,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"xiao-etal-2021-end","url":"https:\/\/aclanthology.org\/2021.emnlp-main.280","title":"End-to-End Conversational Search for Online Shopping with Utterance Transfer","abstract":"Successful conversational search systems can present a natural, adaptive and interactive shopping experience for online shopping customers. However, building such systems from scratch faces real-world challenges from both imperfect product schema\/knowledge and lack of training dialog data. In this work we first propose ConvSearch, an end-to-end conversational search system that deeply combines the dialog system with search. It leverages the text profile to retrieve products, which is more robust against imperfect product schema\/knowledge compared with using product attributes alone.
We then address the lack-of-data challenge by proposing an utterance transfer approach that generates dialogue utterances by using existing dialogs from other domains, and by leveraging search behavior data from an e-commerce retailer. With utterance transfer, we introduce a new conversational search dataset for online shopping. Experiments show that our utterance transfer method can significantly improve the availability of training dialogue data without crowd-sourcing, and that the conversational search system significantly outperforms the best tested baseline.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"nakazawa-kurohashi-2009-statistical","url":"https:\/\/aclanthology.org\/W09-2302","title":"Statistical Phrase Alignment Model Using Dependency Relation Probability","abstract":"When aligning very different language pairs, the most important needs are the use of structural information and the capability of generating one-to-many or many-to-many correspondences. In this paper, we propose a novel phrase alignment method which models word or phrase dependency relations in dependency tree structures of source and target languages. The dependency relation model is a kind of tree-based reordering model, and can handle non-local reorderings which sequential word-based models often cannot handle properly. The model is also capable of estimating phrase correspondences automatically without any heuristic rules. Experimental results of alignment show that our model could achieve F-measure 1.7 points higher than the conventional word alignment model with symmetrization algorithms.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"wang-etal-2019-vizseq","url":"https:\/\/aclanthology.org\/D19-3043","title":"VizSeq: a visual analysis toolkit for text generation tasks","abstract":"Automatic evaluation of text generation tasks (e.g. machine translation, text summarization, image captioning and video description) usually relies heavily on task-specific metrics, such as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004). They, however, are abstract numbers and are not perfectly aligned with human assessment. This suggests inspecting detailed examples as a complement to identify system error patterns. In this paper, we present VizSeq, a visual analysis toolkit for instance-level and corpus-level system evaluation on a wide variety of text generation tasks. It supports multimodal sources and multiple text references, providing visualization in Jupyter notebook or a web app interface. It can be used locally or deployed onto public servers for centralized data hosting and benchmarking. It covers most common n-gram based metrics accelerated with multiprocessing, and also provides the latest embedding-based metrics such as BERTScore (Zhang et al., 2019).","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their comments.
We also thank Ann Lee and Pratik Ringshia for helpful discussions on this project.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"kraft-etal-2016-embedding","url":"https:\/\/aclanthology.org\/D16-1221","title":"An Embedding Model for Predicting Roll-Call Votes","abstract":"We develop a novel embedding-based model for predicting legislative roll-call votes from bill text. The model introduces multidimensional ideal vectors for legislators as an alternative to single dimensional ideal point models for quantitatively analyzing roll-call data. These vectors are learned to correspond with pre-trained word embeddings which allows us to analyze which features in a bill text are most predictive of political support. Our model is quite simple, while at the same time allowing us to successfully predict legislator votes on specific bills with higher accuracy than past methods.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} -{"ID":"charbonnier-wartena-2018-using","url":"https:\/\/aclanthology.org\/C18-1221","title":"Using Word Embeddings for Unsupervised Acronym Disambiguation","abstract":"Scientific papers from all disciplines contain many abbreviations and acronyms. In many cases these acronyms are ambiguous. We present a method to choose the contextually correct definition of an acronym that does not require training for each acronym and thus can be applied to a large number of different acronyms with only a few instances. We constructed a set of 19,954 examples of 4,365 ambiguous acronyms from image captions in scientific papers along with their contextually correct definition from different domains. We learn word embeddings for all words in the corpus and compare the averaged context vector of the words in the expansion of an acronym with the weighted average vector of the words in the context of the acronym. We show that this method clearly outperforms (classical) cosine similarity. Furthermore, we show that word embeddings learned from a 1 billion word corpus of scientific texts outperform word embeddings learned from much larger general corpora.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"ursini-akagi-2011-interpretation","url":"https:\/\/aclanthology.org\/U11-1018","title":"The Interpretation of Plural Pronouns in Discourse: The Case of They","abstract":"This paper presents an experimental study on the interpretation of the plural pronoun they in discourse, and offers an answer to two questions. The first question is whether the anaphoric interpretation of they corresponds to that of its antecedent NP (maximal interpretation), or to that of the \"whole\" previous sentence (reference interpretation). The second question is whether speakers may access only one interpretation or both, although at different \"moments\" in discourse. The answers to these questions suggest that an accurate logical and psychological model of anaphora resolution includes two principles.
A first principle finds a \"default\" interpretation; a second principle determines when the \"alternative\" interpretation can (and must) be accessed.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"brooke-etal-2017-unsupervised","url":"https:\/\/aclanthology.org\/Q17-1032","title":"Unsupervised Acquisition of Comprehensive Multiword Lexicons using Competition in an n-gram Lattice","abstract":"We present a new model for acquiring comprehensive multiword lexicons from large corpora based on competition among n-gram candidates. In contrast to the standard approach of simple ranking by association measure, in our model n-grams are arranged in a lattice structure based on subsumption and overlap relationships, with nodes inhibiting other nodes in their vicinity when they are selected as a lexical item. We show how the configuration of such a lattice can be optimized tractably, and demonstrate using annotations of sampled n-grams that our method consistently outperforms alternatives by at least 0.05 F-score across several corpora and languages.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The second author was supported by an Endeavour Research Fellowship from the Australian Government, and in part by the Croatian Science Foundation under project UIP-2014-09-7312. We would also like to thank our English, Japanese, and Croatian annotators, and the TACL reviewers and editors for helping shape this paper into its current form.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"stathopoulos-teufel-2015-retrieval","url":"https:\/\/aclanthology.org\/P15-2055","title":"Retrieval of Research-level Mathematical Information Needs: A Test Collection and Technical Terminology Experiment","abstract":"In this paper, we present a test collection for mathematical information retrieval composed of real-life, research-level mathematical information needs. Topics and relevance judgements have been procured from the on-line collaboration website MathOverflow by delegating domain-specific decisions to experts on-line. With our test collection, we construct a baseline using Lucene's vector-space model implementation and conduct an experiment to investigate how prior extraction of technical terms from mathematical text can affect retrieval efficiency. We show that by boosting the importance of technical terms, statistically significant improvements in retrieval performance can be obtained over the baseline.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"collins-2002-ranking","url":"https:\/\/aclanthology.org\/P02-1062","title":"Ranking Algorithms for Named Entity Extraction: Boosting and the Voted Perceptron","abstract":"This paper describes algorithms which rerank the top N hypotheses from a maximum-entropy tagger, the application being the recovery of named-entity boundaries in a corpus of web data.
The first approach uses a boosting algorithm for ranking problems. The second approach uses the voted perceptron algorithm. Both algorithms give comparable, significant improvements over the maximum-entropy baseline. The voted perceptron algorithm can be considerably more efficient to train, at some cost in computation on test examples.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Many thanks to Jack Minisi for annotating the named-entity data used in the experiments. Thanks also to Nigel Duffy, Rob Schapire and Yoram Singer for several useful discussions.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"thadani-mckeown-2011-towards","url":"https:\/\/aclanthology.org\/W11-1606","title":"Towards Strict Sentence Intersection: Decoding and Evaluation Strategies","abstract":"We examine the task of strict sentence intersection: a variant of sentence fusion in which the output must only contain the information present in all input sentences and nothing more. Our proposed approach involves alignment and generalization over the input sentences to produce a generation lattice; we then compare a standard search-based approach for decoding an intersection from this lattice to an integer linear program that preserves aligned content while minimizing the disfluency in interleaving text segments. In addition, we introduce novel evaluation strategies for intersection problems that employ entailment-style judgments for determining the validity of system-generated intersections. Our experiments show that the proposed models produce valid intersections a majority of the time and that the segmented decoder yields advantages over the search-based approach.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors are grateful to the anonymous reviewers for their helpful feedback. This material is based on research supported in part by the U.S. National Science Foundation (NSF) under IIS-05-34871. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"zhao-etal-2005-bilingual","url":"https:\/\/aclanthology.org\/W05-0804","title":"Bilingual Word Spectral Clustering for Statistical Machine Translation","abstract":"In this paper, a variant of a spectral clustering algorithm is proposed for bilingual word clustering. The proposed algorithm generates the two sets of clusters for both languages efficiently with high semantic correlation within monolingual clusters, and high translation quality across the clusters between two languages. Each cluster level translation is considered as a bilingual concept, which generalizes words in bilingual clusters. This scheme improves the robustness for statistical machine translation models. Two HMM-based translation models are tested to use these bilingual clusters.
Improved perplexity, word alignment accuracy, and translation quality are observed in our experiments.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"lietard-etal-2021-language","url":"https:\/\/aclanthology.org\/2021.blackboxnlp-1.40","title":"Do Language Models Know the Way to Rome?","abstract":"The global geometry of language models is important for a range of applications, but language model probes tend to evaluate rather local relations, for which ground truths are easily obtained. In this paper we exploit the fact that in geography, ground truths are available beyond local relations. In a series of experiments, we evaluate the extent to which language model representations of city and country names are isomorphic to real-world geography, e.g., if you tell a language model where Paris and Berlin are, does it know the way to Rome? We find that language models generally encode limited geographic information, but with larger models performing the best, suggesting that geographic knowledge can be induced from higher-order cooccurrence statistics.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers. Mostafa Abdou was funded by a Google Focused Research Award. We used data created by MaxMind, available from http:\/\/www.maxmind.com\/.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"dcosta-etal-2020-multiple","url":"https:\/\/aclanthology.org\/2020.clinicalnlp-1.2","title":"Multiple Sclerosis Severity Classification From Clinical Text","abstract":"Multiple Sclerosis (MS) is a chronic, inflammatory and degenerative neurological disease, which is monitored by a specialist using the Expanded Disability Status Scale (EDSS) and recorded in unstructured text in the form of a neurology consult note. An EDSS measurement contains an overall 'EDSS' score and several functional subscores. Typically, expert knowledge is required to interpret consult notes and generate these scores. Previous approaches used limited context length Word2Vec embeddings and keyword searches to predict scores given a consult note, but often failed when scores were not explicitly stated. In this work, we present MS-BERT, the first publicly available transformer model trained on real clinical data other than MIMIC. Next, we present MSBC, a classifier that applies MS-BERT to generate embeddings and predict EDSS and functional subscores. Lastly, we explore combining MSBC with other models through the use of Snorkel to generate scores for unlabelled consult notes. MSBC achieves state-of-the-art performance on all metrics and prediction tasks and outperforms the models generated from the Snorkel ensemble. We improve Macro-F1 by 0.12 (to 0.88) for predicting EDSS and on average by 0.29 (to 0.63) for predicting functional subscores over previous Word2Vec CNN and rule-based approaches.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We would like to thank the researchers and staff at the Data Science and Advanced Analytics (DSAA) team at St. 
Michael's Hospital, for providing consistent support and guidance throughout this project. We would also like to thank Dr. Marzyeh Ghassemi, and Taylor Killan for providing us the opportunity to work on this exciting project. Lastly, we would like to thank Dr. Tony Antoniou and Dr. Jiwon Oh from the MS clinic at St. Michael's Hospital for their support on the neurological examination notes.","year":2020,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"mishra-etal-2019-modular","url":"https:\/\/aclanthology.org\/D19-1636","title":"A Modular Architecture for Unsupervised Sarcasm Generation","abstract":"In this paper, we propose a novel framework for sarcasm generation; the system takes a literal negative opinion as input and translates it into a sarcastic version. Our framework does not require any paired data for training. Sarcasm emanates from context-incongruity which becomes apparent as the sentence unfolds. Our framework introduces incongruity into the literal input version through modules that: (a) filter factual content from the input opinion, (b) retrieve incongruous phrases related to the filtered facts and (c) synthesize sarcastic text from the filtered and incongruous phrases. The framework employs reinforced neural sequence to sequence learning and information retrieval and is trained only using unlabeled non-sarcastic and sarcastic opinions. Since no labeled dataset exists for such a task, for evaluation, we manually prepare a benchmark dataset containing literal opinions and their sarcastic paraphrases. Qualitative and quantitative performance analyses on the data reveal our system's superiority over baselines, built using known unsupervised statistical and neural machine translation and style transfer techniques.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"cao-etal-2020-unsupervised-dual","url":"https:\/\/aclanthology.org\/2020.acl-main.608","title":"Unsupervised Dual Paraphrasing for Two-stage Semantic Parsing","abstract":"One daunting problem for semantic parsing is the scarcity of annotation. Aiming to reduce nontrivial human labor, we propose a two-stage semantic parsing framework, where the first stage utilizes an unsupervised paraphrase model to convert an unlabeled natural language utterance into the canonical utterance. The downstream naive semantic parser accepts the intermediate output and returns the target logical form. Furthermore, the entire training process is split into two phases: pre-training and cycle learning. Three tailored self-supervised tasks are introduced throughout training to activate the unsupervised paraphrase model. Experimental results on benchmarks OVERNIGHT and GEOGRANNO demonstrate that our framework is effective and compatible with supervised training.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their thoughtful comments. This work has been supported by the National Key Research and Development Program of China (Grant No.
2017YFB1002102) and Shanghai Jiao Tong University Scientific and Technological Innovation Funds (YG2020YQ01).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"zhang-etal-2016-transition-based","url":"https:\/\/aclanthology.org\/P16-1040","title":"Transition-Based Neural Word Segmentation","abstract":"Character-based and word-based methods are two main types of statistical models for Chinese word segmentation, the former exploiting sequence labeling models over characters and the latter typically exploiting a transition-based model, with the advantages that word-level features can be easily utilized. Neural models have been exploited for character-based Chinese word segmentation, giving high accuracies by making use of external character embeddings, yet requiring less feature engineering. In this paper, we study a neural model for word-based Chinese word segmentation, by replacing the manuallydesigned discrete features with neural features in a word-based segmentation framework. Experimental results demonstrate that word features lead to comparable performances to the best systems in the literature, and a further combination of discrete and neural features gives top accuracies.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers, Yijia Liu and Hai Zhao for their constructive comments, which help to improve the final paper. This work is supported by National Natural Science Foundation of China (NSFC) under grant 61170148, Natural Science Foundation of Heilongjiang Province (China) under grant No.F2016036, the Singapore Ministry of Education (MOE) AcRF Tier 2 grant T2MOE201301 and SRG ISTD 2012 038 from Singapore University of Technology and Design. Yue Zhang is the corresponding author.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"nagao-1995-future","url":"https:\/\/aclanthology.org\/1995.mtsummit-1.33","title":"What have we to do for the future of MT systems?","abstract":"translations because delicate translations are difficult by grammatical rules. 2. Choice of words and phrases in utterances is strongly influenced by such factors as the relation between the speaker and hearer, context, situation, cultural background and so on. All these factors must be listed up and their functions are to be clarified. 3. We have to go from syntax directed MT to semantic\/context dependent MT. Anaphora, ellipsis, topic\/focus, old\/new information problems should be studied. 4. Completely new MT algorithms must be developed by utilizing the factors mentioned above. 5. MT software must be available on PCs and word processors.
Those people who use MT systems must be able to exchange their experiences and know-how through computer network conversations.\nAn open forum on MT must be established on a computer network where everybody can make contributions of any kind.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1995,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"chalapathy-etal-2016-investigation","url":"https:\/\/aclanthology.org\/W16-6101","title":"An Investigation of Recurrent Neural Architectures for Drug Name Recognition","abstract":"Drug name recognition (DNR) is an essential step in the Pharmacovigilance (PV) pipeline. DNR aims to find drug name mentions in unstructured biomedical texts and classify them into predefined categories. State-of-the-art DNR approaches heavily rely on hand-crafted features and domain-specific resources which are difficult to collect and tune. For this reason, this paper investigates the effectiveness of contemporary recurrent neural architectures (the Elman and Jordan networks and the bidirectional LSTM with CRF decoding) at performing DNR straight from the text. The experimental results achieved on the authoritative SemEval-2013 Task 9.1 benchmarks show that the bidirectional LSTM-CRF ranks closely to highly-dedicated, hand-crafted systems.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"garcia-diaz-etal-2022-umuteam","url":"https:\/\/aclanthology.org\/2022.dravidianlangtech-1.6","title":"UMUTeam@TamilNLP-ACL2022: Emotional Analysis in Tamil","abstract":"These working notes summarise the participation of the UMUTeam on the TamilNLP (ACL 2022) shared task concerning emotion analysis in Tamil. We participated in the two multiclassification challenges proposed with a neural network that combines linguistic features with different feature sets based on contextual and non-contextual sentence embeddings. Our proposal achieved the 1st result for the second subtask, with an f1-score of 15.1% discerning among 30 different emotions. However, our results for the first subtask were not recorded in the official leader board. Accordingly, we report our results for this subtask with the validation split, reaching a macro f1-score of 32.360%.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is part of the research project LaTe4PSP (PID2019-107652RB-I00) funded by MCIN\/AEI\/10.13039\/501100011033. This work is also part of the research project PDC2021-121112-I00 funded by MCIN\/AEI\/10.13039\/501100011033 and by the European Union NextGenerationEU\/PRTR.
In addition, Jos\u00e9 Antonio Garc\u00eda-D\u00edaz is supported by Banco Santander and the University of Murcia through the Doctorado Industrial programme.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"chernyshevich-2014-ihs","url":"https:\/\/aclanthology.org\/S14-2051","title":"IHS R\\&D Belarus: Cross-domain extraction of product features using CRF","abstract":"This paper describes the aspect extraction system submitted by IHS R&D Belarus team at the SemEval-2014 shared task related to Aspect-Based Sentiment Analysis. Our system is based on IHS Goldfire linguistic processor and uses a rich set of lexical, syntactic and statistical features in a CRF model. We participated in two domain-specific tasks (restaurants and laptops) with the same system trained on a mixed corpus of reviews. Among submissions of constrained systems from 28 teams, our submission was ranked first in the laptop domain and fourth in the restaurant domain for the subtask A devoted to aspect extraction.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"liu-etal-2018-narrative","url":"https:\/\/aclanthology.org\/P18-2045","title":"Narrative Modeling with Memory Chains and Semantic Supervision","abstract":"Story comprehension requires a deep semantic understanding of the narrative, making it a challenging task. Inspired by previous studies on ROC Story Cloze Test, we propose a novel method, tracking various semantic aspects with external neural memory chains while encouraging each to focus on a particular semantic aspect. Evaluated on the task of story ending prediction, our model demonstrates superior performance to a collection of competitive baselines, setting a new state of the art. Code available at http:\/\/github.com\/liufly\/narrative-modeling. Context: Sam loved his old belt. He matched it with everything. Unfortunately he gained too much weight. It became too small. Coherent Ending: Sam went on a diet. Incoherent Ending: Sam was happy.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their valuable feedback, and gratefully acknowledge the support of Australian Government Research Training Program Scholarship. This work was also supported in part by the Australian Research Council.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"moeller-etal-2021-pos","url":"https:\/\/aclanthology.org\/2021.acl-long.78","title":"To POS Tag or Not to POS Tag: The Impact of POS Tags on Morphological Learning in Low-Resource Settings","abstract":"Part-of-Speech (POS) tags routinely appear as features in morphological tasks. POS taggers are often one of the first NLP tools developed for low-resource languages. However, as NLP expands to new languages it cannot assume that POS tags will be available to train a POS tagger. This paper empirically examines the impact of POS tags on two morphological tasks with the Transformer architecture.
Each task is run twice, once with and once without POS tags, on otherwise identical data from ten well-described languages and five underdocumented languages. We find that the presence or absence of POS tags does not have a significant bearing on the performance of either task. In joint segmentation and glossing, the largest average difference is a 0.09 improvement in F1-scores by removing POS tags. In reinflection, the greatest average difference is 1.2% in accuracy for published data and 5% for unpublished data. These results are indicators that NLP and documentary linguistics may benefit each other even when a POS tag set does not yet exist for a language.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"navigli-2006-meaningful","url":"https:\/\/aclanthology.org\/P06-1014","title":"Meaningful Clustering of Senses Helps Boost Word Sense Disambiguation Performance","abstract":"Fine-grained sense distinctions are one of the major obstacles to successful Word Sense Disambiguation. In this paper, we present a method for reducing the granularity of the WordNet sense inventory based on the mapping to a manually crafted dictionary encoding sense hierarchies, namely the Oxford Dictionary of English. We assess the quality of the mapping and the induced clustering, and evaluate the performance of coarse WSD systems in the Senseval-3 English all-words task.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is partially funded by the Interop NoE (508011), 6th European Union FP. We wish to thank Paola Velardi, Mirella Lapata and Samuel Brody for their useful comments.","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"alt-etal-2019-fine","url":"https:\/\/aclanthology.org\/P19-1134","title":"Fine-tuning Pre-Trained Transformer Language Models to Distantly Supervised Relation Extraction","abstract":"Distantly supervised relation extraction is widely used to extract relational facts from text, but suffers from noisy labels. Current relation extraction methods try to alleviate the noise by multi-instance learning and by providing supporting linguistic and contextual information to more efficiently guide the relation classification. While achieving state-of-the-art results, we observed these models to be biased towards recognizing a limited set of relations with high precision, while ignoring those in the long tail. To address this gap, we utilize a pre-trained language model, the OpenAI Generative Pre-trained Transformer (GPT) (Radford et al., 2018). The GPT and similar models have been shown to capture semantic and syntactic features, and also a notable amount of \"common-sense\" knowledge, which we hypothesize are important features for recognizing a more diverse set of relations. By extending the GPT to the distantly supervised setting, and fine-tuning it on the NYT10 dataset, we show that it predicts a larger set of distinct relation types with high confidence.
Manual and automated evaluation of our model shows that it achieves a state-of-the-art AUC score of 0.422 on the NYT10 dataset, and performs especially well at higher recall levels.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their comments. This research was partially supported by the German Federal Ministry of Education and Research through the projects DEEPLEE (01IW17001) and BBDC2 (01IS18025E), and by the German Federal Ministry of Transport and Digital Infrastructure through the project DAYSTREAM (19F2031A).","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"fung-etal-2003-combining","url":"https:\/\/aclanthology.org\/W03-1203","title":"Combining Optimal Clustering and Hidden Markov Models for Extractive Summarization","abstract":"We propose Hidden Markov models with unsupervised training for extractive summarization. Extractive summarization selects salient sentences from documents to be included in a summary. Unsupervised clustering combined with heuristics is a popular approach because no annotated data is required. However, conventional clustering methods such as K-means do not take text cohesion into consideration. Probabilistic methods are more rigorous and robust, but they usually require supervised training with annotated data. Our method incorporates unsupervised training with clustering into a probabilistic framework. Clustering is done by modified K-means (MKM), a method that yields more optimal clusters than the conventional K-means method. Text cohesion is modeled by the transition probabilities of an HMM, and term distribution is modeled by the emission probabilities. The final decoding process tags sentences in a text with theme class labels. Parameter training is carried out by the segmental K-means (SKM) algorithm. The output of our system can be used to extract salient sentences for summaries, or used for topic detection. Content-based evaluation shows that our method outperforms an existing extractive summarizer by 22.8% in terms of relative similarity, and outperforms a baseline summarizer that selects the top N sentences as salient sentences by 46.3%.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"kann-schutze-2018-neural","url":"https:\/\/aclanthology.org\/D18-1363","title":"Neural Transductive Learning and Beyond: Morphological Generation in the Minimal-Resource Setting","abstract":"Neural state-of-the-art sequence-to-sequence (seq2seq) models often do not perform well for small training sets. We address paradigm completion, the morphological task of, given a partial paradigm, generating all missing forms. We propose two new methods for the minimal-resource setting: (i) Paradigm transduction: Since we assume only a few paradigms available for training, neural seq2seq models are able to capture relationships between paradigm cells, but are tied to the idiosyncrasies of the training set. Paradigm transduction mitigates this problem by exploiting the input subset of inflected forms at test time.
(ii) Source selection with high precision (SHIP): Multi-source models which learn to automatically select one or multiple sources to predict a target inflection do not perform well in the minimal-resource setting. SHIP is an alternative to identify a reliable source if training data is limited. On a 52-language benchmark dataset, we outperform the previous state of the art by up to 9.71% absolute accuracy.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Samuel Bowman, Ryan Cotterell, Nikita Nangia, and Alex Warstadt for their feedback on this work.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"johnson-1997-personal","url":"https:\/\/aclanthology.org\/1997.tc-1.4","title":"Personal Translation Applications","abstract":null,"label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"kanakarajan-etal-2019-saama","url":"https:\/\/aclanthology.org\/W19-5055","title":"Saama Research at MEDIQA 2019: Pre-trained BioBERT with Attention Visualisation for Medical Natural Language Inference","abstract":"Natural Language Inference is the task of identifying the relation between two sentences as entailment, contradiction or neutrality. MedNLI is a biomedical flavour of NLI for the clinical domain. This paper explores the use of Bidirectional Encoder Representation from Transformer (BERT) for solving MedNLI. The proposed model, BERT pre-trained on PMC, PubMed and fine-tuned on MIMIC-III v1.4, achieves state-of-the-art results on MedNLI (83.45%) and an accuracy of 78.5% in the MEDIQA challenge. The authors present an analysis of the attention patterns that emerged as a result of training BERT on MedNLI using a visualization tool, bertviz. *Equal Contribution: Kamal had sole access to MIMIC and MEDIQA data, focussed on the algorithm development and implementation. Suriyadeepan and Archana focussed on the attention visualisation and writing. Soham and Malaikannan focussed on reviewing","label_nlp4sg":1,"task":null,"method":null,"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Bhuvana Kundumani for reviewing the manuscript and for providing her technical inputs. The authors would also like to extend their gratitude to Saama Technologies Inc. for providing the perfect research and innovation environment.","year":2019,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"lison-etal-2018-opensubtitles2018","url":"https:\/\/aclanthology.org\/L18-1275","title":"OpenSubtitles2018: Statistical Rescoring of Sentence Alignments in Large, Noisy Parallel Corpora","abstract":"Movie and TV subtitles are a highly valuable resource for the compilation of parallel corpora thanks to their availability in large numbers and across many languages. However, the quality of the resulting sentence alignments is often lower than for other parallel corpora. This paper presents a new major release of the OpenSubtitles collection of parallel corpora, which is extracted from a total of 3.7 million subtitles spread over 60 languages.
In addition to a substantial increase in the corpus size (about 30% compared to the previous version), this new release associates explicit quality scores to each sentence alignment. These scores are determined by a feedforward neural network based on simple language-independent features and estimated on a sample of aligned sentence pairs. Evaluation results show that the model is able to predict lexical translation probabilities with a root mean square error of 0.07 (coefficient of determination R^2 = 0.47). Based on the scores produced by this regression model, the parallel corpora can be filtered to prune out low-quality alignments.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"paetzel-etal-2014-multimodal","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/697_Paper.pdf","title":"A Multimodal Corpus of Rapid Dialogue Games","abstract":"This paper presents a multimodal corpus of spoken human-human dialogues collected as participants played a series of Rapid Dialogue Games (RDGs). The corpus consists of a collection of about 11 hours of spoken audio, video, and Microsoft Kinect data taken from 384 game interactions (dialogues). The games used for collecting the corpus required participants to give verbal descriptions of linguistic expressions or visual images and were specifically designed to engage players in a fast-paced conversation under time pressure. As a result, the corpus contains many examples of participants attempting to communicate quickly in specific game situations, and it also includes a variety of spontaneous conversational phenomena such as hesitations, filled pauses, overlapping speech, and low-latency responses. The corpus has been created to facilitate research in incremental speech processing for spoken dialogue systems. Potentially, the corpus could be used in several areas of speech and language research, including speech recognition, natural language understanding, natural language generation, and dialogue management.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"htait-etal-2017-lsis","url":"https:\/\/aclanthology.org\/S17-2120","title":"LSIS at SemEval-2017 Task 4: Using Adapted Sentiment Similarity Seed Words For English and Arabic Tweet Polarity Classification","abstract":"We present, in this paper, our contribution in SemEval2017 task 4: \"Sentiment Analysis in Twitter\", subtask A: \"Message Polarity Classification\", for English and Arabic languages. Our system is based on a list of sentiment seed words adapted for tweets. The sentiment relations between seed words and other terms are captured by cosine similarity between the word embedding representations (word2vec). These seed words are extracted from datasets of annotated tweets available online.
Our tests, using these seed words, show significant improvement in results compared to the use of Turney and Littman's (2003) seed words, on polarity classification of tweet messages.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the French program Investissements d'Avenir Equipex \"A digital library for open humanities\" of OpenEdition.org.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"ahmadi-2020-building","url":"https:\/\/aclanthology.org\/2020.vardial-1.7","title":"Building a Corpus for the Zaza--Gorani Language Family","abstract":"Thanks to the growth of local communities and various news websites along with the increasing accessibility of the Web, some of the endangered and less-resourced languages have a chance to revive in the information era. Therefore, the Web is considered a huge resource that can be used to extract language corpora which enable researchers to carry out various studies in linguistics and language technology. The Zaza-Gorani language family is a linguistic subgroup of the Northwestern Iranian languages for which there is no significant corpus available. Motivated to create one, in this paper we present our endeavour to collect a corpus in Zazaki and Gorani languages containing over 1.6M and 194k word tokens, respectively. This corpus is publicly available.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The author would like to thank the constructive comments of Dr. Ilyas Arslan and Mesut Keskin regarding Zazaki and the invaluable insights of Dr. Parvin Mahmoudveysi regarding Gorani. Likewise, the comments of the anonymous reviewers are very much appreciated.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"wang-etal-2007-kernel","url":"https:\/\/aclanthology.org\/N07-2047","title":"Kernel Regression Based Machine Translation","abstract":"We present a novel machine translation framework based on kernel regression techniques. In our model, the translation task is viewed as a string-to-string mapping, for which a regression type learning is employed with both the source and the target sentences embedded into their kernel induced feature spaces. We report the experiments on a French-English translation task showing encouraging results.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors acknowledge the support of the EU under the IST project No. FP6-033917.","year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"karamanolakis-etal-2021-self","url":"https:\/\/aclanthology.org\/2021.naacl-main.66","title":"Self-Training with Weak Supervision","abstract":"State-of-the-art deep neural networks require large-scale labeled training data that is often expensive to obtain or not available for many tasks. Weak supervision in the form of domain-specific rules has been shown to be useful in such settings to automatically generate weakly labeled training data. However, learning with weak rules is challenging due to their inherent heuristic and noisy nature.
An additional challenge is rule coverage and overlap, where prior work on weak supervision only considers instances that are covered by weak rules, thus leaving valuable unlabeled data behind. In this work, we develop a weak supervision framework (ASTRA) that leverages all the available data for a given task. To this end, we leverage task-specific unlabeled data through self-training with a model (student) that considers contextualized representations and predicts pseudo-labels for instances that may not be covered by weak rules. We further develop a rule attention network (teacher) that learns how to aggregate student pseudo-labels with weak rule labels, conditioned on their fidelity and the underlying context of an instance. Finally, we construct a semi-supervised learning objective for end-to-end training with unlabeled data, domain-specific rules, and a small amount of labeled data. Extensive experiments on six benchmark datasets for text classification demonstrate the effectiveness of our approach with significant improvements over state-of-the-art baselines.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their constructive feedback, and Wei Wang and Benjamin Van Durme for insightful discussions.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"hovy-2002-building","url":"https:\/\/aclanthology.org\/W02-1105","title":"Building Semantic\/Ontological Knowledge by Text Mining","abstract":null,"label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"hongxu-etal-2004-ebmt","url":"https:\/\/aclanthology.org\/2004.iwslt-evaluation.7","title":"An EBMT system based on word alignment","abstract":"This system is an experiment with an example-based approach. It is based on a corpus containing 220 thousand sentence pairs with word alignment. The system contains four parts: matching and search, fragment matching, fragment assembling, evaluation and post processing. We use word alignment information to find and combine fragments.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"beloucif-etal-2016-improving","url":"https:\/\/aclanthology.org\/W16-4507","title":"Improving word alignment for low resource languages using English monolingual SRL","abstract":"We introduce a new statistical machine translation approach specifically geared to learning translation from low resource languages, that exploits monolingual English semantic parsing to bias inversion transduction grammar (ITG) induction. We show that in contrast to conventional statistical machine translation (SMT) training methods, which rely heavily on phrase memorization, our approach focuses on learning bilingual correlations that help translate low resource languages, by using the output language semantic structure to further narrow down ITG constraints.
This approach is motivated by previous research which has shown that injecting a semantic frame based objective function while training SMT models improves the translation quality. We show that including a monolingual semantic objective function during the learning of the translation model leads towards a semantically driven alignment which is more efficient than simply tuning log-linear mixture weights against a semantic frame based evaluation metric in the final stage of statistical machine translation training. We test our approach with three different language pairs and demonstrate that our model biases the learning towards more semantically correct alignments. Both GIZA++ and ITG based techniques fail to capture meaningful bilingual constituents, which is required when trying to learn translation models for low resource languages. In contrast, our proposed model not only improves translation by injecting a monolingual objective function to learn bilingual correlations during early training of the translation model, but also helps to learn more meaningful correlations with a relatively small data set, leading to a better alignment compared to either conventional ITG or traditional GIZA++ based approaches.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"byrd-chodorow-1985-using","url":"https:\/\/aclanthology.org\/P85-1034","title":"Using an On-Line Dictionary to Find Rhyming Words and Pronunciations for Unknown Words","abstract":"Humans know a great deal about relationships among words. This paper discusses relationships among word pronunciations. We describe a computer system which models human judgement of rhyme by assigning specific roles to the location of primary stress, the similarity of phonetic segments, and other factors. By using the model as an experimental tool, we expect to improve our understanding of rhyme. A related computer model will attempt to generate pronunciations for unknown words by analogy with those for known words. The analogical processes involve techniques for segmenting and matching word spellings, and for mapping spelling to sound in known words. As in the case of rhyme, the computer model will be an important tool for improving our understanding of these processes. Both models serve as the basis for functions in the WordSmith automated dictionary system.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to Barbara Kipfer for her preliminary work on the syllabification of unknown words, and to Yael Ravin and Mary Neff for comments on earlier versions of this report.","year":1985,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"macwhinney-fromm-2014-two","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/419_Paper.pdf","title":"Two Approaches to Metaphor Detection","abstract":"Methods for automatic detection and interpretation of metaphors have focused on analysis and utilization of the ways in which metaphors violate selectional preferences (Martin, 2006). Detection and interpretation processes that rely on this method can achieve wide coverage and may be able to detect some novel metaphors.
However, they are prone to high false alarm rates, often arising from imprecision in parsing and supporting ontological and lexical resources. An alternative approach to metaphor detection emphasizes the fact that many metaphors become conventionalized collocations, while still preserving their active metaphorical status. Given a large enough corpus for a given language, it is possible to use tools like SketchEngine (Kilgariff, Rychly, Smrz, & Tugwell, 2004) to locate these high frequency metaphors for a given target domain. In this paper, we examine the application of these two approaches and discuss their relative strengths and weaknesses for metaphors in the target domain of economic inequality in English, Spanish, Farsi, and Russian.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"chen-guo-2015-representation","url":"https:\/\/aclanthology.org\/P15-2025","title":"Representation Based Translation Evaluation Metrics","abstract":"Precisely evaluating the quality of a translation against human references is a challenging task due to the flexible word ordering of a sentence and the existence of a large number of synonyms for words. This paper proposes to evaluate translations with distributed representations of words and sentences. We study several metrics based on word and sentence representations and their combination. Experiments on the WMT metric task show that the metric based on the combined representations achieves the best performance, outperforming the state-of-the-art translation metrics by a large margin. In particular, training the distributed representations only needs a reasonable amount of monolingual, unlabeled data that is not necessarily drawn from the test domain.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Colin Cherry and Roland Kuhn for useful discussions.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"goutte-etal-2012-impact","url":"https:\/\/aclanthology.org\/2012.amta-papers.7","title":"The Impact of Sentence Alignment Errors on Phrase-Based Machine Translation Performance","abstract":"When parallel or comparable corpora are harvested from the web, there is typically a tradeoff between the size and quality of the data. In order to improve quality, corpus collection efforts often attempt to fix or remove misaligned sentence pairs. But, at the same time, Statistical Machine Translation (SMT) systems are widely assumed to be relatively robust to sentence alignment errors. However, there is little empirical evidence to support and characterize this robustness. This contribution investigates the impact of sentence alignment errors on a typical phrase-based SMT system. We confirm that SMT systems are highly tolerant to noise, and that performance only degrades seriously at very high noise levels. Our findings suggest that when collecting larger, noisy parallel data for training phrase-based SMT, cleaning up by trying to detect and remove incorrect alignments can actually degrade performance.
Although fixing errors, when applicable, is a preferable strategy to removal, its benefits only become apparent for fairly high misalignment rates. We provide several explanations to support these findings.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"jing-mckeown-2000-cut","url":"https:\/\/aclanthology.org\/A00-2024","title":"Cut and Paste Based Text Summarization","abstract":"We present a cut and paste based text summarizer, which uses operations derived from an analysis of human written abstracts. The summarizer edits extracted sentences, using reduction to remove inessential phrases and combination to merge resulting phrases together as coherent sentences. Our work includes a statistically based sentence decomposition program that identifies where the phrases of a summary originate in the original document, producing an aligned corpus of summaries and articles which we used to develop the summarizer.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank IBM for licensing us the ESG parser and the MITRE corporation for licensing us the coreference resolution system. This material is based upon work supported by the National Science Foundation under Grant No. IRI 96-19124 and IRI 96-18797. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.","year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"jo-choi-2018-extrofitting","url":"https:\/\/aclanthology.org\/W18-3003","title":"Extrofitting: Enriching Word Representation and its Vector Space with Semantic Lexicons","abstract":"We propose a post-processing method for enriching not only word representation but also its vector space using semantic lexicons, which we call extrofitting. The method consists of 3 steps as follows: (i) Expanding 1 or more dimension(s) on all the word vectors, filling with their representative value. (ii) Transferring semantic knowledge by averaging each representative values of synonyms and filling them in the expanded dimension(s). These two steps make representations of the synonyms close together. (iii) Projecting the vector space using Linear Discriminant Analysis, which eliminates the expanded dimension(s) with semantic knowledge. When experimenting with GloVe, we find that our method outperforms Faruqui's retrofitting on some of the word similarity tasks. We also report further analysis on our method in respect to word vector dimensions, vocabulary size as well as other well-known pretrained word vectors (e.g., Word2Vec, Fasttext).","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Thanks for Jaeyoung Kim to discuss this idea.
Also, greatly appreciate the reviewers for critical comments.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"velldal-etal-2017-joint","url":"https:\/\/aclanthology.org\/W17-0201","title":"Joint UD Parsing of Norwegian Bokm\u00e5l and Nynorsk","abstract":"This paper investigates interactions in parser performance for the two official standards for written Norwegian: Bokm\u00e5l and Nynorsk. We demonstrate that while applying models across standards yields poor performance, combining the training data for both standards yields better results than previously achieved for each of them in isolation. This has immediate practical value for processing Norwegian, as it means that a single parsing pipeline is sufficient to cover both varieties, with no loss in accuracy. Based on the Norwegian Universal Dependencies treebank we present results for multiple taggers and parsers, experimenting with different ways of varying the training data given to the learners, including the use of machine translation.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"guo-etal-2020-cyclegt","url":"https:\/\/aclanthology.org\/2020.webnlg-1.8","title":"CycleGT: Unsupervised Graph-to-Text and Text-to-Graph Generation via Cycle Training","abstract":"Two important tasks at the intersection of knowledge graphs and natural language processing are graph-to-text (G2T) and text-to-graph (T2G) conversion. Due to the difficulty and high cost of data collection, the supervised data available in the two fields are usually on the magnitude of tens of thousands, for example, 18K in the WebNLG 2017 dataset after preprocessing, which is far fewer than the millions of data for other tasks such as machine translation. Consequently, deep learning models for G2T and T2G suffer largely from scarce training data. We present CycleGT, an unsupervised training method that can bootstrap from fully non-parallel graph and text data, and iteratively back translate between the two forms. Experiments on WebNLG datasets show that our unsupervised model trained on the same number of data achieves performance on par with several fully supervised models. Further experiments on the non-parallel Gen-Wiki dataset verify that our method performs the best among unsupervised baselines. This validates our framework as an effective approach to overcome the data scarcity problem in the fields of G2T and T2G.
","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank colleagues at the Amazon Shanghai AI lab, including Xiangkun Hu, Hang Yan, and many others for insightful discussions that constructively helped this work.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"schiehlen-2004-annotation","url":"https:\/\/aclanthology.org\/C04-1056","title":"Annotation Strategies for Probabilistic Parsing in German","abstract":null,"label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"belz-etal-2022-quantified","url":"https:\/\/aclanthology.org\/2022.acl-long.2","title":"Quantified Reproducibility Assessment of NLP Results","abstract":"This paper describes and tests a method for carrying out quantified reproducibility assessment (QRA) that is based on concepts and definitions from metrology. QRA produces a single score estimating the degree of reproducibility of a given system and evaluation measure, on the basis of the scores from, and differences between, different reproductions. We test QRA on 18 system and evaluation measure combinations (involving diverse NLP tasks and types of evaluation), for each of which we have the original results and one to seven reproduction results. The proposed QRA method produces degree-of-reproducibility scores that are comparable across multiple reproductions not only of the same, but of different original studies. We find that the proposed method facilitates insights into causes of variation between reproductions, and allows conclusions to be drawn about what changes to system and\/or evaluation design might lead to improved reproducibility.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to the anonymous reviewers and area chairs for their exceptionally detailed and helpful feedback. Popovi\u0107's work on this study was funded by the ADAPT SFI Centre for Digital Media Technology which is funded by Science Foundation Ireland through the SFI Research Centres Programme, and co-funded under the European Regional Development Fund (ERDF) through Grant 13\/RC\/2106.
Mille's work was supported by the European Commission under the H2020 program contract numbers 786731, 825079, 870930 and 952133.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"zelenko-etal-2002-kernel","url":"https:\/\/aclanthology.org\/W02-1010","title":"Kernel Methods for Relation Extraction","abstract":null,"label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"palomar-etal-2001-algorithm","url":"https:\/\/aclanthology.org\/J01-4005","title":"An Algorithm for Anaphora Resolution in Spanish Texts","abstract":"This paper presents an algorithm for identifying noun phrase antecedents of third person personal pronouns, demonstrative pronouns, reflexive pronouns, and omitted pronouns (zero pronouns) in unrestricted Spanish texts. We define a list of constraints and preferences for different types of pronominal expressions, and we document in detail the importance of each kind of knowledge (lexical, morphological, syntactic, and statistical) in anaphora resolution for Spanish. The paper also provides a definition for syntactic conditions on Spanish NP-pronoun noncoreference using partial parsing. The algorithm has been evaluated on a corpus of 1,677 pronouns and achieved a success rate of 76.8%. We have also implemented four competitive algorithms and tested their performance in a blind evaluation on the same test corpus. This new approach could easily be extended to other languages such as English, Portuguese, Italian, or Japanese.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors wish to thank Ferran Pla, Natividad Prieto, and Antonio Molina for contributing their tagger (Pla 2000); and Richard Evans, Mikel Forcada, and Rafael Carrasco for their helpful revisions of the ideas presented in this paper. We are also grateful to several anonymous reviewers of Computational Linguistics for helpful comments on earlier drafts of this paper. Our work has been supported by the Spanish government (CICYT) with Grant TIC97-0671-C02-01\/02.","year":2001,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"wilks-1993-developments","url":"https:\/\/aclanthology.org\/1993.tc-1.1","title":"Developments in machine translation research in the US","abstract":"The paper argues that the IBM statistical approach to machine translation has done rather better after a few years than many sceptics believed it could. However, it is neither as novel as its proponents suggest nor is it making claims as clear and simple as they would have us believe. The performance of the purely statistical system (and we discuss what that phrase could mean) has not equalled the performance of SYSTRAN. More importantly, the system is now being shifted to a hybrid that incorporates much of the linguistic information that it was initially claimed by IBM would not be needed for MT. Hence, one might infer that its own proponents do not believe \"pure\" statistics sufficient for MT of a usable quality.
In addition to real limits on the statistical method, there are also strong economic limits imposed by their methodology of data gathering. However, the paper concludes that the IBM group have done the field a great service in pushing these methods far further than before, and by reminding everyone of the virtues of empiricism in the field and the need for large scale gathering of data.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":"James Pustejovsky, Bob Ingria, Bran Boguraev, Sergei Nirenburg, Ted Dunning and others in the CRL natural language processing group.","year":1993,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"ohashi-etal-2020-tiny","url":"https:\/\/aclanthology.org\/2020.coling-main.103","title":"Tiny Word Embeddings Using Globally Informed Reconstruction","abstract":"We reduce the model size of pre-trained word embeddings by a factor of 200 while preserving its quality. Previous studies in this direction created a smaller word embedding model by reconstructing pre-trained word representations from those of subwords, which allows to store only a smaller number of subword embeddings in the memory. However, previous studies that train the reconstruction models using only target words cannot reduce the model size extremely while preserving its quality. Inspired by the observation of words with similar meanings having similar embeddings, our reconstruction training learns the global relationships among words, which can be employed in various models for word embedding reconstruction. Experimental results on word similarity benchmarks show that the proposed method improves the performance of all the subword-based reconstruction models.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"li-etal-2015-improving-event","url":"https:\/\/aclanthology.org\/W15-4502","title":"Improving Event Detection with Abstract Meaning Representation","abstract":"Event Detection (ED) aims to identify instances of specified types of events in text, which is a crucial component in the overall task of event extraction. The commonly used features consist of lexical, syntactic, and entity information, but the knowledge encoded in the Abstract Meaning Representation (AMR) has not been utilized in this task. AMR is a semantic formalism in which the meaning of a sentence is encoded as a rooted, directed, acyclic graph. In this paper, we demonstrate the effectiveness of AMR to capture and represent the deeper semantic contexts of the trigger words in this task.
Experimental results further show that adding AMR features on top of the traditional features can achieve 67.8% (with 2.1% absolute improvement) F-measure (F1), which is comparable to the state-of-the-art approaches.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"jha-etal-2018-bag","url":"https:\/\/aclanthology.org\/N18-3019","title":"Bag of Experts Architectures for Model Reuse in Conversational Language Understanding","abstract":"Slot tagging, the task of detecting entities in input user utterances, is a key component of natural language understanding systems for personal digital assistants. Since each new domain requires a different set of slots, the annotation costs for labeling data for training slot tagging models increases rapidly as the number of domains grow. To tackle this, we describe Bag of Experts (BoE) architectures for model reuse for both LSTM and CRF based models. Extensive experimentation over a dataset of 10 domains drawn from data relevant to our commercial personal digital assistant shows that our BoE models outperform the baseline models with a statistically significant average margin of 5.06% in absolute F1 score when training with 2000 instances per domain, and achieve an even higher improvement of 12.16% when only 25% of the training data is used.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Ahmed El Kholy for his comments and feedback on an earlier version of this paper. Also, thanks to Kyle Williams and Zhaleh Feizollahi for their help with code and data collection.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"shaprin-etal-2019-team","url":"https:\/\/aclanthology.org\/S19-2176","title":"Team Jack Ryder at SemEval-2019 Task 4: Using BERT Representations for Detecting Hyperpartisan News","abstract":"We describe the system submitted by the Jack Ryder team to SemEval-2019 Task 4 on Hyperpartisan News Detection. The task asked participants to predict whether a given article is hyperpartisan, i.e., extreme-left or extreme-right. We propose an approach based on BERT with fine-tuning, which was ranked 7th out of 28 teams on the distantly supervised dataset, where all articles from a hyperpartisan\/non-hyperpartisan news outlet are considered to be hyperpartisan\/non-hyperpartisan. On a manually annotated test dataset, where human annotators double-checked the labels, we were ranked 29th out of 42 teams.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} -{"ID":"mugelli-etal-2017-designing","url":"https:\/\/aclanthology.org\/W17-7011","title":"Designing an Ontology for the Study of Ritual in Ancient Greek Tragedy","abstract":"We examine the use of an ontology within the context of a system for the annotation and querying of ancient Greek tragic texts.
The ontology in question results from the reorganisation of a tagset that was originally used in the annotation of a corpus of tragic texts for salient information regarding ritual and religion and its representation in Greek tragedy. In the article we discuss the original tagset as well as providing examples of the annotation. We also describe the structure of the ontology itself as well as its use within a system for querying the annotated corpus.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"malmasi-dras-2015-language","url":"https:\/\/aclanthology.org\/W15-5407","title":"Language Identification using Classifier Ensembles","abstract":"In this paper we describe the language identification system we developed for the Discriminating Similar Languages (DSL) 2015 shared task. We constructed a classifier ensemble composed of several Support Vector Machine (SVM) base classifiers, each trained on a single feature type. Our feature types include character 1-6 grams and word unigrams and bigrams. Using this system we were able to outperform the other entries in the closed training track of the DSL 2015 shared task, achieving the best accuracy of 95.54%.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"rafferty-etal-2011-exploring","url":"https:\/\/aclanthology.org\/W11-0606","title":"Exploring the Relationship Between Learnability and Linguistic Universals","abstract":"Greater learnability has been offered as an explanation as to why certain properties appear in human languages more frequently than others. Languages with greater learnability are more likely to be accurately transmitted from one generation of learners to the next. We explore whether such a learnability bias is sufficient to result in a property becoming prevalent across languages by formalizing language transmission using a linear model. We then examine the outcome of repeated transmission of languages using a mathematical analysis, a computer simulation, and an experiment with human participants, and show several ways in which greater learnability may not result in a property becoming prevalent. Both the ways in which transmission failures occur and the relative number of languages with and without a property can affect whether the relationship between learnability and prevalence holds. Our results show that simply finding a learnability bias is not sufficient to explain why a particular property is a linguistic universal, or even frequent among human languages.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"
This work was supported by an NSF Graduate Research Fellowship to ANR, grant number BCS-0704034 from the NSF to TLG, and grant number T32 NS047987 from the NIH to ME.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"susanto-etal-2016-learning","url":"https:\/\/aclanthology.org\/D16-1225","title":"Learning to Capitalize with Character-Level Recurrent Neural Networks: An Empirical Study","abstract":"In this paper, we investigate case restoration for text without case information. Previous such work operates at the word level. We propose an approach using character-level recurrent neural networks (RNN), which performs competitively compared to language modeling and conditional random fields (CRF) approaches. We further provide quantitative and qualitative analysis on how RNN helps improve truecasing.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would also like to thank the anonymous reviewers for their helpful comments. This work is supported by MOE Tier 1 grant SUTDT12015008.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"bayerl-paul-2011-determines","url":"https:\/\/aclanthology.org\/J11-4004","title":"What Determines Inter-Coder Agreement in Manual Annotations? A Meta-Analytic Investigation","abstract":"Recent discussions of annotator agreement have mostly centered around its calculation and interpretation, and the correct choice of indices. Although these discussions are important, they only consider the \"back-end\" of the story, namely, what to do once the data are collected. Just as important in our opinion is to know how agreement is reached in the first place and what factors influence coder agreement as part of the annotation process or setting, as this knowledge can provide concrete guidelines for the planning and setup of annotation projects. To investigate whether there are factors that consistently impact annotator agreement we conducted a meta-analytic investigation of annotation studies reporting agreement percentages. Our meta-analysis synthesized factors reported in 96 annotation studies from three domains (word-sense disambiguation, prosodic transcriptions, and phonetic transcriptions) and was based on a total of 346 agreement indices. Our analysis identified seven factors that influence reported agreement values: annotation domain, number of categories in a coding scheme, number of annotators in a project, whether annotators received training, the intensity of annotator training, the annotation purpose, and the method used for the calculation of percentage agreements. Based on our results we develop practical recommendations for the assessment, interpretation, calculation, and reporting of coder agreement. 
We also briefly discuss theoretical implications for the concept of annotation quality.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"li-etal-2019-oppo","url":"https:\/\/aclanthology.org\/2019.iwslt-1.2","title":"OPPO NMT System for IWSLT 2019","abstract":"This paper illustrates OPPO's submission for the IWSLT 2019 text translation task. Our system is based on the Transformer architecture. Besides, we also study the effect of model ensembling. On the devsets of IWSLT 2019, the BLEU of our system reaches 19.94.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"sinha-2007-using","url":"https:\/\/aclanthology.org\/2007.mtsummit-papers.57","title":"Using rich morphology in resolving certain Hindi-English machine translation divergence","abstract":"Identification and resolution of translation divergence (TD) is very crucial for any automated machine translation (MT) system. Although this problem has received attention of a number of MT developers, devising general strategies is hard to achieve. Solution to the language specific pairs appears to be comparatively tractable. In this paper, we present a technique that exploits the rich morphology of Hindi to identify the nature of certain divergence patterns and then invoke methods to handle the related translation divergence in Hindi to English machine translation. We have considered TDs encountered in Hindi copula sentences and those arising out of certain gaps in verb morphology.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"gorman-curran-2005-approximate","url":"https:\/\/aclanthology.org\/W05-1011","title":"Approximate Searching for Distributional Similarity","abstract":"Distributional similarity requires large volumes of data to accurately represent infrequent words. However, the nearest-neighbour approach to finding synonyms suffers from poor scalability. The Spatial Approximation Sample Hierarchy (SASH), proposed by Houle (2003b), is a data structure for approximate nearest-neighbour queries that balances the efficiency\/approximation trade-off. We have integrated this into an existing distributional similarity system, tripling efficiency with a minor accuracy penalty.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their helpful feedback and corrections.
This work has been supported by the Australian Research Council under Discovery Project DP0453131.","year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"grishman-2011-invited","url":"https:\/\/aclanthology.org\/W11-4001","title":"INVITED TALK 1: The Knowledge Base Population Task: Challenges for Information Extraction","abstract":"The Knowledge Base Population (KBP) task, being run for the past 3 years by the U.S. National Institute of Standards and Technology, is the latest in a series of multi-site evaluations of information extraction, following in the tradition of MUC and ACE. We examine the structure of KBP, emphasizing the basic shift from sentence-by-sentence and document-by-document evaluation to corpus-based extraction and the challenges it raises for cross-sentence and cross-document processing. We consider the problems raised by the limited amount and incompleteness of the training data, and how this has been (partly) addressed through such methods as semi-supervised learning and distant supervision. We describe some of the optional tasks which have been included-rapid task adaptation (last year), temporal analysis (this year), cross-lingual extraction (planned for next year)-and others which have been suggested.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"sun-etal-2019-hierarchical","url":"https:\/\/aclanthology.org\/D19-1045","title":"Hierarchical Attention Prototypical Networks for Few-Shot Text Classification","abstract":"Most of the current effective methods for text classification task are based on large-scale labeled data and a great number of parameters, but when the supervised training data are few and difficult to be collected, these models are not available. In this paper, we propose hierarchical attention prototypical networks (HAPN) for few-shot text classification. We design the feature level, word level, and instance level multi cross attention for our model to enhance the expressive ability of semantic space. We verify the effectiveness of our model on two standard benchmark few-shot text classification datasets-FewRel and CSID, and achieve the state-of-the-art performance. The visualization of hierarchical attention layers illustrates that our model can capture more important features, words, and instances separately. In addition, our attention mechanism increases support set augmentability and accelerates convergence speed in the training stage.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Sawyer Zeng and Yue Liu for providing valuable hardware support and useful advice, and thank Xuexiang Xu and Yang Bai for helping us test online FewRel dataset. This work is also supported by the National Key Research and Development Program of China (No. 2018YFB1402902 and No. 2018YFB1403002) and the Natural Science Foundation of Jiangsu Province (No.
BK20151132).","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"gui-etal-2016-event","url":"https:\/\/aclanthology.org\/D16-1170","title":"Event-Driven Emotion Cause Extraction with Corpus Construction","abstract":"In this paper, we present our work in emotion cause extraction. Since there is no open dataset available, the lack of annotated resources has limited the research in this area. Thus, we first present a dataset we built using SINA city news. The annotation is based on the scheme of the W3C Emotion Markup Language. Second, we propose a 7-tuple definition to describe emotion cause events. Based on this general definition, we propose a new event-driven emotion cause extraction method using multi-kernel SVMs where a syntactical tree based approach is used to represent events in text. A convolution kernel based multi-kernel SVM is used to extract emotion causes. Because traditional convolution kernels do not use lexical information at the terminal nodes of syntactic trees, we modify the kernel function with a synonym based improvement. Even with very limited training data, we can still extract sufficient features for the task. Evaluations show that our approach achieves 11.6% higher F-measure compared to referenced methods. The contributions of our work include resource construction, concept definition and algorithm development.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"karimi-etal-2018-extracting","url":"https:\/\/aclanthology.org\/L18-1549","title":"Extracting an English-Persian Parallel Corpus from Comparable Corpora","abstract":"Parallel data are an important part of a reliable Statistical Machine Translation (SMT) system. The more of these data are available, the better the quality of the SMT system. However, for some language pairs such as Persian-English, parallel sources of this kind are scarce. In this paper, a bidirectional method is proposed to extract parallel sentences from English and Persian document aligned Wikipedia. Two machine translation systems are employed to translate from Persian to English and the reverse after which an IR system is used to measure the similarity of the translated sentences. Adding the extracted sentences to the training data of the existing SMT systems is shown to improve the quality of the translation. Furthermore, the proposed method slightly outperforms the one-directional approach. The extracted corpus consists of about 200,000 sentences which have been sorted by their degree of similarity calculated by the IR system and is freely available for public access on the Web.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank our colleagues, Zahra Sepehri and Ailar Qaraie, at Iranzamin Language School for providing us with 500 sentences used in our test set.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"milward-1994-non","url":"https:\/\/aclanthology.org\/C94-2151","title":"Non-Constituent Coordination: Theory and Practice","abstract":"
Despite the large amount of theoretical work done on non-constituent coordination during the last two decades, many computational systems still treat coordination using adapted parsing strategies, in a similar fashion to the SYSCONJ system developed for ATNs. This paper reviews the theoretical literature, and shows why many of the theoretical accounts actually have worse coverage than accounts based on processing. Finally, it shows how processing accounts can be described formally and declaratively in terms of Dynamic Grammars.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"rehbein-van-genabith-2007-treebank","url":"https:\/\/aclanthology.org\/D07-1066","title":"Treebank Annotation Schemes and Parser Evaluation for German","abstract":"Recent studies focussed on the question whether less-configurational languages like German are harder to parse than English, or whether the lower parsing scores are an artefact of treebank encoding schemes and data structures, as claimed by K\u00fcbler et al. (2006). This claim is based on the assumption that PARSEVAL metrics fully reflect parse quality across treebank encoding schemes. In this paper we present new experiments to test this claim. We use the PARSEVAL metric, the Leaf-Ancestor metric as well as a dependency-based evaluation, and present novel approaches measuring the effect of controlled error insertion on treebank trees and parser output. We also provide extensive past-parsing cross-treebank conversion. The results of the experiments show that, contrary to K\u00fcbler et al. (2006), the question whether or not German is harder to parse than English remains undecided.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for many helpful comments. This research has been supported by a Science Foundation Ireland grant 04|IN|I527.","year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"huerta-2008-relative","url":"https:\/\/aclanthology.org\/D08-1101","title":"Relative Rank Statistics for Dialog Analysis","abstract":"We introduce the relative rank differential statistic which is a non-parametric approach to document and dialog analysis based on word frequency rank-statistics. We also present a simple method to establish semantic saliency in dialog, documents, and dialog segments using these word frequency rank statistics. Applications of our technique include the dynamic tracking of topic and semantic evolution in a dialog, topic detection, automatic generation of document tags, and new story or event detection in conversational speech and text.
Our approach benefits from the robustness, simplicity and efficiency of non-parametric and rank based approaches and consistently outperformed term-frequency and TF-IDF cosine distance approaches in several experiments conducted.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"p-r-etal-2016-hitachi","url":"https:\/\/aclanthology.org\/S16-1191","title":"Hitachi at SemEval-2016 Task 12: A Hybrid Approach for Temporal Information Extraction from Clinical Notes","abstract":"This paper describes the system developed for the task of temporal information extraction from clinical narratives in the context of 2016 Clinical TempEval challenge. Clinical TempEval 2016 addressed the problem of temporal reasoning in clinical domain by providing annotated clinical notes and pathology reports similar to Clinical TempEval challenge 2015. The Clinical TempEval challenge consisted of six subtasks. Hitachi team participated in two time expression based subtasks: time expression span detection (TS) and time expression attribute identification (TA) for which we developed hybrid of rule-based and machine learning based methods using Stanford TokensRegex framework and Stanford Named Entity Recognizer and evaluated it on the THYME corpus. Our hybrid system achieved a maximum F-score of 0.73 for identification of time spans (TS) and 0.71 for identification of time attributes (TA).","label_nlp4sg":1,"task":null,"method":null,"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We thank Mayo clinic and clinical TempEval organizers for providing access to THYME corpus and other helps provided for our participation in the competition.","year":2016,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"sosea-caragea-2021-emlm","url":"https:\/\/aclanthology.org\/2021.acl-short.38","title":"eMLM: A New Pre-training Objective for Emotion Related Tasks","abstract":"Bidirectional Encoder Representations from Transformers (BERT) have been shown to be extremely effective on a wide variety of natural language processing tasks, including sentiment analysis and emotion detection. However, the proposed pre-training objectives of BERT do not induce any sentiment or emotion-specific biases into the model. In this paper, we present Emotion Masked Language Modeling, a variation of Masked Language Modeling, aimed at improving the BERT language representation model for emotion detection and sentiment analysis tasks. Using the same pre-training corpora as the original BERT model, Wikipedia and BookCorpus, our BERT variation manages to improve the downstream performance on 4 tasks for emotion detection and sentiment analysis by an average of 1.2% F1. Moreover, our approach shows an increased performance in our task-specific robustness tests. We make our code and pre-trained model available at https:\/\/github.com\/tsosea2\/eMLM.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank our anonymous reviewers for their constructive comments and feedback. This work is partially supported by the NSF Grants IIS-1912887 and IIS-1903963. 
Any opinions, findings, and conclusions expressed here are those of the authors and do not necessarily reflect the views of NSF. The computation for this project was performed on Amazon Web Services through a research grant.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"duong-etal-2014-get","url":"https:\/\/aclanthology.org\/D14-1096","title":"What Can We Get From 1000 Tokens? A Case Study of Multilingual POS Tagging For Resource-Poor Languages","abstract":"We unintentionally misrepresented Garrette et al. (2013) in the published version of this paper by stating that they required an external tag dictionary. We have corrected these inaccuracies to reflect their modest data requirements.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Dan Garrette, Jason Baldridge and Noah Smith for Malagasy and Kinyarwanda datasets. This work was supported by the University of Melbourne and National ICT Australia (NICTA). NICTA is funded by the Australian Federal and Victoria State Governments, and the Australian Research Council through the ICT Centre of Excellence program. Dr Cohn is the recipient of an Australian Research Council Future Fellowship (project number FT130101105).","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"ortega-etal-2019-adviser","url":"https:\/\/aclanthology.org\/P19-3016","title":"ADVISER: A Dialog System Framework for Education & Research","abstract":"In this paper, we present ADVISER, an open source dialog system framework for education and research purposes. This system supports multi-domain task-oriented conversations in two languages. It additionally provides a flexible architecture in which modules can be arbitrarily combined or exchanged-allowing for easy switching between rules-based and neural network based implementations. Furthermore, ADVISER offers a transparent, user-friendly framework designed for interdisciplinary collaboration: from a flexible back end, allowing easy integration of new features, to an intuitive graphical user interface supporting nontechnical users.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Quality Education","goal2":"Industry, Innovation and Infrastructure","goal3":null,"acknowledgments":"We would like to thank all the voluntary students at the University of Stuttgart for their participation in the evaluation. This work was funded by the Carl Zeiss Foundation.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"kuo-etal-2010-using","url":"https:\/\/aclanthology.org\/O10-5003","title":"Using Linguistic Features to Predict Readability of Short Essays for Senior High School Students in Taiwan","abstract":"We investigated the problem of classifying short essays used in comprehension tests for senior high school students in Taiwan. The tests were for first and second year students, so the answers included only four categories, each for one semester of the first two years. A random-guess approach would achieve only 25% in accuracy for our problem. We analyzed three publicly available scores for readability, but did not find them directly applicable.
By considering a wide array of features at the levels of word, sentence, and essay, we gradually improved the F measure achieved by our classifiers from 0.381 to 0.536.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":"The work was supported in part by the funding from the National Science Council in Taiwan under the contracts NSC-97-2221-004-007, NSC-98-2815-C-004-003-E, and NSC-99-2221-004-007. The authors would like to thank Miss Min-Hua Lai for her technical support in this study and Professor Zhao-Ming Gao for his comments on an earlier report (Kuo et al., 2009) ","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"hall-nemec-2007-generation","url":"https:\/\/aclanthology.org\/W07-0408","title":"Generation in Machine Translation from Deep Syntactic Trees","abstract":"In this paper we explore a generative model for recovering surface syntax and strings from deep-syntactic tree structures. Deep analysis has been proposed for a number of language and speech processing tasks, such as machine translation and paraphrasing of speech transcripts. In an effort to validate one such formalism of deep syntax, the Praguian Tectogrammatical Representation (TR), we present a model of synthesis for English which generates surface-syntactic trees as well as strings. We propose a generative model for function word insertion (prepositions, definite\/indefinite articles, etc.) and subphrase reordering. We show by way of empirical results that this model is effective in constructing acceptable English sentences given impoverished trees.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"voutilainen-1995-syntax","url":"https:\/\/aclanthology.org\/E95-1022","title":"A syntax-based part-of-speech analyser","abstract":"There are two main methodologies for constructing the knowledge base of a natural language analyser: the linguistic and the data-driven. Recent state-of-the-art part-of-speech taggers are based on the data-driven approach. Because of the known feasibility of the linguistic rule-based approach at related levels of description, the success of the data-driven approach in part-of-speech analysis may appear surprising. In this paper, a case is made for the syntactic nature of part-of-speech tagging. A new tagger of English that uses only linguistic distributional rules is outlined and empirically evaluated. Tested against a benchmark corpus of 38,000 words of previously unseen text, this syntax-based system reaches an accuracy of above 99%. Compared to the 95-97% accuracy of its best competitors, this result suggests the feasibility of the linguistic approach also in part-of-speech analysis.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I would like to thank Timo J\u00e4rvinen, Jussi Piitulainen, Pasi Tapanainen and two EACL referees for useful comments on an earlier version of this paper.
The usual disclaimers hold.","year":1995,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"mirovsky-etal-2012-tectogrammatics","url":"https:\/\/aclanthology.org\/C12-2083","title":"Does Tectogrammatics Help the Annotation of Discourse?","abstract":"In the following paper, we discuss and evaluate the benefits that deep syntactic trees (tectogrammatics) and all the rich annotation of the Prague Dependency Treebank bring to the process of annotating the discourse structure, i.e. discourse relations, connectives and their arguments. The decision to annotate discourse structure directly on the trees contrasts with the majority of similarly aimed projects, usually based on the annotation of linear texts. Our basic assumption is that some syntactic features of a sentence analysis correspond to certain discourselevel features. Hence, we use some properties of the dependency-based large-scale treebank of Czech to help establish an independent annotation layer of discourse. The question that we answer in the paper is how much did we gain by employing this approach.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We gratefully acknowledge support from the Grant Agency of the Czech Republic (grants P406\/12\/0658 and P406\/2010\/0875) and from the Ministry of Education, Youth and Sports in the Czech Republic, program KONTAKT (ME10018) and the LINDAT-Clarin project (LM2010013).","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"mihalcea-etal-2004-senseval","url":"https:\/\/aclanthology.org\/W04-0807","title":"The Senseval-3 English lexical sample task","abstract":"This paper presents the task definition, resources, participating systems, and comparative results for the English lexical sample task, which was organized as part of the SENSEVAL-3 evaluation exercise. The task drew the participation of 27 teams from around the world, with a total of 47 systems.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Many thanks to all those who contributed to the Open Mind Word Expert project, making this task possible. In particular, we are grateful to Gwen Lenker -our most productive contributor. We are also grateful to all the participants in this task, for their hard work and involvement in this evaluation exercise. 
Without them, all these comparative analyses would not be possible. We are indebted to the Princeton WordNet team, for making WordNet available free of charge, and to Robert Parks from Wordsmyth, for making available the verb entries used in this evaluation. We are particularly grateful to the National Science Foundation for their support under research grant IIS-0336793, and to the University of North Texas for a research grant that provided funding for contributor prizes.","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"calixto-etal-2017-human","url":"https:\/\/aclanthology.org\/W17-2004","title":"Human Evaluation of Multi-modal Neural Machine Translation: A Case-Study on E-Commerce Listing Titles","abstract":"In this paper, we study how humans perceive the use of images as an additional knowledge source to machine-translate user-generated product listings in an e-commerce company. We conduct a human evaluation where we assess how a multi-modal neural machine translation (NMT) model compares to two text-only approaches: a conventional state-of-the-art attention-based NMT and a phrase-based statistical machine translation (PBSMT) model. We evaluate translations obtained with different systems and also discuss the data set of user-generated product listings, which in our case comprises both product listings and associated images. We found that humans preferred translations obtained with a PBSMT system to both text-only and multi-modal NMT over 56% of the time. Nonetheless, human evaluators ranked translations from a multi-modal NMT model as better than those of a text-only NMT over 88% of the time, which suggests that images do help NMT in this use-case.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The ADAPT Centre for Digital Content Technology (www.adaptcentre.ie) at Dublin City University is funded under the Science Foundation Ireland Research Centres Programme (Grant 13\/RC\/2106) and is co-funded under the European Regional Development Fund.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"wang-huang-2011-compound","url":"https:\/\/aclanthology.org\/Y11-1054","title":"Compound Event Nouns of the `Modifier-head' Type in Mandarin Chinese","abstract":"Event nouns can lexically encode eventive information. Recently these nouns have generated considerable scholarly interest. However, little research has been conducted in their morphological and syntactic structure, qualia modification, event representing feature, and information inheritance characteristics. This study has these main findings. 1) Morphologically, the modifier and the head is either free or bound morpheme. Syntactically the modifier is a nominal, adjectival, verbal or numeral morpheme, while the head is a nominal morpheme. 2) The modifier acts as a qualia role of the head. 3) All heads represent events, while the modifier is or is not an event. 
4) The semantic information of a compound event noun can be inherited from the modifier or the head.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"georgiev-etal-2009-joint","url":"https:\/\/aclanthology.org\/W09-4503","title":"A Joint Model for Normalizing Gene and Organism Mentions in Text","abstract":"The aim of gene mention normalization is to propose an appropriate canonical name, or an identifier from a popular database, for a gene or a gene product mentioned in a given piece of text. The task has attracted a lot of research attention for several organisms under the assumption that both the mention boundaries and the target organism are known. Here we extend the task to also recognizing whether the gene mention is valid and to finding the organism it is from. We solve this extended task using a joint model for gene and organism name normalization which allows for instances from different organisms to share features, thus achieving sizable performance gains with different learning methods: Na\u00efve Bayes, Maximum Entropy, Perceptron and mira, as well as averaged versions of the last two. The evaluation results for our joint classifier show F1 score of over 97%, which proves the potential of the approach.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"The work reported in this paper was partially supported by the EU FP7 project 215535 LarKC.","year":2009,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"mahesh-etal-1997-flaunt","url":"https:\/\/aclanthology.org\/1997.tmi-1.1","title":"If you have it, flaunt it: using full ontological knowledge for word sense disambiguation","abstract":"Word sense disambiguation continues to be a difficult problem in natural language processing. Current methods, such as marker passing and spreading activation, for applying world knowledge in the form of selectional preferences to solve this problem do not make effective use of available knowledge. Moreover, their effectiveness decreases as the knowledge is made richer by acquiring more and more conceptual relationships. Effective resolution of word sense ambiguities requires inferring the dynamic context in processing a sentence in order to find the right selectional preferences to be applied. In this article, we propose such an inference operator and show how it finds the most specific context to resolve word sense ambiguities in the Mikrokosmos semantic analyzer. Our method retains its effectiveness even in a rich, large-scale knowledge base with a high degree of connectivity among its concepts.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"kwon-etal-2020-hierarchical","url":"https:\/\/aclanthology.org\/2020.coling-main.424","title":"Hierarchical Trivia Fact Extraction from Wikipedia Articles","abstract":"Recently, automatic trivia fact extraction has attracted much research interest. 
Modern search engines have begun to provide trivia facts as information for entities because they can motivate more user engagement. In this paper, we propose a new unsupervised algorithm that automatically mines trivia facts for a given entity. Unlike previous studies, the proposed algorithm targets a single Wikipedia article and leverages its hierarchical structure via top-down processing. Thus, the proposed algorithm offers two distinctive advantages: it does not incur high computation time, and it provides a domain-independent approach for extracting trivia facts. Experimental results demonstrate that the proposed algorithm is over 100 times faster than the existing method which considers Wikipedia categories. Human evaluation demonstrates that the proposed algorithm can mine better trivia facts regardless of the target entity domain and outperforms the existing methods.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"rykova-werner-2019-perceptual","url":"https:\/\/aclanthology.org\/W19-6127","title":"Perceptual and acoustic analysis of voice similarities between parents and young children","abstract":"Human voice provides the means for verbal communication and forms a part of personal identity. Due to genetic and environmental factors, a voice of a child should resemble the voice of her parent(s), but voice similarities between parents and young children are under-researched. Read-aloud speech of Finnish-speaking and Russian-speaking parent-child pairs was subject to perceptual and multi-step instrumental and statistical analysis. Finnish-speaking listeners could not discriminate family pairs auditorily in an XAB paradigm, but the Russian-speaking listeners' mean accuracy of answers reached 72.5%. On average, in both language groups family-internal f0 similarities were stronger than family-external, with parents showing greater family-internal similarities than children. Auditory similarities did not reflect acoustic similarities in a straightforward way.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"silverman-1989-microphone","url":"https:\/\/aclanthology.org\/H89-2063","title":"A Microphone Array System for Speech Recognition","abstract":"The ultimate speech recognizer cannot use an attached or desk-mounted microphone. Array techniques offer the opportunity to free a talker from microphone encumbrance. My goal is to develop algorithms and systems for this purpose.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1989,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"furuse-iida-1992-cooperation","url":"https:\/\/aclanthology.org\/C92-2097","title":"Cooperation between Transfer and Analysis in Example-Based Framework","abstract":"Transfer-Driven Machine Translation (TDMT) is presented as a method which drives the translation processes according to the nature of the input. 
In TDMT, transfer knowledge is the central knowledge of translation, and various kinds and levels of knowledge are cooperatively applied to input sentences. TDMT effectively utilizes an example-based framework for transfer and analysis knowledge. A consistent framework of examples makes the cooperation between transfer and analysis effective, and efficient translation is achieved. The TDMT prototype system, which translates Japanese spoken dialogs into English, has shown great promise.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I would like to thank the members of the ATR Interpreting Telephony Research Laboratories for their comments on various parts of this research. Special thanks are due to Dr. Kohei Habara, the chairman of the board of ATR Interpreting Telephony Research Laboratories, and Dr. Akira Kurematsu, the president of ATR Interpreting Telephony Research Laboratories, for their support of this research.","year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"henrich-etal-2012-webcage","url":"https:\/\/aclanthology.org\/E12-1039","title":"WebCAGe -- A Web-Harvested Corpus Annotated with GermaNet Senses","abstract":"This paper describes an automatic method for creating a domain-independent sense-annotated corpus harvested from the web. As a proof of concept, this method has been applied to German, a language for which sense-annotated corpora are still in short supply. The sense inventory is taken from the German wordnet GermaNet. The web-harvesting relies on an existing mapping of GermaNet to the German version of the web-based dictionary Wiktionary. The data obtained by this method constitute WebCAGe (short for: Web-Harvested Corpus Annotated with GermaNet Senses), a resource which currently represents the largest sense-annotated corpus available for German. While the present paper focuses on one particular language, the method as such is language-independent.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research reported in this paper was jointly funded by the SFB 833 grant of the DFG and by the CLARIN-D grant of the BMBF. We would like to thank Christina Hoppermann, Marie Hinrichs as well as three anonymous EACL 2012 reviewers for their helpful comments on earlier versions of this paper. We are very grateful to Reinhild Barkey, Sarah Schulz, and Johannes Wahle for their help with the evaluation reported in Section 5. Special thanks go to Yana Panchenko and Yannick Versley for their support with the webcrawler and to Emanuel Dima and Klaus Suttner for helping us to obtain the Gutenberg and Wikipedia texts.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"muller-etal-2000-inducing","url":"https:\/\/aclanthology.org\/P00-1029","title":"Inducing Probabilistic Syllable Classes Using Multivariate Clustering","abstract":"An approach to automatic detection of syllable structure is presented. We demonstrate a novel application of EM-based clustering to multivariate data, exemplified by the induction of 3- and 5-dimensional probabilistic syllable classes. The qualitative evaluation shows that the method yields phonologically meaningful syllable classes. 
We then propose a novel approach to grapheme-to-phoneme conversion and show that syllable structure represents valuable information for pronunciation systems.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"xiao-etal-2021-ernie","url":"https:\/\/aclanthology.org\/2021.naacl-main.136","title":"ERNIE-Gram: Pre-Training with Explicitly N-Gram Masked Language Modeling for Natural Language Understanding","abstract":"Coarse-grained linguistic information, such as named entities or phrases, facilitates adequate representation learning in pre-training. Previous works mainly focus on extending the objective of BERT's Masked Language Modeling (MLM) from masking individual tokens to contiguous sequences of n tokens. We argue that such a contiguous masking method neglects to model the intra-dependencies and interrelation of coarse-grained linguistic information. As an alternative, we propose ERNIE-Gram, an explicitly n-gram masking method to enhance the integration of coarse-grained information into pre-training. In ERNIE-Gram, n-grams are masked and predicted directly using explicit n-gram identities rather than contiguous sequences of n tokens. Furthermore, ERNIE-Gram employs a generator model to sample plausible n-gram identities as optional n-gram masks and predict them in both coarse-grained and fine-grained manners to enable comprehensive n-gram prediction and relation modeling. We pre-train ERNIE-Gram on English and Chinese text corpora and finetune on 19 downstream tasks. Experimental results show that ERNIE-Gram outperforms previous pre-training models like XLNet and RoBERTa by a large margin, and achieves comparable results with state-of-the-art methods. The source codes and pre-trained models have been released at https:\/\/github.com\/PaddlePaddle\/ERNIE.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Zhen Li for his constructive suggestions, and hope everything goes well with his work. We are also indebted to the NAACL-HLT reviewers for their detailed and insightful comments on our work.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"weller-di-marco-2017-simple","url":"https:\/\/aclanthology.org\/W17-1722","title":"Simple Compound Splitting for German","abstract":"This paper presents a simple method for German compound splitting that combines a basic frequency-based approach with a form-to-lemma mapping to approximate morphological operations. With the exception of a small set of hand-crafted rules for modeling transitional elements, our approach is resource-poor. 
In our evaluation, the simple splitter outperforms a splitter relying on rich morphological resources.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This project has received funding from the Euro- ","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"chen-etal-2013-human","url":"https:\/\/aclanthology.org\/I13-1182","title":"Human-Computer Interactive Chinese Word Segmentation: An Adaptive Dirichlet Process Mixture Model Approach","abstract":"Previous research shows that Kalman filter based human-computer interactive Chinese word segmentation achieves an encouraging effect in reducing user interventions, but suffers from the drawback of incompetence in distinguishing segmentation ambiguities. This paper proposes a novel approach to handle this problem by using an adaptive Dirichlet process mixture model. By adjusting the hyperparameters of the model, ideal classifiers can be generated to conform to the interventions provided by the users. Experiments reveal that our approach achieves a notable improvement in handling segmentation ambiguities. With knowledge learnt from users, our model outperforms the baseline Kalman filter model by about 0.5% in segmenting homogeneous texts.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Professor Sujian Li for her valuable advice on writing this paper. This work is partially supported by Open Project Program of the National Laboratory of Pattern Recognition (NLPR) and the Opening Project of Beijing Key Laboratory of Internet Culture and Digital Dissemination Research (ICDD201102).","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"bondale-sreenivas-2012-emotiphons","url":"https:\/\/aclanthology.org\/W12-5308","title":"Emotiphons: Emotion Markers in Conversational Speech - Comparison across Indian Languages","abstract":"In spontaneous speech, emotion information is embedded at several levels: acoustic, linguistic, gestural (non-verbal), etc. For emotion recognition in speech, there is much attention to acoustic level and some attention at the linguistic level. In this study, we identify paralinguistic markers for emotion in the language. We study two Indian languages belonging to two distinct language families. We consider Marathi from Indo-Aryan and Kannada from Dravidian family. We show that there exist large numbers of specific paralinguistic emotion markers in these languages, referred to as emotiphons. They are intertwined with prosody and semantics. Preprocessing of speech signal with respect to emotiphons would facilitate emotion recognition in speech for Indian languages. 
Some of them are common between the two languages, indicating cultural influence in language usage.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"simard-1999-text","url":"https:\/\/aclanthology.org\/W99-0602","title":"Text-Translation Alignment: Three Languages Are Better Than Two","abstract":"In this article, we show how a bilingual text-translation alignment method can be adapted to deal with more than two versions of a text. Experiments on a trilingual corpus demonstrate that this method yields better bilingual alignments than can be obtained with bilingual text-alignment methods. Moreover, for a given number of texts, the computational complexity of the multilingual method is the same as for bilingual alignment.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Many of the ideas expressed here emerged from informal exchanges with Fathi Debili and Pierre Isabelle; I am greatly indebted to both for their constant support throughout this project. I also wish to thank the anonymous reviewers for their constructive comments on the paper. ","year":1999,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"huang-etal-2020-texthide","url":"https:\/\/aclanthology.org\/2020.findings-emnlp.123","title":"TextHide: Tackling Data Privacy in Language Understanding Tasks","abstract":"An unsolved challenge in distributed or federated learning is to effectively mitigate privacy risks without slowing down training or reducing accuracy. In this paper, we propose TextHide aiming at addressing this challenge for natural language understanding tasks. It requires all participants to add a simple encryption step to prevent an eavesdropping attacker from recovering private text data. Such an encryption step is efficient and only affects the task performance slightly. In addition, TextHide fits well with the popular framework of fine-tuning pre-trained language models (e.g., BERT) for any sentence or sentence-pair task. We evaluate TextHide on the GLUE benchmark, and our experiments show that TextHide can effectively defend attacks on shared gradients or representations and the averaged accuracy reduction is only 1.9%. We also present an analysis of the security of TextHide using a conjecture about the computational intractability of a mathematical problem.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"This project is supported in part by the Graduate Fellowship at Princeton University, Ma Huateng Foundation, Schmidt Foundation, Simons Foundation, NSF, DARPA\/SRC, Google and Amazon AWS. 
Arora and Song were at the Institute for Advanced Study during this research.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} -{"ID":"nn-1960-questions-discussion-10","url":"https:\/\/aclanthology.org\/1960.earlymt-nsmt.58","title":"Questions and Discussion 10","abstract":"The statement was implied that, with the aid of compilers, a linguist who did not know the machine would be able to sit down and write his program in such a way that he would have a successful running program.\nOur experience with automatic programming in the area of scientific programming seems to indicate that the man has to know the machine, otherwise he is going to get himself into a lot of trouble.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1960,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"li-etal-2020-mapping","url":"https:\/\/aclanthology.org\/2020.acl-main.729","title":"Mapping Natural Language Instructions to Mobile UI Action Sequences","abstract":"We present a new problem: grounding natural language instructions to mobile user interface actions, and create three new datasets for it. For full task evaluation, we create PIXELHELP, a corpus that pairs English instructions with actions performed by people on a mobile UI emulator. To scale training, we decouple the language and action data by (a) annotating action phrase spans in HowTo instructions and (b) synthesizing grounded descriptions of actions for mobile user interfaces. We use a Transformer to extract action phrase tuples from long-range natural language instructions. A grounding Transformer then contextually represents UI objects using both their content and screen position and connects them to object descriptions. Given a starting screen and instruction, our model achieves 70.59% accuracy on predicting complete ground-truth action sequences in PIXELHELP.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank our anonymous reviewers for their insightful comments that improved the paper. Many thanks to the Google Data Compute team, especially Ashwin Kakarla and Muqthar Mohammad for their help with the annotations, and Song Wang, Justin Cui and Christina Ou for their help on early data preprocessing.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"meng-etal-2021-empirical","url":"https:\/\/aclanthology.org\/2021.naacl-main.396","title":"An Empirical Study on Neural Keyphrase Generation","abstract":"Recent years have seen a flourishing of neural keyphrase generation (KPG) works, including the release of several large-scale datasets and a host of new models to tackle them. Model performance on KPG tasks has increased significantly with evolving deep learning research. However, there is a lack of a comprehensive comparison among different model designs, and a thorough investigation on related factors that may affect a KPG system's generalization performance. In this empirical study, we aim to fill this gap by providing extensive experimental results and analyzing the most crucial factors impacting the generalizability of KPG models. 
We hope this study can help clarify some of the uncertainties surrounding the KPG task and facilitate future research on this topic.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"RM was supported by the Amazon Research Awards for the project \"Transferable, Controllable, Applicable Keyphrase Generation\". This research was partially supported by the University of Pittsburgh Center for Research Computing through the resources provided. The authors thank the anonymous NAACL reviewers for their helpful feedback and suggestions.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"chae-2013-myths","url":"https:\/\/aclanthology.org\/Y13-1054","title":"Myths in Korean Morphology and Their Computational Implications","abstract":"This paper examines some popular misanalyses in Korean morphology. For example, contrary to popular myth, the verbal ha- and the element -(nu)n- cannot be analyzed as a derivational affix and as a present tense marker, respectively. We will see that ha- is an independent word and that -(nu)n- is part of a portmanteau morph. In providing reasonable analyses of them, we will consider some computational implications of the misanalyses. It is really mysterious that such wrong analyses can become so popular in a scientific field of linguistics.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are thankful to the anonymous reviewers, whose valuable comments have been very helpful in improving the quality of this paper. This work was supported by a 2013 research grant from Hankuk University of Foreign Studies.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"volk-2006-bad","url":"https:\/\/aclanthology.org\/W06-2112","title":"How Bad is the Problem of PP-Attachment? A Comparison of English, German and Swedish","abstract":"The correct attachment of prepositional phrases (PPs) is a central disambiguation problem in parsing natural languages. This paper compares the baseline situation in English, German and Swedish based on manual PP attachments in various treebanks for these languages. We argue that cross-language comparisons of the disambiguation results in previous research is impossible because of the different selection procedures when building the training and test sets. We perform uniform treebank queries and show that English has the highest noun attachment rate followed by Swedish and German. We also show that the high rate in English is dominated by the preposition of. From our study we derive a list of criteria for profiling data sets for PP attachment experiments.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"zosa-granroth-wilding-2019-multilingual","url":"https:\/\/aclanthology.org\/R19-1159","title":"Multilingual Dynamic Topic Model","abstract":"Dynamic topic models (DTMs) capture the evolution of topics and trends in time series data. Current DTMs are applicable only to monolingual datasets. 
In this paper we present the multilingual dynamic topic model (ML-DTM), a novel topic model that combines DTM with an existing multilingual topic modeling method to capture cross-lingual topics that evolve across time. We present results of this model on a parallel German-English corpus of news articles and a comparable corpus of Finnish and Swedish news articles. We demonstrate the capability of ML-DTM to track significant events related to a topic and show that it finds distinct topics and performs as well as existing multilingual topic models in aligning cross-lingual topics.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been supported by the European Union's Horizon 2020 research and innovation programme under grants 770299 (NewsEye) and 825153 (EMBEDDIA).","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"milewski-etal-2020-scene","url":"https:\/\/aclanthology.org\/2020.aacl-main.50","title":"Are Scene Graphs Good Enough to Improve Image Captioning?","abstract":"Many top-performing image captioning models rely solely on object features computed with an object detection model to generate image descriptions. However, recent studies propose to directly use scene graphs to introduce information about object relations into captioning, hoping to better describe interactions between objects. In this work, we thoroughly investigate the use of scene graphs in image captioning. We empirically study whether using additional scene graph encoders can lead to better image descriptions and propose a conditional graph attention network (C-GAT), where the image captioning decoder state is used to condition the graph updates. Finally, we determine to what extent noise in the predicted scene graphs influences caption quality. Overall, we find no significant difference between models that use scene graph features and models that only use object detection features across different captioning metrics, which suggests that existing scene graph generation models are still too noisy to be useful in image captioning. Moreover, although the quality of predicted scene graphs is very low in general, when using high quality scene graphs we obtain gains of up to 3.3 CIDEr compared to a strong Bottom-Up Top-Down baseline.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the COST Action CA18231 for funding a research visit to collaborate on this project. This work is funded by the European Research Council (ERC) under the ERC Advanced Grant 788506. 
IC has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sk\u0142odowska-Curie grant agreement No 838188.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"declerck-etal-2018-integrated","url":"https:\/\/aclanthology.org\/L18-1094","title":"An Integrated Formal Representation for Terminological and Lexical Data included in Classification Schemes","abstract":"This paper presents our work dealing with a potential application in e-lexicography: the automatized creation of specialized multilingual dictionaries from structured data, which are available in the form of comparable multilingual classification schemes or taxonomies. As starting examples, we use comparable industry classification schemes, which frequently occur in the context of stock exchanges and business reports. Initially, we planned to follow an approach based on cross-taxonomies and cross-languages string mapping to automatically detect candidate multilingual dictionary entries for this specific domain. However, the need to first transform the comparable classification schemes into a shared formal representation language in order to be able to properly align their components before implementing the algorithms for the multilingual lexicon extraction soon became apparent. We opted for the SKOS-XL vocabulary for modelling the multilingual terminological part of the comparable taxonomies and for OntoLex-Lemon for modelling the multilingual lexical entries which can be extracted from the original data. In this paper, we present the suggested modelling architecture, which demonstrates how terminological elements and lexical items can be formally integrated and explicitly cross-linked in the context of the Linguistic Linked Open Data (LLOD).","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"vincze-2013-weasels","url":"https:\/\/aclanthology.org\/I13-1044","title":"Weasels, Hedges and Peacocks: Discourse-level Uncertainty in Wikipedia Articles","abstract":"Uncertainty is an important linguistic phenomenon that is relevant in many areas of language processing. While earlier research mostly concentrated on the semantic aspects of uncertainty, here we focus on discourse-and pragmaticsrelated aspects of uncertainty. We present a classification of such linguistic phenomena and introduce a corpus of Wikipedia articles in which the presented types of discourse-level uncertainty-weasel, hedge and peacock-have been manually annotated. 
We also discuss some experimental results on discourse-level uncertainty detection.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by the European Union and the European Social Fund through the project FuturICT.hu (grant no.: T\u00c1MOP-4.2.2.C-11\/1\/KONV-2012-0013).","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"church-1988-stochastic","url":"https:\/\/aclanthology.org\/A88-1019","title":"A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text","abstract":"It is well-known that part of speech depends on context. The word \"table,\" for example, can be a verb in some contexts (e.g., \"He will table the motion\") and a noun in others (e.g., \"The table is ready\"). A program has been written which tags each word in an input sentence with the most likely part of speech. The program produces the following output for the two \"table\" sentences just mentioned:\n\u2022 He\/PPS will\/MD table\/VB the\/AT motion\/NN .\/.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1988,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"marimon-etal-2017-annotation","url":"https:\/\/aclanthology.org\/W17-1807","title":"Annotation of negation in the IULA Spanish Clinical Record Corpus","abstract":"This paper presents the IULA Spanish Clinical Record Corpus, a corpus of 3,194 sentences extracted from anonymized clinical records and manually annotated with negation markers and their scope. The corpus was conceived as a resource to support clinical text-mining systems, but it is also a useful resource for other Natural Language Processing systems handling clinical texts: automatic encoding of clinical records, diagnosis support, term extraction, among others, as well as for the study of clinical texts. The corpus is publicly available with a CC-BY-SA 3.0 license.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We want to acknowledge the support of Dra. Pilar Bel-Rafecas, clinician, and the comments and suggestions of the two anonymous reviewers that have contributed to improve the final version of this paper. This work was partially supported by the project TUNER (TIN2015-65308-C5-1-R, MINECO\/FEDER)","year":2017,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"giovanni-moller-etal-2020-nlp","url":"https:\/\/aclanthology.org\/2020.wnut-1.44","title":"NLP North at WNUT-2020 Task 2: Pre-training versus Ensembling for Detection of Informative COVID-19 English Tweets","abstract":"With the COVID-19 pandemic raging worldwide since the beginning of the 2020 decade, the need for monitoring systems to track relevant information on social media is vitally important. This paper describes our submission to the WNUT-2020 Task 2: Identification of informative COVID-19 English Tweets. We investigate the effectiveness of a variety of classification models, and found that domain-specific pre-trained BERT models lead to the best performance. 
On top of this, we attempt a variety of ensembling strategies, but these attempts did not lead to further improvements. Our final best model, the standalone CT-BERT model, proved to be highly competitive, leading to a shared first place in the shared task. Our results emphasize the importance of domain and task-related pre-training.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We would like to thank the organizers for this shared task. Part of this research is supported by a grant from Danmarks Frie Forskningsfond (9063-00077B).","year":2020,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"lui-cook-2013-classifying","url":"https:\/\/aclanthology.org\/U13-1003","title":"Classifying English Documents by National Dialect","abstract":"We investigate national dialect identification, the task of classifying English documents according to their country of origin. We use corpora of known national origin as a proxy for national dialect. In order to identify general (as opposed to corpus-specific) characteristics of national dialects of English, we make use of a variety of corpora of different sources, with inter-corpus variation in length, topic and register. The central intuition is that features that are predictive of national origin across different data sources are features that characterize a national dialect. We examine a number of classification approaches motivated by different areas of research, and evaluate the performance of each method across 3 national dialects: Australian, British, and Canadian English. Our results demonstrate that there are lexical and syntactic characteristics of each national dialect that are consistent across data sources.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"hassan-etal-2008-tracking","url":"https:\/\/aclanthology.org\/C08-1040","title":"Tracking the Dynamic Evolution of Participants Salience in a Discussion","abstract":"We introduce a technique for analyzing the temporal evolution of the salience of participants in a discussion. Our method can dynamically track how the relative importance of speakers evolves over time using graph based techniques. Speaker salience is computed based on the eigenvector centrality in a graph representation of participants in a discussion. Two participants in a discussion are linked with an edge if they use similar rhetoric. The method is dynamic in the sense that the graph evolves over time to capture the evolution inherent to the participants salience. We used our method to track the salience of members of the US Senate using data from the US Congressional Record. Our analysis investigated how the salience of speakers changes over time. Our results show that the scores can capture speaker centrality in topics as well as events that result in change of salience or influence among different participants.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Partnership for the goals","goal2":null,"goal3":null,"acknowledgments":"This paper is based upon work supported by the National Science Foundation under Grant No. 
0527513, \"DHB: The dynamics of Political Representation and Political Rhetoric\". Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the National Science Foundation.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":1} -{"ID":"swanson-etal-2013-context","url":"https:\/\/aclanthology.org\/P13-1030","title":"A Context Free TAG Variant","abstract":"We propose a new variant of Tree-Adjoining Grammar that allows adjunction of full wrapping trees but still bears only context-free expressivity. We provide a transformation to context-free form, and a further reduction in probabilistic model size through factorization and pooling of parameters. This collapsed context-free form is used to implement efficient grammar estimation and parsing algorithms. We perform parsing experiments the Penn Treebank and draw comparisons to Tree-Substitution Grammars and between different variations in probabilistic model design. Examination of the most probable derivations reveals examples of the linguistically relevant structure that our variant makes possible.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"kim-etal-2019-qe","url":"https:\/\/aclanthology.org\/W19-5407","title":"QE BERT: Bilingual BERT Using Multi-task Learning for Neural Quality Estimation","abstract":"For translation quality estimation at word and sentence levels, this paper presents a novel approach based on BERT that recently has achieved impressive results on various natural language processing tasks. Our proposed model is re-purposed BERT for the translation quality estimation and uses multi-task learning for the sentence-level task and word-level subtasks (i.e., source word, target word, and target gap). Experimental results on Quality Estimation shared task of WMT19 show that our systems show competitive results and provide significant improvements over the baseline.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"lyu-etal-1998-large","url":"https:\/\/aclanthology.org\/O98-1006","title":"A Large-Vocabulary Taiwanese (Min-nan) Speech Recognition System Based on Inter-syllabic Initial-Final Modeling and Lexicon-Tree Search","abstract":null,"label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"yang-etal-2020-ggp","url":"https:\/\/aclanthology.org\/2020.lrec-1.581","title":"GGP: Glossary Guided Post-processing for Word Embedding Learning","abstract":"Word embedding learning is the task to map each word into a low-dimensional and continuous vector based on a large corpus. 
To enhance corpus-based word embedding models, researchers utilize domain knowledge to learn more distinguishable representations via joint optimization and post-processing based models. However, joint optimization based models require much training time. Existing post-processing models mostly consider semantic knowledge so that learned embedding models show less functional information. Compared with semantic knowledge sources, a glossary is a comprehensive linguistic resource which contains complete semantics. The previous glossary-based post-processing method only processed words that occur in the glossary, and did not distinguish multiple senses of each word. In this paper, to make better use of glossary, we utilize an attention mechanism to integrate multiple sense representations which are learned respectively. By measuring similarity between word representation and combined sense representation, we aim to capture more topical and functional information. We propose the GGP (Glossary Guided Post-processing word embedding) model, which consists of a global post-processing function to fine-tune each word vector, and an auto-encoding model to learn sense representations; furthermore, it constrains each post-processed word representation and the composition of its sense representations to be similar. We evaluate our model by comparing it with two state-of-the-art models on six word topical\/functional similarity datasets, and the results show that it outperforms competitors by an average of 4.1% across all datasets. And our model outperforms GloVe by more than 7%.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work was supported by PolyU Teaching Development with project code 1.61.xx.9A5V and Hong Kong Collaborative Research Fund with project code C5026-18G.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"ebrahimi-saniee-abadeh-2012-new","url":"https:\/\/aclanthology.org\/W12-4101","title":"A New Parametric Estimation Method for Graph-based Clustering","abstract":"Relational clustering has received much attention from researchers in the last decade. In this paper we present a parametric method that employs a combination of both hard and soft clustering. Based on the corresponding Markov chain of an affinity matrix, we simulate a probability distribution on the states by defining a conditional probability for each subpopulation of states. This probabilistic model would enable us to use expectation maximization for parameter estimation. The effectiveness of the proposed approach is demonstrated on several real datasets against spectral clustering methods.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"maheshwari-etal-2021-scibert","url":"https:\/\/aclanthology.org\/2021.sdp-1.17","title":"SciBERT Sentence Representation for Citation Context Classification","abstract":"This paper describes our system (IREL) for the 3C-Citation Context Classification shared task of the Scholarly Document Processing Workshop at NAACL 2021 (Suchetha N Kunnath and Knoth, 2021). We participated in both subtask A and subtask B. 
Our best system achieved a Macro F1 score of 0.26973 on the private leaderboard for subtask A and was ranked first. For subtask B, our best system achieved a Macro F1 score of 0.59071 on the private leaderboard and was ranked second. We used similar models for both subtasks with some minor changes, as discussed in this paper. Our best performing model for both subtasks was a fine-tuned SciBERT model followed by a linear layer. We provide a detailed description of all the approaches we tried and their results. The code can be found at https:\/\/github.com\/bhavyajeet\/3c-citation_text_classification","label_nlp4sg":1,"task":null,"method":null,"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"chen-etal-2020-listeners","url":"https:\/\/aclanthology.org\/2020.inlg-1.26","title":"Listener's Social Identity Matters in Personalised Response Generation","abstract":"Personalised response generation enables generating human-like responses by means of assigning the generator a social identity. However, pragmatics theory suggests that human beings adjust the way of speaking based on not only who they are but also whom they are talking to. In other words, when modelling personalised dialogues, it might be favourable if we also take the listener's social identity into consideration. To validate this idea, we use gender as a typical example of a social variable to investigate how the listener's identity influences the language used in Chinese dialogues on social media. Also, we build personalised generators. The experiment results demonstrate that the listener's identity indeed matters in the language use of responses and that the response generator can capture such differences in language use. More interestingly, by additionally modelling the listener's identity, the personalised response generator performs better in its own identity.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their helpful comments. Guanyi Chen is supported by China Scholarship Council (No.201907720022).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"choukri-etal-2004-network","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/797.pdf","title":"Network of Data Centres (NetDC): BNSC - An Arabic Broadcast News Speech Corpus","abstract":"Broadcast news is a very rich source of Language Resources that has been exploited to develop and assess a large set of Human Language Technologies. Some examples include systems to: automatically produce text transcriptions of spoken data; identify the language of a text; translate a text from one language to another; identify topics in the news and retrieve all stories discussing a target topic; retrieve stories directly from the broadcast audio and extract summaries of the content of news stories. BNSC is a broadcast news speech corpus developed in the framework of the European-funded project Network of Data Centres (NetDC). The corpus contains more than 20 hours of Arabic news recordings in modern standard Arabic. The news was recorded over a period of 3 months and transcribed in Arabic script. 
The project was done in cooperation with the LDC (Linguistic Data Consortium), which has produced a similar corpus of its Voice of America Arabic in the United States. This paper presents the BNSC corpus production from data collection to final product.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"estival-etal-2014-austalk","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/520_Paper.pdf","title":"AusTalk: an audio-visual corpus of Australian English","abstract":"This paper describes the AusTalk corpus, which was designed and created through the Big ASC, a collaborative project with the two main goals of providing a standardised infrastructure for audiovisual recordings in Australia and of producing a large audiovisual corpus of Australian English, with 3 hours of AV recordings for 1000 speakers. We first present the overall project, then describe the corpus itself and its components, the strict data collection protocol with high levels of standardisation and automation, and the processes put in place for quality control. We also discuss the annotation phase of the project, along with its goals and challenges; a major contribution of the project has been to explore procedures for automating annotations and we present our solutions. We conclude with the current status of the corpus and with some examples of research already conducted with this new resource. AusTalk is one of the corpora included in the Alveo Virtual Lab, which is briefly sketched in the conclusion.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We gratefully acknowledge financial and\/or in-kind assistance of the Australian Research Council (LE100100211), ASSTA; the Universities of Western Sydney, Canberra, Melbourne, NSW, Queensland, Sydney, Tasmania and Western Australia; Macquarie, Australian National, and Flinders Universities; and the Max Planck Institute for Psycholinguistics, Nijmegen.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"lu-2007-hybrid","url":"https:\/\/aclanthology.org\/N07-1024","title":"Hybrid Models for Semantic Classification of Chinese Unknown Words","abstract":"This paper addresses the problem of classifying Chinese unknown words into fine-grained semantic categories defined in a Chinese thesaurus. We describe three novel knowledge-based models that capture the relationship between the semantic categories of an unknown word and those of its component characters in three different ways. We then combine two of the knowledge-based models with a corpus-based model which classifies unknown words using contextual information. 
Experiments show that the knowledge-based models outperform previous methods on the same task, but the use of contextual information does not further improve performance.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"smadja-etal-1996-translating","url":"https:\/\/aclanthology.org\/J96-1001","title":"Translating Collocations for Bilingual Lexicons: A Statistical Approach","abstract":"Collocations are notoriously difficult for non-native speakers to translate, primarily because they are opaque and cannot be translated on a word-by-word basis. We describe a program named Champollion which, given a pair of parallel corpora in two different languages and a list of collocations in one of them, automatically produces their translations. Our goal is to provide a tool for compiling bilingual lexical information above the word level in multiple languages, for different domains. The algorithm we use is based on statistical methods and produces p-word translations of n-word collocations in which n and p need not be the same. For example, Champollion translates make ... decision, employment equity, and stock market into prendre ... d\u00e9cision, \u00e9quit\u00e9 en mati\u00e8re d'emploi, and bourse respectively. Testing Champollion on three years' worth of the Hansards corpus yielded the French translations of 300 collocations for each year, evaluated at 73% accuracy on average. In this paper, we describe the statistical measures used, the algorithm, and the implementation of Champollion, presenting our results and evaluation.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported jointly by the Advanced Research Projects Agency and the Office of Naval Research under grant N00014-89-J-1782, by the Office of Naval Research under grant N00014-95-1-0745, by the National Science Foundation under grant GER-90-24069, and by the New York State Science and Technology Foundation under grants NYSSTF-CAT(91)-053 and NYSSTF-CAT(94)-013. We wish to thank Pascale Fung and Dragomir Radev for serving as evaluators, Thanasis Tsantilas for discussions relating to the average-case complexity of Champollion, and the anonymous reviewers for providing useful comments on an earlier version of the paper. We also thank Ofer Wainberg for his excellent work on improving the efficiency of Champollion and for adding the preposition extension, and Ken Church and AT&T Bell Laboratories for providing us with a prealigned Hansards corpus.","year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"veselovska-hajic-jr-2013-words","url":"https:\/\/aclanthology.org\/W13-4101","title":"Why Words Alone Are Not Enough: Error Analysis of Lexicon-based Polarity Classifier for Czech","abstract":"Lexicon-based classifier is in the long term one of the main and most effective methods of polarity classification used in sentiment analysis, i.e. computational study of opinions, sentiments and emotions expressed in text (see Liu, 2010). Although it achieves relatively good results also for Czech, the classifier still shows some error rate. This paper provides a detailed analysis of such errors caused both by the system and by human reviewers. 
The identified errors are representative of the challenges faced by the entire area of opinion mining. Therefore, the analysis is essential for further research in the field and serves as a basis for meaningful improvements of the system.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"van-noord-bouma-2009-parsed","url":"https:\/\/aclanthology.org\/W09-0107","title":"Parsed Corpora for Linguistics","abstract":"Knowledge-based parsers are now accurate, fast and robust enough to be used to obtain syntactic annotations for very large corpora fully automatically. We argue that such parsed corpora are an interesting new resource for linguists. The argument is illustrated by means of a number of recent results which were established with the help of parsed corpora.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was carried out in part in the context of the STEVIN programme which is funded by the Dutch and Flemish governments","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"biatov-kohler-2002-methods","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2002\/pdf\/176.pdf","title":"Methods and Tools for Speech Data Acquisition exploiting a Database of German Parliamentary Speeches and Transcripts from the Internet","abstract":"This paper describes methods that exploit stenographic transcripts of the German parliament to improve the acoustic models of a speech recognition system for this domain. The stenographic transcripts and the speech data are available on the Internet. Using data from the Internet makes it possible to avoid the costly process of the collection and annotation of a huge amount of data. The automatic data acquisition technique works using the stenographic transcripts and acoustic data from the German parliamentary speeches plus general acoustic models, trained on different data. The idea of this technique is to generate special finite state automata from the stenographic transcripts. These finite state automata simulate possible correspondences between the stenographic transcript and the spoken audio content, i.e., the accurate transcript. The first step is the recognition of the speech data using a finite state automaton as a language model. The next step is to find, extract, and verify the match between sections of recognized words and the actually spoken audio content. After this, the automatically extracted and verified data can be used for acoustic model training.
Experiments show that, for a given recognition task from the German Parliament domain, the absolute decrease in the word error rate is 20%.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"This work was funded by the German Federal Ministry for Research and Education.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} -{"ID":"hatori-suzuki-2011-japanese","url":"https:\/\/aclanthology.org\/I11-1014","title":"Japanese Pronunciation Prediction as Phrasal Statistical Machine Translation","abstract":"This paper addresses the problem of predicting the pronunciation of Japanese text. The difficulty of this task lies in the high degree of ambiguity in the pronunciation of Japanese characters and words. Previous approaches have either considered the task as a word-level classification problem based on a dictionary, which does not fare well in handling out-of-vocabulary (OOV) words; or solely focused on the pronunciation prediction of OOV words without considering the contextual disambiguation of word pronunciations in text. In this paper, we propose a unified approach within the framework of phrasal statistical machine translation (SMT) that combines the strengths of the dictionary-based and substring-based approaches. Our approach is novel in that we combine word- and character-based pronunciations from a dictionary within an SMT framework: the former captures the idiosyncratic properties of word pronunciation, while the latter provides the flexibility to predict the pronunciation of OOV words. We show that, based on an extensive evaluation on various test sets, our model significantly outperforms the previous state-of-the-art systems, achieving around 90% accuracy in most domains.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to Graham Neubig for providing us with detailed information on KyTea, and to anonymous reviewers for useful comments.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"cai-yates-2013-semantic","url":"https:\/\/aclanthology.org\/S13-1045","title":"Semantic Parsing Freebase: Towards Open-domain Semantic Parsing","abstract":"Existing semantic parsing research has steadily improved accuracy on a few domains and their corresponding databases. This paper introduces FreeParser, a system that trains on one domain and one set of predicate and constant symbols, and then can parse sentences for any new domain, including sentences that refer to symbols never seen during training. FreeParser uses a domain-independent architecture to automatically identify sentences relevant to each new database symbol, which it uses to supplement its manually-annotated training data from the training domain. In cross-domain experiments involving 23 domains, FreeParser can parse sentences for which it has seen comparable unannotated sentences with an F1 of 0.71.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This material is based upon work supported by the National Science Foundation under Grant No. IIS-1218692.
We wish to thank Sophia Kohlhaas and Ragine Williams for providing data for the project.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"uzan-hacohen-kerner-2020-jct","url":"https:\/\/aclanthology.org\/2020.semeval-1.266","title":"JCT at SemEval-2020 Task 12: Offensive Language Detection in Tweets Using Preprocessing Methods, Character and Word N-grams","abstract":"In this paper, we describe our submissions to the SemEval-2020 contest. We tackled subtask 12, \"Multilingual Offensive Language Identification in Social Media\". We developed different models for four languages: Arabic, Danish, Greek, and Turkish. We applied three supervised machine learning methods using various combinations of character and word n-gram features. In addition, we applied various combinations of basic preprocessing methods. Our best submission was a model we built for offensive language identification in Danish using Random Forest. This model was ranked at the 6th position out of 39 submissions. Our result is lower by only 0.0025 than the result of the team that won the 4th place using entirely non-neural methods. Our experiments indicate that character n-gram features are more helpful than word n-gram features. This phenomenon probably occurs because tweets are more characterized by characters than by words, tweets are short, and contain various special sequences of characters, e.g., hashtags, shortcuts, slang words, and typos.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} -{"ID":"zhang-etal-2022-niutranss","url":"https:\/\/aclanthology.org\/2022.iwslt-1.19","title":"The NiuTrans's Submission to the IWSLT22 English-to-Chinese Offline Speech Translation Task","abstract":"This paper describes NiuTrans's submission to the IWSLT22 English-to-Chinese (En-Zh) offline speech translation task. The end-to-end and bilingual system is built with constrained English and Chinese data and translates the English speech to Chinese text without intermediate transcription. Our speech translation models are composed of different pre-trained acoustic models and machine translation models, connected by two kinds of adapters. We compared the effect of the standard speech feature (e.g. log Mel-filterbank) and the pre-training speech feature and tried to make them interact. The final submission is an ensemble of three potential speech translation models. Our single best and ensemble models achieve 18.66 BLEU and 19.35 BLEU respectively on the MuST-C En-Zh tst-COMMON set.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by the National Science Foundation of China (Nos. 61732005 and 61876035), the China HTRD Center Project (No. 2020AAA0107904) and Yunnan Provincial Major Science and Technology Special Plan Projects (Nos. 201902D08001905 and 202103AA080015). The authors would like to thank the anonymous reviewers for their valuable comments.
We thank Hao Chen and Jie Wang for processing the data.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"toral-way-2015-translating","url":"https:\/\/aclanthology.org\/W15-0714","title":"Translating Literary Text between Related Languages using SMT","abstract":"We explore the feasibility of applying machine translation (MT) to the translation of literary texts. To that end, we measure the translatability of literary texts by analysing parallel corpora and measuring the degree of freedom of the translations and the narrowness of the domain. We then explore the use of domain adaptation to translate a novel between two related languages, Spanish and Catalan. This is the first time that specific MT systems are built to translate novels. Our best system outperforms a strong baseline by 4.61 absolute points (9.38% relative) in terms of BLEU, a result corroborated by other automatic evaluation metrics. We provide evidence that MT can be useful to assist with the translation of novels between closely-related languages, namely (i) the translations produced by our best system are equal to the ones produced by a professional human translator in almost 20% of cases with an additional 10% requiring at most 5 character edits, and (ii) a complementary human evaluation shows that over 60% of the translations are perceived to be of the same (or even higher) quality by native speakers.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is supported by the European Union Seventh Framework Programme FP7\/2007-2013 under grant agreement PIAP-GA-2012-324414 (Abu-MaTran) and by Science Foundation Ireland through the CNGL Programme (Grant 12\/CE\/I2267) in the ADAPT Centre (www.adaptcentre.ie) at Dublin City University.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"zmandar-etal-2021-financial","url":"https:\/\/aclanthology.org\/2021.fnp-1.22","title":"The Financial Narrative Summarisation Shared Task FNS 2021","abstract":"This paper presents the results and findings of the Financial Narrative Summarisation Shared Task on summarising UK annual reports. The shared task was organised as part of the Financial Narrative Processing 2021 Workshop (FNP 2021 Workshop). The shared task included one main task, which is the use of either abstractive or extractive automatic summarisers to summarise long documents in the form of UK financial annual reports. This shared task is the second to target financial documents. The data for the shared task was created and collected from publicly available UK annual reports published by firms listed on the London Stock Exchange. A total of 10 systems from 5 different teams participated in the shared task.
In addition, we had two baseline and two topline summarisers to help evaluate the results of the participating teams and compare them to the state-of-the-art systems.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"smart-2006-smart","url":"https:\/\/aclanthology.org\/2006.claw-1.2","title":"SMART Controlled English -- Paper and Demonstration","abstract":"The trend to globalization and \"outsourcing\" presents a major linguistic challenge. This paper presents a proven methodology to use SMART Controlled English to write technical documentation for global communications. Today, large corporations must adjust their business practices to communicate more effectively across all time zones and 80 languages. The use of SMART Controlled English, when coupled with Statistical Machine Translation (SMT), will become an ideal method to cross the language barrier. Introduction: The trend to globalization presents a major linguistic challenge for large and small companies. To add to this trend, most products require a high degree of computer literacy for operation and maintenance. For example, most automobiles are welded by robots, not humans. Also, the advent of \"outsourcing\" has expanded the ring of communications. The biggest problem is that most technical manuals are not written by professional technical writers, but by engineers who are the subject matter experts. Many advanced products, like those found in the telecommunications industry, update their technology every six months. Today, many cell phone (mobile phone) users in China update their handsets every four months to get new features. Unknown to most users, the information needed to control ring tones is some 250,000 pages of complex software documentation. The instructions to repair a complex jet engine can amount to more than 500,000 pages. According to Boeing, if all their aircraft manuals were printed and stacked end-to-end, the stack would reach the top of Mt. Everest and back. These mountains of manuals are further compounded by the need for language translations. For example, companies like Microsoft and IBM localize their software and documentation in 70 languages. A small company seeking compliance with the European Union directives is faced with 20 languages. The expansion of both NATO and the EU adds more languages. Unfortunately, the demand for professional technical translators far exceeds the supply. What is the solution? Many companies have found that a controlled language approach can reach across the language boundaries with a common language. This paper and on-line demonstration http:\/\/www.smartny.com\/ControlledEnglish\/CLAW06 show how to create and use a Controlled English dictionary. Examples of Controlled English ASD-STE100 Simplified Technical English This example shows the original text on the left side and the simplification for global aerospace markets. Note the use of a bulleted list instead of a dense block of text. The Simplified Technical English is easier to read, write and learn as a second language. SMART Controlled English-Telecommunications Documentation This example shows the original text on the left and the Controlled English for a telecommunications product on the right.
In this example, the gobbledygook is removed and technical information is easier to find and comprehend. SMART Controlled English-Medical Devices This example shows the original text on the left and the Controlled English for a medical device on the right. In this example, the original is written by an engineer and then simplified for a service technician. The Controlled English offers a 30% saving in text and later localization costs.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"chiang-2004-uses","url":"https:\/\/aclanthology.org\/W04-3302","title":"Uses and abuses of intersected languages","abstract":"In this paper we discuss the use of intersection as a tool for modeling syntactic phenomena and folding of biological molecules. We argue that intersection is useful but easily overestimated, because intersection coordinates grammars via their string languages, and if strong generative capacity is given priority over weak generative capacity, this kind of coordination turns out to be rather limited. We give two example uses of intersection which overstep this limit, one using CFGs and one using a range concatenation grammar (RCG). We conclude with an analysis and example of the different kinds of parallelism available in an RCG.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by NSF ITR grant EIA-02-05456. I would like to thank Julia Hockenmaier, Laura Kallmeyer, Aravind Joshi, and the anonymous reviewers for their valuable help. S. D. G.","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"bhaskar-2013-answering","url":"https:\/\/aclanthology.org\/R13-2003","title":"Answering Questions from Multiple Documents -- the Role of Multi-Document Summarization","abstract":"Ongoing research work on Question Answering using multi-document summarization is described. It has two main sub-modules: document retrieval and multi-document summarization. We first preprocess the documents and then index them using Nutch with an NE field. Stop words are removed, NEs are tagged in each question, all remaining question words are stemmed, and the 10 most relevant documents are then retrieved. Next, a document graph-based, query-focused multi-document summarizer is used, where the question words are used as the query. A document graph is constructed, where the nodes are sentences of the documents and edge scores reflect the correlation measure between the nodes. The system clusters similar texts from the graph using this edge score. Each cluster gets a weight and has a cluster center. Next, question-dependent weights are added to the corresponding cluster score. The top two ranked sentences of each cluster are identified in order, compressed, and then fused into a single sentence. The compressed and fused sentences are included in the output summary, with a limit of 500 words, which is presented as the answer. The system was tested on the data sets of the INEX QA track from 2011 to 2013 and achieved the best readability score.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We acknowledge the support of the DeitY, MCIT, Govt.
of India funded project \"Development of Cross Lingual Information Access (CLIA) System Phase II\".","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"lager-black-1994-bidirectional","url":"https:\/\/aclanthology.org\/W94-0327","title":"Bidirectional Incremental Generation and Analysis with Categorial Grammar and Indexed Quasi-Logical Form","abstract":"We describe an approach to surface generation designed for a \"pragmatics-based\" dialogue system. The implementation has been extended to deal with certain well-known difficulties with the underlying linguistic formalism (Categorial Grammar), at the same time yielding a system capable of supporting incremental generation as well as interpretation. Aspects of the formalism used for the initial description that constitutes the interface with the planning component are also discussed.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"de-santo-2021-minimalist","url":"https:\/\/aclanthology.org\/2021.scil-1.1","title":"A Minimalist Approach to Facilitatory Effects in Stacked Relative Clauses","abstract":"A top-down parser for Minimalist grammars (MGs; Stabler, 2013) can successfully predict a variety of off-line processing preferences, via metrics linking parsing behavior to memory load (Kobele et al., 2013; Gerth, 2015; Graf et al., 2017). The increasing empirical coverage of this model is intriguing, given its close association to modern minimalist syntax. Recently, however, Zhang (2017) has argued that this framework is unable to account for a set of complexity profiles reported for English and Mandarin Chinese stacked relative clauses. Based on these observations, this paper proposes extensions to this model implementing a notion of memory reactivation, in the form of memory metrics sensitive to repetitions of movement features. We then show how these metrics derive the correct predictions for the stacked RC processing contrasts.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I would like to thank Thomas Graf, Mark Aronoff, John Baylin, and Jon Sprouse for their feedback on different stages of this research. I am also grateful to the anonymous reviewer for their constructive comments and insights.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"basili-etal-2004-a2q","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/683.pdf","title":"A2Q: An Agent-based Architecture for Multilingual Q\\&A","abstract":"In this paper we describe the agent-based architecture and extensively report the design of the shallow processing model in it. We present the general model describing the data flow and the expected activities that have to be carried out. The notion of question session will be introduced as a means to control the communication among the different agents. We then present a shallow model mainly based on an IR engine and a passage re-ranking that uses the notion of an expanded query.
We report a pilot investigation of the performance of the method.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"lestrade-2006-marked","url":"https:\/\/aclanthology.org\/W06-2104","title":"Marked Adpositions","abstract":"This paper discusses the partitive-genitive case alternation of Finnish adpositions. This case alternation is explained in terms of bidirectional alignment of markedness in form and meaning. Marked PP meanings are assigned partitive case, unmarked ones genitive case.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"schwartz-etal-2017-effect","url":"https:\/\/aclanthology.org\/K17-1004","title":"The Effect of Different Writing Tasks on Linguistic Style: A Case Study of the ROC Story Cloze Task","abstract":"A writer's style depends not just on personal traits but also on her intent and mental state. In this paper, we show how variants of the same writing task can lead to measurable differences in writing style. We present a case study based on the story cloze task (Mostafazadeh et al., 2016a), where annotators were assigned similar writing tasks with different constraints: (1) writing an entire story, (2) adding a story ending for a given story context, and (3) adding an incoherent ending to a story. We show that a simple linear classifier informed by stylistic features is able to successfully distinguish among the three cases, without even looking at the story context. In addition, combining our stylistic features with language model predictions reaches state of the art performance on the story cloze challenge. Our results demonstrate that different task framings can dramatically affect the way people write.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors thank Chenhao Tan, Luke Zettlemoyer, Rik Koncel-Kedziorski, Rowan Zellers, Yangfeng Ji and several anonymous reviewers for helpful feedback. This research was supported in part by the DARPA CwC program through ARO (W911NF-15-1-0543), Samsung GRO, NSF IIS-1524371, and gifts from Google and Facebook.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"garain-etal-2020-junlp","url":"https:\/\/aclanthology.org\/2020.semeval-1.171","title":"JUNLP at SemEval-2020 Task 9: Sentiment Analysis of Hindi-English Code Mixed Data Using Grid Search Cross Validation","abstract":"Code-mixing is a phenomenon which arises mainly in multilingual societies. Multilingual people, who are well versed in both their native languages and English, tend to code-mix using English-based phonetic typing and the insertion of anglicisms in their main language. This linguistic phenomenon poses a great challenge to conventional NLP domains such as Sentiment Analysis, Machine Translation, and Text Summarization, to name a few. In this work, we focus on working out a plausible solution to the domain of Code-Mixed Sentiment Analysis.
This work was done as participation in the SemEval-2020 Sentimix Task, where we focused on the sentiment analysis of English-Hindi code-mixed sentences. Our username for the submission was \"sainik.mahata\" and our team name was \"JUNLP\". We used feature extraction algorithms in conjunction with traditional machine learning algorithms such as SVR, tuned using grid search, in an attempt to solve the task. Our approach garnered an F1-score of 66.2% when tested using metrics prepared by the organizers of the task.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"baldwin-chai-2012-autonomous","url":"https:\/\/aclanthology.org\/N12-1089","title":"Autonomous Self-Assessment of Autocorrections: Exploring Text Message Dialogues","abstract":"Text input aids such as automatic correction systems play an increasingly important role in facilitating fast text entry and efficient communication between text message users. Although these tools are beneficial when they work correctly, they can cause significant communication problems when they fail. To improve its autocorrection performance, it is important for the system to have the capability to assess its own performance and learn from its mistakes. To address this, this paper presents a novel task of self-assessment of autocorrection performance based on interactions between text message users. As part of this investigation, we collected a dataset of autocorrection mistakes from true text message users and experimented with a rich set of features in our self-assessment task. Our experimental results indicate that there are salient cues from the text message discourse that allow systems to assess their own behaviors with high precision.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by Award #0957039 from the National Science Foundation and Award #N00014-11-1-0410 from the Office of Naval Research. The authors would like to thank the reviewers for their valuable comments and suggestions.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"tate-voss-2006-combining","url":"https:\/\/aclanthology.org\/2006.amta-papers.27","title":"Combining Evaluation Metrics via Loss Functions","abstract":"When response metrics for evaluating the utility of machine translation (MT) output on a given task do not yield a single ranking of MT engines, how are MT users to decide which engine best supports their task? When the costs of different types of response errors vary, how are MT users to factor that information into their rankings? What impact do different costs have on response-based rankings? Starting with data from an extraction experiment detailed in Voss & Tate (2006), this paper describes three response-rate metrics developed to quantify different aspects of MT users' performance in identifying who\/when\/where-items in MT output, and then presents a loss function analysis over these rates to derive a single customizable metric, applying a range of values to correct responses and costs to different error types.
For the given experimental dataset, loss function analyses provided a clearer characterization of the engines' relative strength than did comparing the response rates to each other. For one MT engine, varying the costs had no impact: the engine consistently ranked best. By contrast, cost variations did impact the ranking of the other two engines: a rank reversal occurred on who-item extractions when incorrect responses were penalized more than non-responses. Future work with loss analysis, developing operational cost ratios of error rates to correct response rates, will require user studies and expert document-screening personnel to establish baseline values for effective MT engine support on wh-item extraction.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Several individuals contributed to the task-based evaluation research project, including Eric Slud (Dept. of Mathematics, U. of Maryland, College Park), Matthew Aguirre, John Hancock (Artis-Tech, Inc.), Jamal Laoudi, Sooyon Lee (ARTI), and Somiya Shukla, Joi Turner, and Michelle Vanni (ARL). This project was funded in part by the Center for Advanced Study of Language (CASL) at the University of Maryland.","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"alagic-snajder-2016-cro36wsd","url":"https:\/\/aclanthology.org\/L16-1267","title":"Cro36WSD: A Lexical Sample for Croatian Word Sense Disambiguation","abstract":"We introduce Cro36WSD, a freely-available medium-sized lexical sample for Croatian word sense disambiguation (WSD). Cro36WSD comprises 36 words: 12 adjectives, 12 nouns, and 12 verbs, balanced across both frequency bands and polysemy levels. We adopt the multi-label annotation scheme in the hope of lessening the drawbacks of discrete sense inventories and obtaining more realistic annotations from human experts. Sense-annotated data is collected through multiple annotation rounds to ensure high-quality annotations: with a 115 person-hour effort we reached an inter-annotator agreement score of 0.877. We analyze the obtained data and perform a correlation analysis between several relevant variables, including word frequency, number of senses, sense distribution skewness, average annotation time, and the observed inter-annotator agreement (IAA). Using the obtained data, we compile multi- and single-labeled dataset variants using different label aggregation schemes. Finally, we evaluate three different baseline WSD models on both dataset variants and report on the insights gained. We make both dataset variants freely available.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been fully supported by the Croatian Science Foundation under the project UIP-2014-09-7312.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"michael-etal-2018-crowdsourcing","url":"https:\/\/aclanthology.org\/N18-2089","title":"Crowdsourcing Question-Answer Meaning Representations","abstract":"We introduce Question-Answer Meaning Representations (QAMRs), which represent the predicate-argument structure of a sentence as a set of question-answer pairs.
We develop a crowdsourcing scheme to show that QAMRs can be labeled with very little training, and gather a dataset with over 5,000 sentences and 100,000 questions. A qualitative analysis demonstrates that the crowd-generated question-answer pairs cover the vast majority of predicate-argument relationships in existing datasets (including PropBank, NomBank, and QA-SRL) along with many previously under-resourced ones, including implicit arguments and relations. We also report baseline models for question generation and answering, and summarize a recent approach for using QAMR labels to improve an Open IE system. These results suggest the freely available QAMR data and annotation scheme (github.com\/uwnlp\/qamr) should support significant future work. Pierre Vinken, 61 years old, will join the board as a nonexecutive director Nov. 29. Who will join as nonexecutive director?-Pierre Vinken What is Pierre's last name?-Vinken Who is 61 years old?-Pierre Vinken How old is Pierre Vinken?-61 years old What will he join?-the board What will he join the board as?-nonexecutive director What type of director will Vinken be?-nonexecutive What day will Vinken join the board?-Nov. 29","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by grants from the MAGNET program of the Israeli Office of the Chief Scientist (OCS); the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600\/1-1); the Israel Science Foundation (grant No. 1157\/16); the US NSF (IIS-1252835, IIS-1562364); and an Allen Distinguished Investigator Award.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"wei-gulla-2011-enhancing","url":"https:\/\/aclanthology.org\/I11-1037","title":"Enhancing the HL-SOT Approach to Sentiment Analysis via a Localized Feature Selection Framework","abstract":"In this paper, we propose a Localized Feature Selection (LFS) framework tailored to the HL-SOT approach to sentiment analysis. Within the proposed LFS framework, each node classifier of the HL-SOT approach is able to perform classification on target texts in a locally customized index term space. Extensive empirical analysis against a human-labeled data set demonstrates that with the proposed LFS framework the classification performance of the HL-SOT approach is enhanced while computational efficiency is greatly improved. To find the best feature selection algorithm that caters to the proposed LFS framework, five classic feature selection algorithms are comparatively studied, which indicates that the TS, DF, and MI algorithms achieve generally better performances than the CHI and IG algorithms. Among the five studied algorithms, the TS algorithm is the best one to employ within the proposed LFS framework.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank the anonymous reviewers for the helpful comments on the manuscript.
This work is funded by the Research Council of Norway under the VERDIKT research programme (Project No.: 183337).","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"rama-coltekin-2018-tubingen","url":"https:\/\/aclanthology.org\/K18-3014","title":"T\\\"ubingen-Oslo system at SIGMORPHON shared task on morphological inflection. A multi-tasking multilingual sequence to sequence model.","abstract":"In this paper, we describe our three submissions to the inflection track of the SIGMORPHON shared task. We experimented with three models: namely, a sequence-to-sequence model (popularly known as seq2seq), a seq2seq model with data augmentation, and a multi-tasking seq2seq model that is multilingual in nature. Our results with the multilingual model are below the baseline in the case of both high and medium datasets.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors thank Ryan Cotterell and the rest of the organizers for the encouragement to participate in the shared task on short notice. The first author is supported by the BIGMED project (a NRC Lighthouse grant) which is gratefully acknowledged. Some of the experiments reported in this paper are run on a Titan Xp donated by the NVIDIA Corporation.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"forcada-2002-using","url":"https:\/\/aclanthology.org\/2002.tmi-tmiw.3","title":"Using multilingual content on the web to build fast finite-state direct translation systems","abstract":"In this paper I try to identify and describe in certain detail a possible avenue of research in machine translation: the use of existing multilingual content on the web and finite-state technology to automatically build and maintain fast web-based direct machine translation systems, especially for language pairs lacking machine translation resources. The term direct is used to refer to systems performing no linguistic analysis, working similarly to pretranslators based on translation memories. Considering the current state of the art of (a) web mining for bitexts, (b) bitext alignment techniques, and (c) finite-state theory and implementation, I discuss their integration toward the stated goal and sketch some of the remaining challenges. The objective on the horizon is a web-based translation service exploiting the multilingual content already present on the web.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Acknowledgements: Partial support from the Spanish Comisi\u00f3n Interministerial de Ciencia y Tecnologia through project TIC2000-1599-C02-02 is acknowledged. Thanks go to Juan Antonio P\u00e9rez-Ortiz for useful discussions.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"skurniak-etal-2018-multi","url":"https:\/\/aclanthology.org\/W18-0917","title":"Multi-Module Recurrent Neural Networks with Transfer Learning","abstract":"This paper describes multiple solutions designed and tested for the problem of word-level metaphor detection. The proposed systems are all based on variants of recurrent neural network architectures.
Specifically, we explore multiple sources of information: pretrained word embeddings (GloVe), a dictionary of language concreteness, and a transfer learning scenario based on the states of an encoder network from a neural machine translation system. One of the architectures is based on combining all three systems: (1) a Neural CRF (Conditional Random Fields), trained directly on the metaphor data set; (2) a Neural Machine Translation encoder in a transfer learning scenario; (3) a neural network used to predict final labels, trained directly on the metaphor data set. Our results vary between test sets: the standalone Neural CRF is the best one on the submission data, while the combined system scores the highest on a test subset randomly selected from the training data.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"cuadros-etal-2010-integrating","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/703_Paper.pdf","title":"Integrating a Large Domain Ontology of Species into WordNet","abstract":"With the proliferation of applications sharing information represented in multiple ontologies, the development of automatic methods for robust and accurate ontology matching will be crucial to their success. Connecting and merging already existing semantic networks is perhaps one of the most challenging tasks related to knowledge engineering. This paper presents a new approach for aligning automatically a very large domain ontology of Species to WordNet in the framework of the KYOTO project. The approach relies on the use of a knowledge-based Word Sense Disambiguation algorithm which accurately assigns WordNet synsets to the concepts represented in Species 2000.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been supported by KNOW-2 (TIN2009-14715-C04-01 and TIN2009-14715-C04-04) and KYOTO (ICT-2007-211423). We want to thank the anonymous reviewers for their valuable comments.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"grissom-ii-etal-2014-dont","url":"https:\/\/aclanthology.org\/D14-1140","title":"Don't Until the Final Verb Wait: Reinforcement Learning for Simultaneous Machine Translation","abstract":"We introduce a reinforcement learning-based approach to simultaneous machine translation (producing a translation while receiving input words) between languages with drastically different word orders: from verb-final languages (e.g., German) to verb-medial languages (English). In traditional machine translation, a translator must \"wait\" for source material to appear before translation begins. We remove this bottleneck by predicting the final verb in advance. We use reinforcement learning to learn when to trust predictions about unseen, future portions of the sentence. We also introduce an evaluation metric to measure expeditiousness and quality. We show that our new translation model outperforms batch and monotone translation strategies.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers, as well as Yusuke Miyao, Naho Orita, Doug Oard, and Sudha Rao for their insightful comments.
This work was supported by NSF Grant IIS-1320538. Boyd-Graber is also partially supported by NSF Grant CCF-1018625. Daum\u00e9 III and He are also partially supported by NSF Grant IIS-0964681. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"song-etal-2019-leveraging","url":"https:\/\/aclanthology.org\/D19-1020","title":"Leveraging Dependency Forest for Neural Medical Relation Extraction","abstract":"Medical relation extraction discovers relations between entity mentions in text, such as research articles. For this task, dependency syntax has been recognized as a crucial source of features. Yet in the medical domain, 1-best parse trees suffer from relatively low accuracies, diminishing their usefulness. We investigate a method to alleviate this problem by utilizing dependency forests. Forests contain many possible decisions and therefore have higher recall but more noise compared with 1-best outputs. A graph neural network is used to represent the forests, automatically distinguishing the useful syntactic information from parsing noise. Results on two biomedical benchmarks show that our method outperforms the standard tree-based methods, giving the state-of-the-art results in the literature.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"Research supported by NSF award IIS-1813823.","year":2019,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"mukherjee-kubler-2017-similarity","url":"https:\/\/doi.org\/10.26615\/978-954-452-049-6_068","title":"Similarity Based Genre Identification for POS Tagging Experts \\& Dependency Parsing","abstract":null,"label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"ansell-etal-2021-mad-g","url":"https:\/\/aclanthology.org\/2021.findings-emnlp.410","title":"MAD-G: Multilingual Adapter Generation for Efficient Cross-Lingual Transfer","abstract":"Adapter modules have emerged as a general parameter-efficient means to specialize a pretrained encoder to new domains. Massively multilingual transformers (MMTs) have particularly benefited from additional training of language-specific adapters. However, this approach is not viable for the vast majority of languages, due to limitations in their corpus size or compute budgets. In this work, we propose MAD-G (Multilingual ADapter Generation), which contextually generates language adapters from language representations based on typological features. In contrast to prior work, our time- and space-efficient MAD-G approach enables (1) sharing of linguistic knowledge across languages and (2) zero-shot inference by generating language adapters for unseen languages. We thoroughly evaluate MAD-G in zero-shot crosslingual transfer on part-of-speech tagging, dependency parsing, and named entity recognition.
While offering (1) improved fine-tuning efficiency (by a factor of around 50 in our experiments), (2) a smaller parameter budget, and (3) increased language coverage, MAD-G remains competitive with more expensive methods for language-specific adapter training across the board. Moreover, it offers substantial benefits for low-resource languages, particularly on the NER task in low-resource African languages. Finally, we demonstrate that MAD-G's transfer performance can be further improved via: (i) multi-source training, i.e., by generating and combining adapters of multiple languages with available task-specific training data; and (ii) by further fine-tuning generated MAD-G adapters for languages with monolingual data.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Alan wishes to thank David and Claudia Harding for their generous support via the Harding Distinguished Postgraduate Scholarship Programme. Jonas is supported by the LOEWE initiative (Hesse, Germany) within the emergenCITY center. Goran is supported by the KI-Innovation grant Multi2ConvAI of Baden-W\u00fcrttemberg's Ministry of Economics, Labor and Tourism. Anna and Ivan are supported by the ERC Grant LEXICAL (no. 648909) and the ERC PoC Grant MultiConvAI (no. 957356).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"moreno-etal-2004-collection","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/325.pdf","title":"Collection of SLR in the Asian-Pacific Area","abstract":"The goal of this project (LILA) is the collection of a large number of spoken databases for training Automatic Speech Recognition Systems for telephone applications in the Asian Pacific area. Specifications follow those of SpeechDat-like databases. Utterances will be recorded directly from calls made from either fixed or cellular telephones and are composed of read text and answers to specific questions. The project is driven by a consortium composed of a large number of industrial companies. Each company is in charge of the production of two databases. The consortium shares the databases produced in the project. The goal of the project should be reached within the year 2005.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"gao-suzuki-2003-unsupervised","url":"https:\/\/aclanthology.org\/P03-1066","title":"Unsupervised Learning of Dependency Structure for Language Modeling","abstract":"This paper presents a dependency language model (DLM) that captures linguistic constraints via a dependency structure, i.e., a set of probabilistic dependencies that express the relations between headwords of each phrase in a sentence by an acyclic, planar, undirected graph. Our contributions are threefold. First, we incorporate the dependency structure into an n-gram language model to capture long-distance word dependency. Second, we present an unsupervised learning method that discovers the dependency structure of a sentence using a bootstrapping procedure. Finally, we evaluate the proposed models on a realistic application (Japanese Kana-Kanji conversion).
Experiments show that the best DLM achieves an 11.3% error rate reduction over the word trigram model.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"bach-etal-2022-promptsource","url":"https:\/\/aclanthology.org\/2022.acl-demo.9","title":"PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts","abstract":"PromptSource is a system for creating, sharing, and using natural language prompts. Prompts are functions that map an example from a dataset to a natural language input and target output. Using prompts to train and query language models is an emerging area in NLP that requires new tools that let users develop and refine these prompts collaboratively. PromptSource addresses the emergent challenges in this new setting with (1) a templating language for defining data-linked prompts, (2) an interface that lets users quickly iterate on prompt development by observing outputs of their prompts on many examples, and (3) a community-driven set of guidelines for contributing new prompts to a common pool. Over 2,000 prompts for roughly 170 datasets are already available in PromptSource.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was conducted under the BigScience project for open research, a year-long initiative targeting the study of large models and datasets. The goal of the project is to research language models in a public environment outside large technology companies. The project has over 950 researchers from over 65 countries and more than 250 institutions. The BigScience project was initiated by Thomas Wolf at Hugging Face, and this collaboration would not have been possible without his effort. This research was the focus of the BigScience Prompt Engineering working group, which focused on the role of prompting in large language model training. Disclosure: Stephen Bach contributed to this work as an advisor to Snorkel AI.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"toh-wang-2014-dlirec","url":"https:\/\/aclanthology.org\/S14-2038","title":"DLIREC: Aspect Term Extraction and Term Polarity Classification System","abstract":"This paper describes our system used in the Aspect Based Sentiment Analysis Task 4 at SemEval-2014. Our system consists of two components to address two of the subtasks respectively: a Conditional Random Field (CRF) based classifier for Aspect Term Extraction (ATE) and a linear classifier for Aspect Term Polarity Classification (ATP). For the ATE subtask, we implement a variety of lexicon, syntactic and semantic features, as well as cluster features induced from unlabeled data. Our system achieves state-of-the-art performances in ATE, ranking 1st (among 28 submissions) and 2nd (among 27 submissions) for the restaurant and laptop domains respectively.
","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research work is supported by a research project under the Baidu-I2R Research Centre.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"pustejovsky-etal-2019-modeling","url":"https:\/\/aclanthology.org\/W19-3303","title":"Modeling Quantification and Scope in Abstract Meaning Representations","abstract":"In this paper, we propose an extension to Abstract Meaning Representations (AMRs) to encode scope information of quantifiers and negation, in a way that overcomes the semantic gaps of the schema while maintaining its cognitive simplicity. Specifically, we address three phenomena not previously part of the AMR specification: quantification, negation (generally), and modality. The resulting representation, which we call \"Uniform Meaning Representation\" (UMR), adopts the predicative core of AMR and embeds it under a \"scope\" graph when appropriate. UMR representations differ from other treatments of quantification and modal scope phenomena in two ways: (a) they are more transparent; and (b) they specify default scope when possible.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their helpful comments. This work is supported by the IIS Division of the National Science Foundation via Award No. 1763926 entitled \"Building a Uniform Meaning Representation for Natural Language Processing\". All views expressed in this paper are those of the authors and do not necessarily represent the view of the National Science Foundation.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"santus-etal-2014-chasing","url":"https:\/\/aclanthology.org\/E14-4008","title":"Chasing Hypernyms in Vector Spaces with Entropy","abstract":"In this paper, we introduce SLQS, a new entropy-based measure for the unsupervised identification of hypernymy and its directionality in Distributional Semantic Models (DSMs). SLQS is assessed through two tasks: (i.) identifying the hypernym in hyponym-hypernym pairs, and (ii.) discriminating hypernymy among various semantic relations. In both tasks, SLQS outperforms other state-of-the-art measures.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"schlangen-2021-targeting","url":"https:\/\/aclanthology.org\/2021.acl-short.85","title":"Targeting the Benchmark: On Methodology in Current Natural Language Processing Research","abstract":"It has become a common pattern in our field: One group introduces a language task, exemplified by a dataset, which they argue is challenging enough to serve as a benchmark. They also provide a baseline model for it, which is then soon improved upon by other groups. Often, research efforts then move on, and the pattern repeats itself. What is typically left implicit is the argumentation for why this constitutes progress, and progress towards what.
In this paper, I try to step back for a moment from this pattern and work out possible argumentations and their parts.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"frermann-etal-2014-hierarchical","url":"https:\/\/aclanthology.org\/E14-1006","title":"A Hierarchical Bayesian Model for Unsupervised Induction of Script Knowledge","abstract":"Scripts representing common sense knowledge about stereotyped sequences of events have been shown to be a valuable resource for NLP applications. We present a hierarchical Bayesian model for unsupervised learning of script knowledge from crowdsourced descriptions of human activities. Events and constraints on event ordering are induced jointly in one unified framework. We use a statistical model over permutations which captures event ordering constraints in a more flexible way than previous approaches. In order to alleviate the sparsity problem caused by using relatively small datasets, we incorporate in our hierarchical model an informed prior on word distributions. The resulting model substantially outperforms a state-of-the-art method on the event ordering task.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Michaela Regneri for substantial support with the script data, and Mirella Lapata for helpful comments.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"clinchant-perronnin-2013-aggregating","url":"https:\/\/aclanthology.org\/W13-3212","title":"Aggregating Continuous Word Embeddings for Information Retrieval","abstract":"While words in documents are generally treated as discrete entities, they can be embedded in a Euclidean space which reflects an a priori notion of similarity between them. In such a case, a text document can be viewed as a bag-of-embedded-words (BoEW): a set of real-valued vectors. We propose a novel document representation based on such continuous word embeddings. It consists in non-linearly mapping the word-embeddings in a higher-dimensional space and in aggregating them into a document-level representation. We report retrieval and clustering experiments in the case where the word-embeddings are computed from standard topic models showing significant improvements with respect to the original topic models.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"fang-etal-2005-web","url":"https:\/\/aclanthology.org\/I05-1087","title":"Web-Based Terminology Translation Mining","abstract":"Mining terminology translation from a large amount of Web data can be applied in many fields such as reading\/writing assistant, machine translation and cross-language information retrieval. How to find more comprehensive results from the Web and obtain the boundary of candidate translations, and how to remove irrelevant noise and rank the remaining candidates are the challenging issues.
In this paper, after reviewing and analyzing all possible methods of acquiring translations, a feasible statistics-based method is proposed to mine terminology translation from the Web. In the proposed method, on the basis of an analysis of different forms of term translation distributions, character-based string frequency estimation is presented to construct term translation candidates for exploring more translations and their boundaries, and then sort-based subset deletion and mutual information methods are respectively proposed to deal with subset redundancy information and prefix\/suffix redundancy information formed in the process of estimation. Extensive experiments on two test sets of 401 and 3511 English terms validate that our system has better performance.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"gildea-etal-2018-acl","url":"https:\/\/aclanthology.org\/W18-2504","title":"The ACL Anthology: Current State and Future Directions","abstract":"The Association for Computational Linguistics' Anthology is the open source archive, and the main source for computational linguistics and natural language processing's scientific literature. The ACL Anthology is currently maintained exclusively by community volunteers and has to be available and up-to-date at all times. We first discuss the current, open source approach used to achieve this, and then discuss how the planned use of Docker images will improve the Anthology's long-term stability. This change will make it easier for researchers to utilize Anthology data for experimentation. We believe the ACL community can directly benefit from the extension-friendly architecture of the Anthology. We end by issuing an open challenge of reviewer matching that we encourage the community to rally towards.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"shavrina-etal-2020-humans","url":"https:\/\/aclanthology.org\/2020.lrec-1.277","title":"Humans Keep It One Hundred: an Overview of AI Journey","abstract":"Artificial General Intelligence (AGI) is showing growing performance in numerous applications, beating human performance in Chess and Go, using knowledge bases and text sources to answer questions and even pass school student examination. In this paper, we describe the results of AI Journey, a competition of AI-systems aimed to improve AI performance on linguistic knowledge evaluation, reasoning and text generation. Competing systems have passed Unified State Exam (USE, in Russian), including versatile grammar tasks (test and open questions) and an essay: a combined solution consisting of the best performing models has achieved a high score of 69%, with 68% being an average human result. During the competition, a baseline for the task and essay parts was proposed, and 98 systems were submitted, showing different approaches to task solving and reasoning.
All the data and solutions can be found on GitHub.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"amidei-etal-2018-rethinking","url":"https:\/\/aclanthology.org\/C18-1281","title":"Rethinking the Agreement in Human Evaluation Tasks","abstract":"Human evaluations are broadly thought to be more valuable the higher the inter-annotator agreement. In this paper we examine this idea. We will describe our experiments and analysis within the area of Automatic Question Generation. Our experiments show how annotators diverge in language annotation tasks due to a range of ineliminable factors. For this reason, we believe that annotation schemes for natural language generation tasks that are aimed at evaluating language quality need to be treated with great care. In particular, an unchecked focus on reduction of disagreement among annotators runs the danger of creating generation goals that reward output that is more distant from, rather than closer to, natural human-like language. We conclude the paper by suggesting a new approach to the use of the agreement metrics in natural language generation evaluation tasks.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We warmly thank Erika Renedo Illarregi, Luisa Ruge, German Ruiz Marcos, Suraj Pandey, Simon Cutajar, Neil Smith and Robin Laney for taking part in the experiments and sharing with us opinions and feedback. We would also like to thank Karen Mazidi for giving us login access to her online Question Generator. We finally thank the anonymous reviewers for their helpful suggestions.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"guha-etal-2015-removing","url":"https:\/\/aclanthology.org\/N15-1117","title":"Removing the Training Wheels: A Coreference Dataset that Entertains Humans and Challenges Computers","abstract":"Coreference is a core NLP problem. However, newswire data, the primary source of existing coreference data, lack the richness necessary to truly solve coreference. We present a new domain with denser references (quiz bowl questions) that is challenging and enjoyable to humans, and we use the quiz bowl community to develop a new coreference dataset, together with an annotation framework that can tag any text data with coreferences and named entities. We also successfully integrate active learning into this annotation pipeline to collect documents maximally useful to coreference models. State-of-the-art coreference systems underperform a simple classifier on our new dataset, motivating non-newswire data for future coreference research.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their insightful comments. We also thank Dr. Hal Daum\u00e9 III and the members of the \"feetthinking\" research group for their advice and assistance. We also thank Dr. Yuening Hu and Mossaab Bagdouri for their help in reviewing the draft of this paper. This work was supported by NSF Grant IIS-1320538. Boyd-Graber is also supported by NSF Grants CCF-1018625 and NCSE-1422492.
Any opinions, findings, results, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"cardon-grabar-2020-reducing","url":"https:\/\/aclanthology.org\/2020.bucc-1.7","title":"Reducing the Search Space for Parallel Sentences in Comparable Corpora","abstract":"This paper describes and evaluates three methods for reducing the search space for parallel sentences in monolingual comparable corpora. Basically, when searching for parallel sentences between two comparable documents, all the possible sentence pairs between the documents have to be considered, which introduces a great degree of imbalance between parallel pairs and non-parallel pairs. This is a problem because, even with a highly performing algorithm, a lot of noise will be present in the extracted results, thus introducing a need for an extensive and costly manual check phase. We propose to study how we can drastically reduce the number of sentence pairs that have to be fed to a classifier so that the results can be manually handled. We work on a manually annotated subset obtained from a French comparable corpus.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the reviewers for their comments. This work was funded by the French National Agency for Research (ANR) as part of the CLEAR project (Communication, Literacy, Education, Accessibility, Readability), ANR-17-CE19-0016-01.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"armengol-estape-etal-2021-multilingual","url":"https:\/\/aclanthology.org\/2021.findings-acl.437","title":"Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan","abstract":"Multilingual language models have been a crucial breakthrough as they considerably reduce the need of data for under-resourced languages. Nevertheless, the superiority of language-specific models has already been proven for languages having access to large amounts of data. In this work, we focus on Catalan with the aim to explore to what extent a medium-sized monolingual language model is competitive with state-of-the-art large multilingual models. For this, we: (1) build a clean, high-quality textual Catalan corpus (CaText), the largest to date (but only a fraction of the usual size of the previous work in monolingual language models), (2) train a Transformer-based language model for Catalan (BERTa), and (3) devise a thorough evaluation in a diversity of settings, comprising a complete array of downstream tasks, namely, Part of Speech Tagging, Named Entity Recognition and Classification, Text Classification, Question Answering, and Semantic Textual Similarity, with most of the corresponding datasets being created ex novo. The result is a new benchmark, the Catalan Language Understanding Benchmark (CLUB), which we publish as an open resource, together with the clean textual corpus, the language model, and the cleaning pipeline.
Using state-of-the-art multilingual models and a monolingual model trained only on Wikipedia as baselines, we consistently observe the superiority of our model across tasks and settings.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially funded by the Generalitat de Catalunya through the project PDAD14\/20\/00001, the State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan TL, the MT4All CEF project, and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). We thank all the reviewers for their valuable comments.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"hu-etal-2009-contrasting","url":"https:\/\/aclanthology.org\/W09-3953","title":"Contrasting the Interaction Structure of an Email and a Telephone Corpus: A Machine Learning Approach to Annotation of Dialogue Function Units","abstract":"We present a dialogue annotation scheme for both spoken and written interaction, and use it in a telephone transaction corpus and an email corpus. We train classifiers, comparing regular SVM and structured SVM against a heuristic baseline. We provide a novel application of structured SVM to predicting relations between instance pairs.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"xing-etal-2020-improving","url":"https:\/\/aclanthology.org\/2020.aacl-main.63","title":"Improving Context Modeling in Neural Topic Segmentation","abstract":"Topic segmentation is critical in key NLP tasks and recent works favor highly effective neural supervised approaches. However, current neural solutions are arguably limited in how they model context. In this paper, we enhance a segmenter based on a hierarchical attention BiLSTM network to better model context, by adding a coherence-related auxiliary task and restricted self-attention. Our optimized segmenter outperforms SOTA approaches when trained and tested on three datasets. We also demonstrate the robustness of our proposed model in a domain transfer setting by training a model on a large-scale dataset and testing it on four challenging real-world benchmarks. Furthermore, we apply our proposed strategy to two other languages (German and Chinese), and show its effectiveness in multilingual scenarios.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers and the UBC-NLP group for their insightful comments.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"stenchikova-etal-2007-ravencalendar","url":"https:\/\/aclanthology.org\/N07-4008","title":"RavenCalendar: A Multimodal Dialog System for Managing a Personal Calendar","abstract":"Dialog applications for managing calendars have been developed for every generation of dialog systems research (Heidorn, 1978; Yankelovich, 1994; Constantinides and others, 1998; Horvitz and Paek, 2000; Vo and Wood, 1996; Huang and others, 2001). Today, Web-based calendar applications are widely used.
A spoken dialog interface to a Web-based calendar application permits convenient use of the system on a handheld device or over the telephone.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"hiraoka-etal-2019-stochastic","url":"https:\/\/aclanthology.org\/P19-1158","title":"Stochastic Tokenization with a Language Model for Neural Text Classification","abstract":"For unsegmented languages such as Japanese and Chinese, tokenization of a sentence has a significant impact on the performance of text classification. Sentences are usually segmented with words or subwords by a morphological analyzer or byte pair encoding and then encoded with word (or subword) representations for neural networks. However, segmentation is potentially ambiguous, and it is unclear whether the segmented tokens achieve the best performance for the target task. In this paper, we propose a method to simultaneously learn tokenization and text classification to address these problems. Our model incorporates a language model for unsupervised tokenization into a text classifier and then trains both models simultaneously. To make the model robust against infrequent tokens, we sampled segmentation for each sentence stochastically during training, which resulted in improved performance of text classification. We conducted experiments on sentiment analysis as a text classification task and show that our method achieves better performance than previous methods.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to the members of the Computational Linguistics Laboratory, NAIST and the anonymous reviewers for their insightful comments.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"wolf-sonkin-etal-2019-latin","url":"https:\/\/aclanthology.org\/W19-3114","title":"Latin script keyboards for South Asian languages with finite-state normalization","abstract":"The use of the Latin script for text entry of South Asian languages is common, even though there is no standard orthography for these languages in the script. We explore several compact finite-state architectures that permit variable spellings of words during mobile text entry. We find that approaches making use of transliteration transducers provide large accuracy improvements over baselines, but that simpler approaches involving a compact representation of many attested alternatives yield much of the accuracy gain.
This is particularly important when operating under constraints on model size (e.g., on inexpensive mobile devices with limited storage and memory for keyboard models), and on speed of inference, since people typing on mobile keyboards expect no perceptual delay in keyboard responsiveness.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"mcdonald-1998-target","url":"https:\/\/aclanthology.org\/C98-2243","title":"Target Word Selection as Proximity in Semantic Space","abstract":"Lexical selection is a significant problem for wide-coverage machine translation: depending on the context, a given source language word can often be translated into different target language words. In this paper I propose a method for target word selection that assumes the appropriate translation is more similar to the translated context than are the alternatives. Similarity of a word to a context is estimated using a proximity measure in corpus-derived \"semantic space\". The method is evaluated using an English-Spanish parallel corpus of colloquial dialogue.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by awards from NSERC Canada and the ORS scheme, and in part by ESRC grant #R000237419. Thanks to Chris Brew and Mirella Lapata for valuable comments.","year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"amigo-etal-2005-qarla","url":"https:\/\/aclanthology.org\/P05-1035","title":"QARLA: A Framework for the Evaluation of Text Summarization Systems","abstract":"This paper presents a probabilistic framework, QARLA, for the evaluation of text summarisation systems. The input of the framework is a set of manual (reference) summaries, a set of baseline (automatic) summaries and a set of similarity metrics between summaries. It provides i) a measure to evaluate the quality of any set of similarity metrics, ii) a measure to evaluate the quality of a summary using an optimal set of similarity metrics, and iii) a measure to evaluate whether the set of baseline summaries is reliable or may produce biased results. Compared to previous approaches, our framework is able to combine different metrics and evaluate the quality of a set of metrics without any a priori weighting of their relative importance. We provide quantitative evidence about the effectiveness of the approach to improve the automatic evaluation of text summarisation systems by combining several similarity metrics.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are indebted to Ed Hovy, Donna Harman, Paul Over, Hoa Dang and Chin-Yew Lin for their inspiring and generous feedback at different stages in the development of QARLA. We are also indebted to NIST for hosting Enrique Amig\u00f3 as a visitor and for providing the DUC test beds.
This work has been partially supported by the Spanish government, project R2D2 (TIC-2003-7180).","year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"yamabana-etal-2000-lexicalized","url":"https:\/\/aclanthology.org\/C00-2134","title":"Lexicalized Tree Automata-based Grammars for Translating Conversational Texts","abstract":null,"label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"saetre-etal-2008-connecting","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/442_paper.pdf","title":"Connecting Text Mining and Pathways using the PathText Resource","abstract":"Many systems have been developed in the past few years to assist researchers in the discovery of knowledge published as English text, for example in the PubMed database. At the same time, higher level collective knowledge is often published using a graphical notation representing all the entities in a pathway and their interactions. We believe that these pathway visualizations could serve as an effective user interface for knowledge discovery if they can be linked to the text in publications. Since the graphical elements in a Pathway are of a very different nature than their corresponding descriptions in English text, we developed a prototype system called PathText. The goal of PathText is to serve as a bridge between these two different representations. In this paper, we first describe the overall architecture and the interfaces of the PathText system, and then provide some details about the core Text Mining components.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially supported by \"Grant-in-Aid for Specially Promoted Research\" and the \"Genome Network Project\", both from the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan. This work was also sponsored by Okinawa Institute of Science and Technology (OIST), Systems Biology Institute (SBI) and Sony Computer Science Laboratories, Inc.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"van-den-bosch-etal-2006-transferring","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2006\/pdf\/167_pdf.pdf","title":"Transferring PoS-tagging and lemmatization tools from spoken to written Dutch corpus development","abstract":"We describe a case study in the reuse and transfer of tools in language resource development, from a corpus of spoken Dutch to a corpus of written Dutch.
Once tools for a particular language have been developed, it is logical, but not trivial, to reuse them for types or registers of the language other than those the tools were originally designed for. This paper reviews the decisions and adaptations necessary to make this particular transfer from spoken to written language, focusing on a part-of-speech tagger and a lemmatizer. While the lemmatizer can be transferred fairly straightforwardly, the tagger needs to be adapted considerably. We show how it can be adapted without starting from scratch. We describe how the part-of-speech tagset was adapted and how the tagger was retrained to deal with written-text phenomena it had not been trained on earlier.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is funded by STEVIN, a Dutch Language Union (Taalunie) programme, as part of the D-Coi (Dutch","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"matteson-etal-2018-rich","url":"https:\/\/aclanthology.org\/C18-1210","title":"Rich Character-Level Information for Korean Morphological Analysis and Part-of-Speech Tagging","abstract":"Due to the fact that Korean is a highly agglutinative, character-rich language, previous work on Korean morphological analysis typically employs the use of sub-character features known as graphemes or otherwise utilizes comprehensive prior linguistic knowledge (i.e., a dictionary of known morphological transformation forms, or actions). These models have been created with the assumption that character-level, dictionary-less morphological analysis was intractable due to the number of actions required. We present, in this study, a multi-stage action-based model that can perform morphological transformation and part-of-speech tagging using arbitrary units of input and apply it to the case of character-level Korean morphological analysis. Among models that do not employ prior linguistic knowledge, we achieve state-of-the-art word and sentence-level tagging accuracy with the Sejong Korean corpus using our proposed data-driven Bi-LSTM model.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by the MSIT (Ministry of Science and ICT), South Korea, under the ITRC (Information Technology Research Center) support program (\"Research and Development of Human-Inspired Multiple Intelligence\") supervised by the IITP (Institute for Information & Communications Technology Promotion). Additionally, this work was supported by the National Research Foundation of Korea (NRF) grant funded by the South Korean government (MSIP) (No. NRF-2016R1A2B2015912).","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"rothe-etal-2020-leveraging","url":"https:\/\/aclanthology.org\/2020.tacl-1.18","title":"Leveraging Pre-trained Checkpoints for Sequence Generation Tasks","abstract":"Unsupervised pre-training of large neural models has recently revolutionized Natural Language Processing. By warm-starting from the publicly released checkpoints, NLP practitioners have pushed the state-of-the-art on multiple benchmarks while saving significant amounts of compute time. So far the focus has been mainly on the Natural Language Understanding tasks.
In this paper, we demonstrate the efficacy of pre-trained checkpoints for Sequence Generation. We developed a Transformer-based sequence-to-sequence model that is compatible with publicly available pre-trained BERT, GPT-2, and RoBERTa checkpoints and conducted an extensive empirical study on the utility of initializing our model, both encoder and decoder, with these checkpoints. Our models result in new state-of-the-art results on Machine","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the reviewers and the action editor for their feedback. We would like to thank Ryan McDonald, Joshua Maynez, and Bernd Bohnet for useful discussions.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"sundheim-1996-overview","url":"https:\/\/aclanthology.org\/X96-1048","title":"Overview of Results of the MUC-6 Evaluation","abstract":"The latest in a series of natural language processing system evaluations was concluded in October 1995 and was the topic of the Sixth Message Understanding Conference (MUC-6) in November. Participants were invited to enter their systems in as many as four different task-oriented evaluations. The Named Entity and Coreference tasks entailed Standard Generalized Markup Language (SGML) annotation of texts and were being conducted for the first time. The other two tasks, Template Element and Scenario Template, were information extraction tasks that followed on from the MUC evaluations conducted in previous years. The evolution and design of the MUC-6 evaluation are discussed in the paper by Grishman and Sundheim in this volume.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The definition and implementation of the evaluations reported on at the Message Understanding Conference was once again a \"community\" effort, requiring active involvement on the part of the evaluation participants as well as","year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"mendes-etal-2012-evaluating","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/545_Paper.pdf","title":"Evaluating the Impact of Phrase Recognition on Concept Tagging","abstract":"We have developed DBpedia Spotlight, a flexible concept tagging system that is able to tag (i.e., annotate) entities, topics and other terms in natural language text. The system starts by recognizing phrases to annotate in the input text, and subsequently disambiguates them to a reference knowledge base extracted from Wikipedia. In this paper we evaluate the impact of the phrase recognition step on the ability of the system to correctly reproduce the annotations of a gold standard in an unsupervised setting. We argue that a combination of techniques is needed, and we evaluate a number of alternatives according to an existing evaluation set.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Milo\u0161 Stanojevi\u0107 for the discussions that led to the idea of applying Bloom filters in the NP L* implementation. This work was partially funded by the European Commission through the FP7 grant LOD2 -Creating Knowledge out of Interlinked Data (Grant No.
257943).","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"papadopoulou-2013-gf","url":"https:\/\/aclanthology.org\/R13-2019","title":"GF Modern Greek Resource Grammar","abstract":"The paper describes the Modern Greek (MG) Grammar, implemented in Grammatical Framework (GF) as part of the Grammatical Framework Resource Grammar Library (RGL). GF is a special-purpose language for multilingual grammar applications. The RGL is a reusable library for dealing with the morphology and syntax of a growing number of natural languages. It is based on the use of an abstract syntax, which is common for all languages, and different concrete syntaxes implemented in GF. Both GF itself and the RGL are open-source. RGL currently covers more than 30 languages. MG is the 35th language that is available in the RGL. For the purpose of the implementation, a morphology-driven approach was used, meaning a bottom-up method, starting from the formation of words before moving to larger units (sentences). We discuss briefly the main characteristics and grammatical features of MG, and present some of the major difficulties we encountered during the process of implementation and how these are handled in the MG grammar.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"vylomova-etal-2016-take","url":"https:\/\/aclanthology.org\/P16-1158","title":"Take and Took, Gaggle and Goose, Book and Read: Evaluating the Utility of Vector Differences for Lexical Relation Learning","abstract":"Recent work has shown that simple vector subtraction over word embeddings is surprisingly effective at capturing different lexical relations, despite lacking explicit supervision. Prior work has evaluated this intriguing result using a word analogy prediction formulation and hand-selected relations, but the generality of the finding over a broader range of lexical relation types and different learning settings has not been evaluated. In this paper, we carry out such an evaluation in two learning settings: (1) spectral clustering to induce word relations, and (2) supervised learning to classify vector differences into relation types. We find that word embeddings capture a surprising amount of information, and that, under suitable supervised training, vector subtraction generalises well to a broad range of relations, including over unseen lexical items.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"LR was supported by EPSRC grant EP\/I037512\/1 and ERC Starting Grant DisCoTex (306920). TC and TB were supported by the Australian Research Council.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"nagasawa-etal-2021-validity","url":"https:\/\/aclanthology.org\/2021.maiworkshop-1.6","title":"Validity-Based Sampling and Smoothing Methods for Multiple Reference Image Captioning","abstract":"In image captioning, multiple captions are often provided as ground truths, since a valid caption is not always uniquely determined.
Conventional methods randomly select a single caption and treat it as correct, but there have been few effective training methods that utilize multiple given captions. In this paper, we propose two training techniques for making effective use of multiple reference captions: 1) validity-based caption sampling (VBCS), which prioritizes the use of captions that are estimated to be highly valid during training, and 2) weighted caption smoothing (WCS), which applies smoothing only to the relevant words in the reference caption to reflect multiple reference captions simultaneously. Experiments show that our proposed methods improve CIDEr by 2.6 points and BLEU4 by 0.9 points from baseline on the MSCOCO dataset.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"roberson-2019-automatic","url":"https:\/\/aclanthology.org\/W19-3623","title":"Automatic Product Categorization for Official Statistics","abstract":null,"label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"hafner-1985-semantics","url":"https:\/\/aclanthology.org\/P85-1001","title":"Semantics of Temporal Queries and Temporal Data","abstract":"This paper analyzes the requirements for adding a temporal reasoning component to a natural language database query system, and proposes a computational model that satisfies those requirements. A preliminary implementation in Prolog is used to generate examples of the model's capabilities.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1985,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"ljubesic-etal-2015-predicting","url":"https:\/\/aclanthology.org\/R15-1049","title":"Predicting the Level of Text Standardness in User-generated Content","abstract":"Non-standard language as it appears in user-generated content has recently attracted much attention. This paper proposes that non-standardness comes in two basic varieties, technical and linguistic, and develops a machine-learning method to discriminate between standard and nonstandard texts in these two dimensions. We describe the manual annotation of a dataset of Slovene user-generated content and the features used to build our regression models. We evaluate and discuss the results, where the mean absolute error of the best performing method on a three-point scale is 0.38 for technical and 0.42 for linguistic standardness prediction. Even when using no language-dependent information sources, our predictor still outperforms an OOV-ratio baseline by a wide margin. In addition, we show that very little manually annotated training data is required to perform good prediction.
Predicting standardness can help decide when to attempt to normalise the data to achieve better annotation results with standard tools, and provide linguists who are interested in nonstandard language with a simple way of selecting only such texts for their research.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work described in this paper was funded by the Slovenian Research Agency, project J6-6842 and by the European Fund for Regional Development 2007-2013.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"reiter-2019-natural","url":"https:\/\/aclanthology.org\/W19-8402","title":"Natural Language Generation Challenges for Explainable AI","abstract":"Good quality explanations of artificial intelligence (XAI) reasoning must be written (and evaluated) for an explanatory purpose, targeted towards their readers, have a good narrative and causal structure, and highlight where uncertainty and data quality affect the AI output. I discuss these challenges from a Natural Language Generation (NLG) perspective, and highlight four specific \"NLG for XAI\" research challenges.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This paper started off as a (much shorter) blog https:\/\/ehudreiter.com\/2019\/07\/19\/nlg-and-explainable-ai\/. My thanks to the people who commented on this blog, as well as the anonymous reviewers, the members of the Aberdeen CLAN research group, the members of the Explaining the Outcomes of Complex Models project at Monash, and the members of the NL4XAI research project, all of whom gave me excellent feedback and suggestions. My thanks also to Prof Ren\u00e9 van der Wal for his help in the experiment mentioned in section 3.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"fernandez-gonzalez-gomez-rodriguez-2018-dynamic-oracle","url":"https:\/\/aclanthology.org\/N18-2062","title":"A Dynamic Oracle for Linear-Time 2-Planar Dependency Parsing","abstract":"We propose an efficient dynamic oracle for training the 2-Planar transition-based parser, a linear-time parser with over 99% coverage on non-projective syntactic corpora.
This novel approach outperforms the static training strategy in the vast majority of languages tested and scores better on most datasets than the arc-hybrid parser enhanced with the Swap transition, which can handle unrestricted non-projectivity.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has received funding from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150), from the TELEPARES-UDC project (FFI2014-51978-C2-2-R) and the ANSWER-ASAP project (TIN2017-85160-C2-1-R) from MINECO, and from Xunta de Galicia (ED431B 2017\/01).","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"forsbom-2009-extending","url":"https:\/\/aclanthology.org\/W09-4607","title":"Extending the View: Explorations in Bootstrapping a Swedish PoS Tagger","abstract":"State-of-the-art statistical part-of-speech taggers mainly use information on tag bi- or trigrams, depending on the size of the training corpus. Some also use lexical emission probabilities above unigrams with beneficial results. In both cases, a wider context usually gives better accuracy for a large training corpus, which in turn gives better accuracy than a smaller one. Large corpora with validated tags, however, are scarce, so a bootstrap technique can be used. As the corpus grows, it is probable that a widened context would improve results even further. In this paper, we looked at the contribution to accuracy of such an extended view for both tag transitions and lexical emissions, applied to both a validated Swedish source corpus and a raw bootstrap corpus. We found that the extended view was more important for tag transitions, in particular if applied to the bootstrap corpus. For lexical emission, it was also more important if applied to the bootstrap corpus than to the source corpus, although it was beneficial for both. The overall best tagger had an accuracy of 98.05%.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Anna S\u00e5gvall Hein and the anonymous reviewers for valuable comments, Eva Forsbom","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"portisch-etal-2020-kgvec2go","url":"https:\/\/aclanthology.org\/2020.lrec-1.692","title":"KGvec2go -- Knowledge Graph Embeddings as a Service","abstract":"In this paper, we present KGvec2go, a Web API for accessing and consuming graph embeddings in a lightweight fashion in downstream applications. Currently, we serve pre-trained embeddings for four knowledge graphs. We introduce the service and its usage, and we show further that the trained models have semantic value by evaluating them on multiple semantic benchmarks.
The evaluation also reveals that the combination of multiple models can lead to a better outcome than the best individual model.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"vyas-pantel-2009-semi","url":"https:\/\/aclanthology.org\/N09-1033","title":"Semi-Automatic Entity Set Refinement","abstract":"State of the art set expansion algorithms produce varying quality expansions for different entity types. Even for the highest quality expansions, errors still occur and manual refinements are necessary for most practical uses. In this paper, we propose algorithms to aid this refinement process, greatly reducing the amount of manual labor required. The methods rely on the fact that most expansion errors are systematic, often stemming from the fact that some seed elements are ambiguous. Using our methods, empirical evidence shows that average R-precision over random entity sets improves by 26% to 51% when given from 5 to 10 manually tagged errors. Both proposed refinement models have linear time complexity in set size allowing for practical online use in set expansion systems.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"bhat-etal-2020-towards","url":"https:\/\/aclanthology.org\/2020.emnlp-main.675","title":"Towards Modeling Revision Requirements in wikiHow Instructions","abstract":"wikiHow is a resource of how-to guides that describe the steps necessary to accomplish a goal. Guides in this resource are regularly edited by a community of users, who try to improve instructions in terms of style, clarity and correctness. In this work, we test whether the need for such edits can be predicted automatically. For this task, we extend an existing resource of textual edits with a complementary set of approx. 4 million sentences that remain unedited over time and report on the outcome of two revision modeling experiments.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research presented in this paper was funded by the DFG Emmy Noether program (RO 4848\/2-1).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"falis-etal-2019-ontological","url":"https:\/\/aclanthology.org\/D19-6220","title":"Ontological attention ensembles for capturing semantic concepts in ICD code prediction from clinical text","abstract":"We present a semantically interpretable system for automated ICD coding of clinical text documents. Our contribution is an ontological attention mechanism which matches the structure of the ICD ontology, in which shared attention vectors are learned at each level of the hierarchy, and combined into label-dependent ensembles. Analysis of the attention heads shows that shared concepts are learned by the lowest common denominator node. This allows child nodes to focus on the differentiating concepts, leading to efficient learning and memory usage.
Visualisation of the multilevel attention on the original text allows explanation of the code predictions according to the semantics of the ICD ontology. On the MIMIC-III dataset we achieve a 2.7% absolute (11% relative) improvement from 0.218 to 0.245 macro-F1 score compared to the previous state of the art across 3,912 codes. Finally, we analyse the labelling inconsistencies arising from different coding practices which limit performance on this task.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"shaikh-etal-2008-linguistic","url":"https:\/\/aclanthology.org\/I08-2128","title":"Linguistic Interpretation of Emotions for Affect Sensing from Text","abstract":"Several approaches have already been employed to \"sense\" affective information from text, but none of those ever considered the cognitive and appraisal structure of individual emotions. Hence this paper aims at interpreting the cognitive theory of emotions known as the OCC emotion model, from a linguistic standpoint. The paper provides rules for the OCC emotion types for the task of sensing affective information from text. Since the OCC emotions are associated with several cognitive variables, we explain how the values could be assigned to those by analyzing and processing natural language components. Empirical results indicate that our system outperforms another state-of-the-art system.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"elgohary-carpuat-2016-learning","url":"https:\/\/aclanthology.org\/P16-2059","title":"Learning Monolingual Compositional Representations via Bilingual Supervision","abstract":"Bilingual models that capture the semantics of sentences are typically only evaluated on cross-lingual transfer tasks such as cross-lingual document categorization or machine translation. In this work, we evaluate the quality of the monolingual representations learned with a variant of the bilingual compositional model of Hermann and Blunsom (2014), when viewing translations in a second language as a semantic annotation of the original language text. We show that compositional objectives based on phrase translation pairs outperform compositional objectives based on bilingual sentences and on monolingual paraphrases.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"fukui-etal-2017-spectral","url":"https:\/\/aclanthology.org\/W17-2405","title":"Spectral Graph-Based Method of Multimodal Word Embedding","abstract":"In this paper, we propose a novel method for multimodal word embedding, which exploits a generalized framework of multiview spectral graph embedding to take into account visual appearances or scenes denoted by words in a corpus.
We evaluated our method through word similarity tasks and a concept-to-image search task, having found that it provides word representations that reflect visual information, while somewhat trading off the performance on the word similarity tasks. Moreover, we demonstrate that our method captures multimodal linguistic regularities, which enable recovering relational similarities between words and images by vector arithmetic.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"kaplan-etal-2002-adapting","url":"https:\/\/aclanthology.org\/W02-1506","title":"Adapting Existing Grammars: The XLE Experience","abstract":"We report on the XLE parser and grammar development platform (Maxwell and Kaplan, 1993) and describe how a basic Lexical Functional Grammar for English has been adapted to two different corpora (newspaper text and copier repair tips).","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"saetre-etal-2009-protein","url":"https:\/\/aclanthology.org\/W09-1414","title":"From Protein-Protein Interaction to Molecular Event Extraction","abstract":"This document describes the methods and results for our participation in the BioNLP'09 Shared Task #1 on Event Extraction. It also contains some error analysis and a brief discussion of the results. Previous shared tasks in the BioNLP community have focused on extracting gene and protein names, and on finding (direct) protein-protein interactions (PPI). This year's task was slightly different, since the protein names were already manually annotated in the text. The new challenge was to extract biological events involving these given gene and gene products. We modified a publicly available system (AkanePPI) to apply it to this new, but similar, protein interaction task. AkanePPI has previously achieved state-of-the-art performance on all existing public PPI corpora, and only small changes were needed to achieve competitive results on this event extraction task. Our official result was an F-score of 36.9%, which was ranked as number six among submissions from 24 different groups. We later balanced the recall\/precision by including more predictions than just the most confident one in ambiguous cases, and this raised the F-score on the test-set to 42.6%. The new Akane program can be used freely for academic purposes.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"\"Grant-in-Aid for Specially Promoted Research\" and \"Genome Network Project\", MEXT, Japan.","year":2009,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"hromada-2013-random","url":"https:\/\/aclanthology.org\/R13-2012","title":"Random Projection and Geometrization of String Distance Metrics","abstract":"Edit distance is not the only way in which the distance between two character sequences can be calculated. Strings can also be compared in somewhat subtler geometric ways.
A procedure inspired by Random Indexing can attribute a D-dimensional geometric coordinate to any character N-gram present in the corpus and can subsequently represent the word as a sum of N-gram fragments which the string contains. Thus, any word can be described as a point in a dense N-dimensional space and the calculation of their distance can be realized by applying traditional Euclidean measures. Strong correlation exists, within the Keats Hyperion corpus, between such cosine measure and Levenshtein distance. Overlaps between the centroid of Levenshtein distance matrix space and centroids of vector spaces generated by Random Projection were also observed. Contrary to standard non-random \"sparse\" method of measuring cosine distances between two strings, the method based on Random Projection tends to naturally promote not the shortest but rather longer strings. The geometric approach yields finer output range than Levenshtein distance and the retrieval of the nearest neighbor of text's centroid could have, due to limited dimensionality of Randomly Projected space, smaller complexity than other vector methods.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The author would like to thank Adil El-Ghali for the introduction to Random Indexing as well as his comments concerning the present paper; to prof. Charles Tijus and doc. Ivan Sekaj for their support and to Aliancia Fair-Play for permission to execute some code on their servers.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"vijayaraghavan-etal-2020-dapper","url":"https:\/\/aclanthology.org\/2020.aacl-main.65","title":"DAPPER: Learning Domain-Adapted Persona Representation Using Pretrained BERT and External Memory","abstract":"Research in building intelligent agents has emphasized the need for understanding characteristic behavior of people. In order to reflect human-like behavior, agents require the capability to comprehend the context, infer individualized persona patterns and incrementally learn from experience. In this paper, we present a model called DAPPER that can learn to embed persona from natural language and alleviate task or domain-specific data sparsity issues related to personas. To this end, we implement a text encoding strategy that leverages a pretrained language model and an external memory to produce domain-adapted persona representations. Further, we evaluate the transferability of these embeddings by simulating low-resource scenarios. Our comparative study demonstrates the capability of our method over other approaches towards learning rich transferable persona embeddings.
Empirical evidence suggests that the learnt persona embeddings can be effective in downstream tasks like hate speech detection.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"setiawan-etal-2009-topological","url":"https:\/\/aclanthology.org\/P09-1037","title":"Topological Ordering of Function Words in Hierarchical Phrase-based Translation","abstract":"Hierarchical phrase-based models are attractive because they provide a consistent framework within which to characterize both local and long-distance reorderings, but they also make it difficult to distinguish many implausible reorderings from those that are linguistically plausible. Rather than appealing to annotation-driven syntactic modeling, we address this problem by observing the influential role of function words in determining syntactic structure, and introducing soft constraints on function word relationships as part of a standard log-linear hierarchical phrase-based model. Experimentation on Chinese-English and Arabic-English translation demonstrates that the approach yields significant gains in performance.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by the ","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"wang-matthews-2008-species","url":"https:\/\/aclanthology.org\/W08-0610","title":"Species Disambiguation for Biomedical Term Identification","abstract":"An important task in information extraction (IE) from biomedical articles is term identification (TI), which concerns linking entity mentions (e.g., terms denoting proteins) in text to unambiguous identifiers in standard databases (e.g., RefSeq). Previous work on TI has focused on species-specific documents. However, biomedical documents, especially full-length articles, often talk about entities across a number of species, in which case resolving species ambiguity becomes an indispensable part of TI. This paper describes our rule-based and machine-learning based approaches to species disambiguation and demonstrates that performance of TI can be improved by over 20% if the correct species are known. We also show that using the species predicted by the automatic species taggers can improve TI by a large margin.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We tested the TI system on the four original BioCreAtIvE GN datasets separately and the averaged performance was about the median among the participating systems in the workshops.
We did not optimise the TXM TI system on BioCreAtIvE, as our point here is to measure the TI performance with or without help from the automatic predicted species.","year":2008,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"martinovic-1994-universal","url":"https:\/\/aclanthology.org\/C94-2148","title":"Universal Guides and Finiteness and Symmetry of Grammar Processing Algorithms","abstract":"This paper presents a novel technique called \"universal guides\" which explores inherent properties of logic grammars (changing variable binding status) in order to characterize formal criteria for termination in a derivation process. The notion of universal guides also offers a new framework in which both parsing and generation can be viewed merely as two different instances of the same generic process: guide consumption. This technique generalizes and exemplifies a new and original use of an existing concept of \"proper guides\" recently proposed in literature for controlling top-down left-to-right (TDLR) execution in logic programs. We show that universal guides are independent of a particular grammar evaluation strategy. Also, unlike proper guides they can be specified in the same manner for any given algorithm without knowing in advance whether the algorithm is a parsing or a generation algorithm. Their introduction into a grammar prevents as well the occurrence of certain grammar rules an infinite number of times during a derivation process.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"suzuki-etal-2002-topic","url":"https:\/\/aclanthology.org\/C02-2012","title":"Topic Tracking using Subject Templates and Clustering Positive Training Instances","abstract":null,"label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"stoyanchev-etal-2008-exact","url":"https:\/\/aclanthology.org\/W08-1802","title":"Exact Phrases in Information Retrieval for Question Answering","abstract":"Question answering (QA) is the task of finding a concise answer to a natural language question. The first stage of QA involves information retrieval. Therefore, performance of an information retrieval subsystem serves as an upper bound for the performance of a QA system. In this work we use phrases automatically identified from questions as exact match constituents to search queries. Our results show an improvement over baseline on several document and sentence retrieval measures on the WEB dataset. We get a 20% relative improvement in MRR for sentence extraction on the WEB dataset when using automatically generated phrases and a further 9.5% relative improvement when using manually annotated phrases. Surprisingly, a separate experiment on the indexed AQUAINT dataset showed no effect on IR performance of using exact phrases.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank professor Amanda Stent for suggestions about experiments and proofreading the paper.
We would like to thank the reviewers for useful comments.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"kiritchenko-mohammad-2016-capturing","url":"https:\/\/aclanthology.org\/N16-1095","title":"Capturing Reliable Fine-Grained Sentiment Associations by Crowdsourcing and Best-Worst Scaling","abstract":"Access to word-sentiment associations is useful for many applications, including sentiment analysis, stance detection, and linguistic analysis. However, manually assigning fine-grained sentiment association scores to words has many challenges with respect to keeping annotations consistent. We apply the annotation technique of Best-Worst Scaling to obtain real-valued sentiment association scores for words and phrases in three different domains: general English, English Twitter, and Arabic Twitter. We show that on all three domains the ranking of words by sentiment remains remarkably consistent even when the annotation process is repeated with a different set of annotators. We also, for the first time, determine the minimum difference in sentiment association that is perceptible to native speakers of a language.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"lin-chen-2010-risk","url":"https:\/\/aclanthology.org\/P10-1009","title":"A Risk Minimization Framework for Extractive Speech Summarization","abstract":"In this paper, we formulate extractive summarization as a risk minimization problem and propose a unified probabilistic framework that naturally combines supervised and unsupervised summarization models to inherit their individual merits as well as to overcome their inherent limitations. In addition, the introduction of various loss functions also provides the summarization framework with a flexible but systematic way to render the redundancy and coherence relationships among sentences and between sentences and the whole document, respectively. Experiments on speech summarization show that the methods deduced from our framework are very competitive with existing summarization approaches.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"zhou-etal-2021-defense","url":"https:\/\/aclanthology.org\/2021.acl-long.426","title":"Defense against Synonym Substitution-based Adversarial Attacks via Dirichlet Neighborhood Ensemble","abstract":"Although deep neural networks have achieved prominent performance on many NLP tasks, they are vulnerable to adversarial examples. We propose Dirichlet Neighborhood Ensemble (DNE), a randomized method for training a robust model to defend against synonym substitution-based attacks. During training, DNE forms virtual sentences by sampling embedding vectors for each word in an input sentence from a convex hull spanned by the word and its synonyms, and it augments them with the training data. In such a way, the model is robust to adversarial attacks while maintaining the performance on the original clean data.
DNE is agnostic to the network architectures and scales to large models (e.g., BERT) for NLP applications. Through extensive experimentation, we demonstrate that our method consistently outperforms recently proposed defense methods by a significant margin across different network architectures and multiple data sets.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by Shanghai Municipal Science and Technology Major Project (No. 2021SHZDZX0103), National Science Foundation of China (No. 62076068) and Zhangjiang Lab.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"lin-yu-2020-adaptive","url":"https:\/\/aclanthology.org\/2020.rocling-1.22","title":"An Adaptive Method for Building a Chinese Dimensional Sentiment Lexicon","abstract":"16, 17, 18], so that in the end-to-end (End-to-End) back-propagation process the parameters of the neurons are adjusted automatically to minimize the error. In terms of architecture, such a model can be roughly divided into two parts, an encoder (Encoder) and a decoder (Decoder): the encoder is responsible for extracting features from the raw data, while the decoder is responsible for decoding the extracted features into target values. Because the deep learning architecture has an encoder, which can preserve representations (Representation) through mapping (Mapping), it possesses excellent representation learning ability [19, 20, 21], e.g., word embeddings (Word Embedding) [","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"he-etal-2020-learning","url":"https:\/\/aclanthology.org\/2020.coling-main.106","title":"Learning Efficient Task-Specific Meta-Embeddings with Word Prisms","abstract":"Word embeddings are trained to predict word cooccurrence statistics, which leads them to possess different lexical properties (syntactic, semantic, etc.) depending on the notion of context defined at training time. These properties manifest when querying the embedding space for the most similar vectors, and when used at the input layer of deep neural networks trained to solve downstream NLP problems. Meta-embeddings combine multiple sets of differently trained word embeddings, and have been shown to successfully improve intrinsic and extrinsic performance over equivalent models which use just one set of source embeddings. We introduce word prisms: a simple and efficient meta-embedding method that learns to combine source embeddings according to the task at hand. Word prisms learn orthogonal transformations to linearly combine the input source embeddings, which allows them to be very efficient at inference time.
We evaluate word prisms in comparison to other meta-embedding methods on six extrinsic evaluations and observe that word prisms offer improvements in performance on all tasks.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by the Fonds de recherche du Qu\u00e9bec -Nature et technologies, by the Natural Sciences and Engineering Research Council of Canada, and by Compute Canada. The last author is supported in part by the Canada CIFAR AI Chair program.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"li-etal-2007-semantic","url":"https:\/\/aclanthology.org\/P07-1016","title":"Semantic Transliteration of Personal Names","abstract":"Words of foreign origin are referred to as borrowed words or loanwords. A loanword is usually imported to Chinese by phonetic transliteration if a translation is not easily available. Semantic transliteration is seen as a good tradition in introducing foreign words to Chinese. Not only does it preserve how a word sounds in the source language, it also carries forward the word's original semantic attributes. This paper attempts to automate the semantic transliteration process for the first time. We conduct an inquiry into the feasibility of semantic transliteration and propose a probabilistic model for transliterating personal names in Latin script into Chinese. The results show that semantic transliteration substantially and consistently improves accuracy over phonetic transliteration in all the experiments.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"nabizadeh-etal-2020-myfixit","url":"https:\/\/aclanthology.org\/2020.lrec-1.260","title":"MyFixit: An Annotated Dataset, Annotation Tool, and Baseline Methods for Information Extraction from Repair Manuals","abstract":"Text instructions are among the most widely used media for learning and teaching. Hence, to create assistance systems that are capable of supporting humans autonomously in new tasks, it would be immensely productive, if machines were enabled to extract task knowledge from such text instructions. In this paper, we, therefore, focus on information extraction (IE) from the instructional text in repair manuals. This brings with it the multiple challenges of information extraction from the situated and technical language in relatively long and often complex instructions. To tackle these challenges, we introduce a semi-structured dataset of repair manuals. The dataset is annotated in a large category of devices, with information that we consider most valuable for an automated repair assistant, including the required tools and the disassembled parts at each step of the repair progress. We then propose methods that can serve as baselines for this IE task: an unsupervised method based on a bags-of-n-grams similarity for extracting the needed tools in each repair step, and a deep-learning-based sequence labeling model for extracting the identity of disassembled parts.
These baseline methods are integrated into a semi-automatic web-based annotator application that is also available along with the dataset.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"volk-1997-probing","url":"https:\/\/aclanthology.org\/P97-1015","title":"Probing the Lexicon in Evaluating Commercial MT Systems","abstract":"In the past the evaluation of machine translation systems has focused on single system evaluations because there were only a few systems available. But now there are several commercial systems for the same language pair. This requires new methods of comparative evaluation. In the paper we propose a black-box method for comparing the lexical coverage of MT systems. The method is based on lists of words from different frequency classes. It is shown how these word lists can be compiled and used for testing. We also present the results of using our method on 6 MT systems that translate between English and German.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"johnson-riezler-2000-exploiting","url":"https:\/\/aclanthology.org\/A00-2021","title":"Exploiting auxiliary distributions in stochastic unification-based grammars","abstract":"This paper describes a method for estimating conditional probability distributions over the parses of \"unification-based\" grammars which can utilize auxiliary distributions that are estimated by other means. We show how this can be used to incorporate information about lexical selectional preferences gathered from other sources into Stochastic \"Unification-based\" Grammars (SUBGs). While we apply this estimator to a Stochastic Lexical-Functional Grammar, the method is general, and should be applicable to stochastic versions of HPSGs, categorial grammars and transformational grammars.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"palmer-etal-2000-semantic","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2000\/pdf\/197.pdf","title":"Semantic Tagging for the Penn Treebank","abstract":"This paper describes the methodology that is being used to augment the Penn Treebank annotation with sense tags and other types of semantic information. Inspired by the results of SENSEVAL, and the high inter-annotator agreement that was achieved there, similar methods were used for a pilot study of 5000 words of running text from the Penn Treebank. Using the same techniques of allowing the annotators to discuss difficult tagging cases and to revise WordNet entries if necessary, comparable inter-annotator rates have been achieved. The criteria for determining appropriate revisions and ensuring clear sense distinctions are described.
We are also using hand correction of automatic predicate argument structure information to provide additional thematic role labeling.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This paper reports on work supported by NSF grant IIS-9800658.","year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"wood-doughty-etal-2022-model","url":"https:\/\/aclanthology.org\/2022.bionlp-1.41","title":"Model Distillation for Faithful Explanations of Medical Code Predictions","abstract":"Machine learning models that offer excellent predictive performance often lack the interpretability necessary to support integrated human machine decision-making. In clinical or other high-risk settings, domain experts may be unwilling to trust model predictions without explanations. Work in explainable AI must balance competing objectives along two different axes: 1) Models should ideally be both accurate and simple. 2) Explanations must balance faithfulness to the model's decision-making with their plausibility to a domain expert. We propose to use knowledge distillation, or training a student model that mimics the behavior of a trained teacher model, as a technique to generate faithful and plausible explanations. We evaluate our approach on the task of assigning ICD codes to clinical notes to demonstrate that the student model is faithful to the teacher model's behavior and produces quality natural language explanations.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We acknowledge support provided by the Johns Hopkins Institute for Assured Autonomy. We thank Sarah Wiegreffe and Jacob Eisenstein for their help and plausibility annotations.","year":2022,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"croce-etal-2019-auditing","url":"https:\/\/aclanthology.org\/D19-1415","title":"Auditing Deep Learning processes through Kernel-based Explanatory Models","abstract":"While NLP systems become more pervasive, their accountability gains value as a focal point of effort. Epistemological opaqueness of nonlinear learning methods, such as deep learning models, can be a major drawback for their adoptions. In this paper, we discuss the application of Layerwise Relevance Propagation over a linguistically motivated neural architecture, the Kernel-based Deep Architecture, in order to trace back connections between linguistic properties of input instances and system decisions. Such connections then guide the construction of argumentations on the network's inferences, i.e., explanations based on real examples that are semantically related to the input. We also propose here a methodology to evaluate the transparency and coherence of analogy-based explanations modeling an audit stage for the system.
Quantitative analysis on two semantic tasks, i.e., question classification and semantic role labeling, shows that the explanatory capabilities (native in KDAs) are effective and they pave the way to more complex argumentation methods.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"mohri-etal-2004-statistical","url":"https:\/\/aclanthology.org\/P04-1008","title":"Statistical Modeling for Unit Selection in Speech Synthesis","abstract":"Traditional concatenative speech synthesis systems use a number of heuristics to define the target and concatenation costs, essential for the design of the unit selection component. In contrast to these approaches, we introduce a general statistical modeling framework for unit selection inspired by automatic speech recognition. Given appropriate data, techniques based on that framework can result in a more accurate unit selection, thereby improving the general quality of a speech synthesizer. They can also lead to a more modular and a substantially more efficient system. We present a new unit selection system based on statistical modeling. To overcome the original absence of data, we use an existing high-quality unit selection system to generate a corpus of unit sequences. We show that the concatenation cost can be accurately estimated from this corpus using a statistical n-gram language model over units. We used weighted automata and transducers for the representation of the components of the system and designed a new and more efficient composition algorithm making use of string potentials for their combination. The resulting statistical unit selection is shown to be about 2.6 times faster than the last release of the AT&T Natural Voices Product while preserving the same quality, and offers much flexibility for the use and integration of new and more complex components.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Mark Beutnagel for helping us clarify some of the details of the unit selection system in the AT&T Natural Voices Product speech synthesizer. Mark also generated the training corpora and set up the listening test used in our experiments. We also acknowledge discussions with Brian Roark about various statistical language modeling topics in the context of unit selection.","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"clark-gardner-2018-simple","url":"https:\/\/aclanthology.org\/P18-1078","title":"Simple and Effective Multi-Paragraph Reading Comprehension","abstract":"We introduce a method of adapting neural paragraph-level question answering models to the case where entire documents are given as input. Most current question answering models cannot scale to document or multi-document input, and naively applying these models to each paragraph independently often results in them being distracted by irrelevant text. We show that it is possible to significantly improve performance by using a modified training scheme that teaches the model to ignore non-answer containing paragraphs.
Our method involves sampling multiple paragraphs from each document, and using an objective function that requires the model to produce globally correct output. We additionally identify and improve upon a number of other design decisions that arise when working with document-level data. Experiments on TriviaQA and SQuAD show our method advances the state of the art, including a 10 point gain on TriviaQA.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"barbella-forbus-2010-analogical","url":"https:\/\/aclanthology.org\/W10-0912","title":"Analogical Dialogue Acts: Supporting Learning by Reading Analogies","abstract":"Analogy is heavily used in written explanations, particularly in instructional texts. We introduce the concept of analogical dialogue acts (ADAs) which represent the roles utterances play in instructional analogies. We describe a catalog of such acts, based on ideas from structure-mapping theory. We focus on the operations that these acts lead to while understanding instructional texts, using the Structure-Mapping Engine (SME) and dynamic case construction in a computational model. We test this model on a small corpus of instructional analogies, expressed in simplified English, which were understood via a semiautomatic natural language system using analogical dialogue acts. The model enabled a system to answer questions after understanding the analogies that it was not able to answer without them.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by the Intelligent and Autonomous Systems Program of the Office of Naval Research.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"sundheim-1991-third","url":"https:\/\/aclanthology.org\/H91-1059","title":"Third Message Understanding Evaluation and Conference (MUC-3): Phase 1 Status Report","abstract":"The Naval Ocean Systems Center is conducting the third in a series of evaluations of English text analysis systems. The premise on which the evaluations are based is that task-oriented tests enable straightforward comparisons among systems and provide useful quantitative data on the state of the art in text understanding. Furthermore, the data can be interpreted in light of information known about each system's text analysis techniques in order to yield qualitative insights into the relative validity of those techniques as applied to the general problem of information extraction. A dry-run phase of the third evaluation was completed in February, 1991, and the official testing will be done in May, 1991, concluding with the Third Message Understanding Conference (MUC-3). Twelve sites reported results for the dry-run test at a meeting held in February, 1991.
All systems are being evaluated on the basis of performance on the information extraction task in a blind test at the end of each phase of the evaluation.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The author is indebted to all the organizations participating in MUC-3 and to certain individuals in particular who have contributed extra time and energy to ensure the evaluation's success, among them Laura Balcom, Sean Boisen, Nancy Chinchor, Ralph Grishman, Pete Halverson, Jerry Hobbs, Cheryl Kariya, George Krupka, David Lewis, Lisa Rau, John Sterling, Charles Wayne, and Carl Weir.","year":1991,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"kementchedjhieva-etal-2021-john","url":"https:\/\/aclanthology.org\/2021.findings-acl.429","title":"John praised Mary because \\_he\\_? Implicit Causality Bias and Its Interaction with Explicit Cues in LMs","abstract":"Some interpersonal verbs can implicitly attribute causality to either their subject or their object and are therefore said to carry an implicit causality (IC) bias. Through this bias, causal links can be inferred from a narrative, aiding language comprehension. We investigate whether pre-trained language models (PLMs) encode IC bias and use it at inference time. We find that to be the case, albeit to different degrees, for three distinct PLM architectures. However, causes do not always need to be implicit: when a cause is explicitly stated in a subordinate clause, an incongruent IC bias associated with the verb in the main clause leads to a delay in human processing. We hypothesize that the temporary challenge humans face in integrating the two contradicting signals, one from the lexical semantics of the verb, one from the sentence-level semantics, would be reflected in higher error rates for models on tasks dependent on causal links. The results of our study lend support to this hypothesis, suggesting that PLMs tend to prioritize lexical patterns over higher-order signals.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Daniel Hershcovich, Ana Valeria Gonz\u00e1lez, Emanuele Bugliarello, and Mareike Hartmann for feedback on the drafts of this paper. We thank Desmond Elliott, Stella Frank and Dustin Wright, and Mareike Hartmann for their help with the annotation of the newly developed stimuli. Yova was funded by Innovation Fund Denmark, under the AutoML4CS project. Mark received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (FAST-PARSE, grant agreement No 714150) and from the Centro de Investigaci\u00f3n de Galicia (CITIC) which is funded by the Xunta de Galicia and the European Union (ERDF - Galicia 2014-2020 Program) by grant ED431G 2019\/01.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"okada-miura-1982-conceptual","url":"https:\/\/aclanthology.org\/C82-2051","title":"Conceptual Taxonomy of Japanese Adjectives for Understanding Natural Language and Picture Patterns","abstract":"This paper presents a conceptual taxonomy of Japanese adjectives, succeeding that on Japanese verbs.
In this taxonomy, natural language is associated with real world things -- matter, events, attributes -- and mental activities -- spiritual and sensual. Adjective concepts are divided into two large classes, simple and non-simple. Simple concepts cannot be reduced into further elementary adjective concepts, whereas non-simple ones can be. Roughly speaking, simple concepts are concrete and can be directly associated with physical and mental attributes, whereas non-simple ones are abstract and indirectly associated with them.\nVerb concepts were well understood as \"change\" from state S0 to state S1 as shown in Fig. 1. Adjective concepts are considered to be captured as the \"difference\" between objects O0 and O1. Fig. 2 shows how the difference in vertical length between O0 and O1 brings about the concept of \"high\". Notice that surface structures often lack the expression of O0 like \"yama-ga takai (the mountain is high)\". Since the meaning of \"high\" cannot be expressed only by O1, deep structures need O0 as an object for comparison. otoko-ga ie-kara deru.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1982,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"tanaka-2002-measuring","url":"https:\/\/aclanthology.org\/C02-1065","title":"Measuring the Similarity between Compound Nouns in Different Languages Using Non-Parallel Corpora","abstract":"This paper presents a method that measures the similarity between compound nouns in different languages to locate translation equivalents from corpora. The method uses information from unrelated corpora in different languages that do not have to be parallel. This means that many corpora can be used. The method compares the contexts of target compound nouns and translation candidates in the word or semantic attribute level. In this paper, we show how this measuring method can be applied to select the best English translation candidate for Japanese compound nouns in more than 70% of the cases.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by the Research Collaboration between NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation and CSLI, Stanford University. The author would like to thank Timothy Baldwin of CSLI and Francis Bond of NTT for their valuable comments.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"wei-etal-2009-co","url":"https:\/\/aclanthology.org\/P09-2030","title":"Co-Feedback Ranking for Query-Focused Summarization","abstract":"In this paper, we propose a novel ranking framework, Co-Feedback Ranking (Co-FRank), which allows two base rankers to supervise each other during the ranking process by providing their own ranking results as feedback to the other parties so as to boost the ranking performance. The mutual ranking refinement process continues until the two base rankers cannot learn from each other any more. The overall performance is improved by the enhancement of the base rankers through the mutual learning mechanism. We apply this framework to the sentence ranking problem in query-focused summarization and evaluate its effectiveness on the DUC 2005 data set.
The results are promising.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work described in this paper was supported by the Hong Kong Polytechnic University internal grants (G-YG80 and G-YH53) and the China NSF grant (60703008).","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"kroll-etal-2014-study","url":"https:\/\/aclanthology.org\/W14-6006","title":"A Study of Scientific Writing: Comparing Theoretical Guidelines with Practical Implementation","abstract":"Good scientific writing is a skill researchers seek to acquire. Textbook literature provides guidelines to improve scientific writing, for instance, \"use active voice when describing your own work\". In this paper we investigate to what extent researchers adhere to textbook principles in their articles. In our analyses we examine a set of selected principles which (i) are general and (ii) verifiable by applying text mining and natural language processing techniques. We develop a framework to automatically analyse a large data set containing \u223c14,000 scientific articles received from Mendeley and PubMed. We are interested in whether adhering to writing principles is related to scientific quality, scientific domain or gender and whether these relations change over time. Our results show (i) a clear relation between journal quality and scientific imprecision, i.e. journals with low impact factors exhibit higher numbers of imprecision indicators such as number of citation bunches and number of relativating words and (ii) that writing style partly depends on domain characteristics and preferences.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Mendeley for providing the data set as well as Werner Klieber for crawling the PubMed data set. The presented work was developed within the CODE project funded by the EU FP7 (grant no. 296150). The Know-Center is funded within the Austrian COMET Program - Competence Centers for Excellent Technologies - under the auspices of the Austrian Federal Ministry of Transport, Innovation and Technology, the Austrian Federal Ministry of Economy, Family and Youth and by the State of Styria. COMET is managed by the Austrian Research Promotion Agency FFG.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"horne-etal-2020-grubert","url":"https:\/\/aclanthology.org\/2020.aacl-srw.19","title":"GRUBERT: A GRU-Based Method to Fuse BERT Hidden Layers for Twitter Sentiment Analysis","abstract":"In this work, we introduce a GRU-based architecture called GRUBERT that learns to map the different BERT hidden layers to fused embeddings with the aim of achieving high accuracy on the Twitter sentiment analysis task. Tweets are known for their highly diverse language, and by exploiting different linguistic information present across BERT hidden layers, we can capture the full extent of this language at the embedding level. Our method can be easily adapted to other embeddings capturing different linguistic information. We show that our method outperforms well-known heuristics of using BERT (e.g. using only the last layer) and other embeddings such as ELMo.
We observe potential label noise resulting from the data acquisition process and employ early stopping as well as a voting classifier to overcome it.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the Data Analytics Lab at ETH Zurich for providing computing infrastructure. We also thank them, in addition to our mentor Shuhei Kurita and the anonymous reviewers, for valuable feedback.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"lu-roth-2015-joint","url":"https:\/\/aclanthology.org\/D15-1102","title":"Joint Mention Extraction and Classification with Mention Hypergraphs","abstract":"We present a novel model for the task of joint mention extraction and classification. Unlike existing approaches, our model is able to effectively capture overlapping mentions with unbounded lengths. The model is highly scalable, with a time complexity that is linear in the number of words in the input sentence and linear in the number of possible mention classes. Our model can be extended to additionally capture mention heads explicitly in a joint manner under the same time complexity. We demonstrate the effectiveness of our model through extensive experiments on standard datasets.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Kian Ming A. Chai, Hai Leong Chieu and the three anonymous reviewers for their comments on this work. This work is supported by Temasek Lab of Singapore University of Technology and Design project IGDSS1403011 and IGDST1403013, and is partly supported by DARPA (under agreement number FA8750-13-2-0008).","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"jansen-ustalov-2019-textgraphs","url":"https:\/\/aclanthology.org\/D19-5309","title":"TextGraphs 2019 Shared Task on Multi-Hop Inference for Explanation Regeneration","abstract":"While automated question answering systems are increasingly able to retrieve answers to natural language questions, their ability to generate detailed human-readable explanations for their answers is still quite limited. The Shared Task on Multi-Hop Inference for Explanation Regeneration tasks participants with regenerating detailed gold explanations for standardized elementary science exam questions by selecting facts from a knowledge base of semistructured tables. Each explanation contains between 1 and 16 interconnected facts that form an \"explanation graph\" spanning core scientific knowledge and detailed world knowledge. It is expected that successfully combining these facts to generate detailed explanations will require advancing methods in multihop inference and information combination, and will make use of the supervised training data provided by the WorldTree explanation corpus. The top-performing system achieved a mean average precision (MAP) of 0.56, substantially advancing the state-of-the-art over a baseline information retrieval model. 
Detailed extended analyses of all submitted systems showed large relative improvements in accessing the most challenging multi-hop inference problems, while absolute performance remains low, highlighting the difficulty of generating detailed explanations through multi-hop reasoning.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"who were funded by the Allen Institute for Artificial Intelligence (AI2). Peter Jansen's work on the shared task was supported by National Science Foundation (NSF Award #1815948, \"Explainable Natural Language Inference\"). Dmitry Ustalov's work on the shared task at the University of Mannheim was supported by the Deutsche Forschungsgemeinschaft (DFG) foundation under the \"JOIN-T\" project.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"engelbrecht-schultz-2005-rapid","url":"https:\/\/aclanthology.org\/2005.iwslt-1.22","title":"Rapid Development of an Afrikaans English Speech-to-Speech Translator","abstract":"In this paper we investigate the rapid deployment of a two-way Afrikaans to English Speech-to-Speech Translation system. We discuss the approaches and amount of work involved to port a system to a new language pair, i.e. the steps required to rapidly adapt ASR, MT and TTS components to Afrikaans under limited time and data constraints. The resulting system represents the first prototype built for Afrikaans to English speech translation.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors wish to thank the following persons for their contributions: Paisarn Charoenpornsawat, Alan Black, Matthias Eck, Bing Zhao, Szu-Chen Jou, Susanne Burger and Thomas Schaaf.","year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"verhagen-2010-brandeis","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/740_Paper.pdf","title":"The Brandeis Annotation Tool","abstract":"The Brandeis Annotation Tool is a web-based text annotation tool that is centered around the notions of layered annotation and task decomposition. It allows annotations to refer to other annotations and to take a complicated task and split it into easier subtasks. The web-interface connects annotators to a central repository for all data and simplifies many of the housekeeping tasks while keeping requirements at a minimum (that is, users only need an internet connection and a well-behaved browser). BAT has been used mainly for temporal annotation, but can be considered a more general tool for several kinds of textual annotation.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"silverman-etal-1992-towards","url":"https:\/\/aclanthology.org\/H92-1088","title":"Towards Using Prosody in Speech Recognition\/Understanding Systems: Differences Between Read and Spontaneous Speech","abstract":"A persistent problem for keyword-driven speech recognition systems is that users often embed the to-be-recognized words or phrases in longer utterances.
The recognizer needs to locate the relevant sections of the speech signal and ignore extraneous words. Prosody might provide an extra source of information to help locate target words embedded in other speech. In this paper we examine some prosodic characteristics of 160 such utterances and compare matched read and spontaneous versions. Half of the utterances are from a corpus of spontaneous answers to requests for the name of a city, recorded from calls to Directory Assistance Operators. The other half are the same word strings read by volunteers attempting to model the real dialogue. Results show a consistent pattern across both sets of data: embedded city names almost always bear nuclear pitch accents and are in their own intonational phrases. However the distributions of tonal make-up of these prosodic features differ markedly in read versus spontaneous speech, implying that if algorithms that exploit these prosodic regularities are trained on read speech, then the probabilities are likely to be incorrect models of real user speech.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Sheri Walzman learned prosodic transcription and labored long doing careful labelling. Lisa Russell developed the automated recording facility, helped find suitable volunteers, and imposed organization and order on the data collection effort. Without the help of these two people this work would never have seen the light of day. Any abuses of their work nevertheless remain our own responsibility.","year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"wu-wang-2019-ji","url":"https:\/\/aclanthology.org\/2019.rocling-1.7","title":"\u57fa\u65bcBERT\u6a21\u578b\u4e4b\u591a\u570b\u8a9e\u8a00\u6a5f\u5668\u95b1\u8b80\u7406\u89e3\u7814\u7a76(Multilingual Machine Reading Comprehension based on BERT Model)","abstract":"In recent years, the Internet provides more and more information for people in daily life. Due to the limitation of information retrieval techniques, information retrieved might not be related and helpful for users. Two ","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"sagawa-etal-1994-parser","url":"https:\/\/aclanthology.org\/C94-1098","title":"A Parser Coping With Self-Repaired Japanese Utterances and Large Corpus-Based Evaluation","abstract":"Self-repair (Levelt 1988) is a repair of an utterance by the speaker him\/herself. A human speaker makes self-repairs very frequently in spontaneous speech. (Blackmer and Mitton 1991) reported that self-repairs are made once every 4.8 seconds in dialogues taken from radio talk shows.\nSelf-repair is one kind of \"permissible ill-formedness\", that is, a human listener can feel ill-formedness in it but he\/she is able to recognize its intended meaning.
Thus your partner does not need to interrupt the dialogue.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"xie-etal-2021-importance","url":"https:\/\/aclanthology.org\/2021.acl-long.445","title":"Importance-based Neuron Allocation for Multilingual Neural Machine Translation","abstract":"Multilingual neural machine translation with a single model has drawn much attention due to its capability to deal with multiple languages. However, the current multilingual translation paradigm often makes the model tend to preserve the general knowledge, but ignore the language-specific knowledge. Some previous works try to solve this problem by adding various kinds of language-specific modules to the model, but they suffer from the parameter explosion problem and require specialized manual design. To solve these problems, we propose to divide the model neurons into general and language-specific parts based on their importance across languages. The general part is responsible for preserving the general knowledge and participating in the translation of all the languages, while the language-specific part is responsible for preserving the language-specific knowledge and participating in the translation of some specific languages. Experimental results on several language pairs, covering IWSLT and Europarl corpus datasets, demonstrate the effectiveness and universality of the proposed method.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank all the anonymous reviewers for their insightful and valuable comments. This work was supported by National Key R&D Program of China (NO. 2017YFE0192900).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"furuse-1994-transfer","url":"https:\/\/aclanthology.org\/1994.amta-1.32","title":"Transfer-Driven Machine Translation","abstract":"Transfer-Driven Machine Translation (TDMT) [1, 2] is a translation technique developed as a research project at ATR Interpreting Telecommunications Research Laboratories. In TDMT, translation is performed mainly by a transfer module which applies transfer knowledge to an input sentence. Other modules, such as lexical processing, analysis, contextual processing and generation, cooperate with the transfer module to improve translation performance. This transfer-centered mechanism can achieve efficient and robust translation by making the most of the example-based framework, which calculates a semantic distance between linguistic expressions. A TDMT prototype system is written in LISP and is demonstrated on a SUN workstation. In our TDMT demonstration, the following items are presented.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"popowich-1985-saumer","url":"https:\/\/aclanthology.org\/E85-1007","title":"SAUMER: Sentence Analysis Using Metarules","abstract":"The SAUMER system uses specifications of natural language grammars, which consist of rules and metarules,
to provide a semantic interpretation of an input sentence. The SAUMER Specification Language (SSL) is a programming language which combines some of the features of generalised phrase structure grammars (Gazdar, 1981), like the correspondence between syntactic and semantic rules, with definite clause grammars (DCGs) (Pereira and Warren, 1980) to create an executable grammar specification. SSL rules are similar to DCG rules except that they contain a semantic component and may also be left recursive. Metarules are used to generate new rules from existing rules before any parsing is attempted. An implementation is tested which can provide semantic interpretations for sentences containing topicalisation, relative clauses, passivisation, and questions. It should also be noted that, due to the separability of the semantic component from the grammar rule, a different semantic notation could easily be introduced as long as the appropriate semantic processing routines were replaced. The use of SAUMER with an \"AI-adapted\" version of Montague's Intensional Logic is being examined by Fawcett (1984).","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I would like to thank Nick Cercone for reading an earlier version of this paper and providing some useful suggestions. The comments of the referees were also helpful. Facilities for this research were provided by the Laboratory for Computer and Communications Research. ","year":1985,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"klie-etal-2021-human","url":"https:\/\/aclanthology.org\/2021.dash-1.6","title":"Human-In-The-Loop Entity Linking for Low Resource Domains","abstract":"Entity linking (EL) is concerned with disambiguating entity mentions in a text against a knowledge base (KB). To quickly annotate texts with EL in low-resource domains and noisy text, we present a novel Human-In-The-Loop EL approach. We show that it greatly outperforms a strong baseline in simulation. In a user study, annotation time is reduced by 35 % compared to annotating without interactive support; users report that they strongly prefer our new approach. An open-source and ready-to-use implementation based on the text annotation platform INCEpTION is made available.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"ozdowska-2008-cross","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/207_paper.pdf","title":"Cross-Corpus Evaluation of Word Alignment","abstract":"We present the procedures we implemented to carry out system oriented evaluation of a syntax-based word aligner, ALIBI. We take the approach of regarding cross-corpus evaluation as part of system oriented evaluation assuming that corpus type may impact alignment performance. We test our system on three English-French parallel corpora. The evaluation procedures include the creation of a reference set with multiple annotations of the same data for each corpus, the assessment of inter-annotator agreement rates and an analysis of the reference sets.
We show that alignment performance varies across corpora according to the multiple references produced and further motivate our choice of preserving all reference annotations without solving disagreements between annotators.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Thanks to Science Foundation Ireland (http:\/\/www. sfi.ie) Principal Investigator Award 05\/IN\/1732 for part-funding this research.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"afantenos-etal-2010-learning","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/582_Paper.pdf","title":"Learning Recursive Segments for Discourse Parsing","abstract":"Automatically detecting discourse segments is an important preliminary step towards full discourse parsing. Previous research on discourse segmentation have relied on the assumption that elementary discourse units (EDUs) in a document always form a linear sequence (i.e., they can never be nested). Unfortunately, this assumption turns out to be too strong, for some theories of discourse like SDRT allows for nested discourse units. In this paper, we present a simple approach to discourse segmentation that is able to produce nested EDUs. Our approach builds on standard multi-class classification techniques combined with a simple repairing heuristic that enforces global coherence. Our system was developed and evaluated on the first round of annotations provided by the French Annodis project (an ongoing effort to create a discourse bank for French). Cross-validated on only 47 documents (1, 445 EDUs), our system achieves encouraging performance results with an F-score of 73% for finding EDUs.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"yu-jiang-2015-hassle","url":"https:\/\/aclanthology.org\/P15-2028","title":"A Hassle-Free Unsupervised Domain Adaptation Method Using Instance Similarity Features","abstract":"We present a simple yet effective unsupervised domain adaptation method that can be generally applied for different NLP tasks. Our method uses unlabeled target domain instances to induce a set of instance similarity features. These features are then combined with the original features to represent labeled source domain instances. Using three NLP tasks, we show that our method consistently outperforms a few baselines, including SCL, an existing general unsupervised domain adaptation method widely used in NLP. 
More importantly, our method is very easy to implement and incurs much less computational cost than SCL.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the reviewers for their valuable comments.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"wu-etal-2018-word","url":"https:\/\/aclanthology.org\/D18-1482","title":"Word Mover's Embedding: From Word2Vec to Document Embedding","abstract":"While the celebrated Word2Vec technique yields semantically rich representations for individual words, there has been relatively less success in extending to generate unsupervised sentences or documents embeddings. Recent work has demonstrated that a distance measure between documents called Word Mover's Distance (WMD) that aligns semantically similar words, yields unprecedented KNN classification accuracy. However, WMD is expensive to compute, and it is hard to extend its use beyond a KNN classifier. In this paper, we propose the Word Mover's Embedding (WME), a novel approach to building an unsupervised document (sentence) embedding from pre-trained word embeddings. In our experiments on 9 benchmark text classification datasets and 22 textual similarity tasks, the proposed technique consistently matches or outperforms state-of-the-art techniques, with significantly higher accuracy on problems of short length.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"mensa-etal-2017-ttcs","url":"https:\/\/aclanthology.org\/W17-1912","title":"TTCS$^\\mathcalE$: a Vectorial Resource for Computing Conceptual Similarity","abstract":"In this paper we introduce the TTCS E , a linguistic resource that relies on BabelNet, NASARI and ConceptNet, that has now been used to compute the conceptual similarity between concept pairs. The conceptual representation herein provides uniform access to concepts based on Babel-Net synset IDs, and consists of a vectorbased semantic representation which is compliant with the Conceptual Spaces, a geometric framework for common-sense knowledge representation and reasoning. The TTCS E has been evaluated in a preliminary experimentation on a conceptual similarity task.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"cui-etal-2017-attention","url":"https:\/\/aclanthology.org\/P17-1055","title":"Attention-over-Attention Neural Networks for Reading Comprehension","abstract":"Cloze-style reading comprehension is a representative problem in mining relationship between document and query. In this paper, we present a simple but novel model called attention-over-attention reader for better solving cloze-style reading comprehension task. The proposed model aims to place another attention mechanism over the document-level attention and induces \"attended attention\" for final answer predictions. One advantage of our model is that it is simpler than related works while giving excellent performance. 
In addition to the primary model, we also propose an N-best re-ranking strategy to double check the validity of the candidates and further improve the performance. Experimental results show that the proposed methods significantly outperform various state-ofthe-art systems by a large margin in public datasets, such as CNN and Children's Book Test.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank all three anonymous reviewers for their thorough reviewing and providing thoughtful comments to improve our paper. This work was supported by the National 863 Leading Technology Research Project via grant 2015AA015409.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"baur-etal-2016-shared","url":"https:\/\/aclanthology.org\/L16-1036","title":"A Shared Task for Spoken CALL?","abstract":"We argue that the field of spoken CALL needs a shared task in order to facilitate comparisons between different groups and methodologies, and describe a concrete example of such a task, based on data collected from a speech-enabled online tool which has been used to help young Swiss German teens practise skills in English conversation. Items are prompt-response pairs, where the prompt is a piece of German text and the response is a recorded English audio file. The task is to label pairs as \"accept\" or \"reject\", accepting responses which are grammatically and linguistically correct to match a set of hidden gold standard answers as closely as possible. Initial resources are provided so that a scratch system can be constructed with a minimal investment of effort, and in particular without necessarily using a speech recogniser. Training data for the task will be released in June 2016, and test data in January 2017.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Work at Geneva University was supported by the Swiss National Science Foundation (SNF) under grant 105219 153278\/1. We would like to thank Nuance for making their software available to us for research purposes, and Cathy Chua for helpful suggestions concerning the metric.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"digalakis-etal-1990-fast","url":"https:\/\/aclanthology.org\/H90-1037","title":"Fast Search Algorithms for Connected Phone Recognition Using the Stochastic Segment Model","abstract":"In this paper we present methods for reducing the computation time of joint segmentation and recognition of phones using the Stochastic Segment Model (SSM). Our approach to the problem is twofold: first, we present a fast segment classification method that reduces computation by a factor of 2 to 4, depending on the confidence of choosing the most probable model. Second, we propose a Split and Merge segmentation algorithm as an alternative to the typical Dynamic Programming solution of the segmentation and recognition problem, with computation savings increasing proportionally with model complexity. 
Even though our current recognizer uses context-independent phone models, the results that we report on the TIMIT database for speaker independent joint segmentation and recognition are comparable to that of systems that use context information.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was jointly supported by NSF and DARPA under NSF grant # IRI-8902124.","year":1990,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"yang-etal-2019-read","url":"https:\/\/aclanthology.org\/D19-1512","title":"Read, Attend and Comment: A Deep Architecture for Automatic News Comment Generation","abstract":"Automatic news comment generation is a new testbed for techniques of natural language generation. In this paper, we propose a \"readattend-comment\" procedure for news comment generation and formalize the procedure with a reading network and a generation network. The reading network comprehends a news article and distills some important points from it, then the generation network creates a comment by attending to the extracted discrete points and the news title. We optimize the model in an end-to-end manner by maximizing a variational lower bound of the true objective using the back-propagation algorithm. Experimental results on two datasets indicate that our model can significantly outperform existing methods in terms of both automatic evaluation and human judgment.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported in part by the National Natural Science Foundation of China (Grand Nos. U1636211, 61672081, 61370126), and the National Key R&D Program of China (No. 2016QY04W0802).","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"ke-etal-2019-araml","url":"https:\/\/aclanthology.org\/D19-1436","title":"ARAML: A Stable Adversarial Training Framework for Text Generation","abstract":"Most of the existing generative adversarial networks (GAN) for text generation suffer from the instability of reinforcement learning training algorithms such as policy gradient, leading to unstable performance. To tackle this problem, we propose a novel framework called Adversarial Reward Augmented Maximum Likelihood (ARAML). During adversarial training, the discriminator assigns rewards to samples which are acquired from a stationary distribution near the data rather than the generator's distribution. The generator is optimized with maximum likelihood estimation augmented by the discriminator's rewards instead of policy gradient. Experiments show that our model can outperform state-of-the-art text GANs with a more stable training process.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the National Science Foundation of China (Grant No. 61936010\/61876096) and the National Key R&D Program of China (Grant No. 2018YFC0830200). 
We would like to thank THUNUS NExT Joint-Lab for the support.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"nariyama-2006-pragmatic","url":"https:\/\/aclanthology.org\/W06-3501","title":"Pragmatic information extraction from subject ellipsis in informal English","abstract":"Subject ellipsis is one of the characteristics of informal English. The investigation of subject ellipsis in corpora thus reveals an abundance of pragmatic and extralinguistic information associated with subject ellipsis that enhances natural language understanding. In essence, the presence of subject ellipsis conveys an 'informal' conversation involving 1) an informal 'Topic' as well as familiar\/close 'Participants', 2) specific 'Connotations' that are different from the corresponding full sentences: interruptive (ending discourse coherence), polite, intimate, friendly, and less determinate implicatures. This paper also construes linguistic environments that trigger the use of subject ellipsis and resolve subject ellipsis.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"muller-etal-2022-shot","url":"https:\/\/aclanthology.org\/2022.acl-long.584","title":"Few-Shot Learning with Siamese Networks and Label Tuning","abstract":"We study the problem of building text classifiers with little or no training data, commonly known as zero and few-shot text classification. In recent years, an approach based on neural textual entailment models has been found to give strong results on a diverse range of tasks. In this work, we show that with proper pre-training, Siamese Networks that embed texts and labels offer a competitive alternative. These models allow for a large reduction in inference cost: constant in the number of labels rather than linear. Furthermore, we introduce label tuning, a simple and computationally efficient approach that allows to adapt the models in a few-shot setup by only changing the label embeddings. While giving lower performance than model fine-tuning, this approach has the architectural advantage that a single encoder can be shared by many different tasks.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Francisco Rangel and the entire Symanto Research Team for early discussions, feedback and suggestions. We would also like to thank the anonymous Reviewers. 
The authors gratefully acknowledge the support of the Pro 2 Haters -Proactive Profiling of Hate Speech Spreaders (CDTi IDI-20210776), XAI-DisInfodemics: eXplainable AI for disinformation and conspiracy detection during infodemics (MICIN PLEC2021-007681), and DETEMP -Early Detection of Depression Detection in Social Media (IVACE IMINOD\/2021\/72) R&D grants.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"pirinen-2011-modularisation","url":"https:\/\/aclanthology.org\/W11-4644","title":"Modularisation of Finnish Finite-State Language Description -- Towards Wide Collaboration in Open Source Development of a Morphological Analyser","abstract":"In this paper we present an open source implementation for Finnish morphological parser. We shortly evaluate it against contemporary criticism towards monolithic and unmaintainable finite-state language description. We use it to demonstrate way of writing finite-state language description that is used for varying set of projects, that typically need morphological analyser, such as POS tagging, morphological analysis, hyphenation, spell checking and correction, rule-based machine translation and syntactic analysis. The language description is done using available open source methods for building finite-state descriptions coupled with autotools-style build system, which is de facto standard in open source projects.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Donald Killian for pointing us towards the ongoing discussion about shortcomings of finite-state morphologies and the HFST research group, and our colleagues for fruitful discussions.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"durgar-el-kahlout-oflazer-2006-initial","url":"https:\/\/aclanthology.org\/W06-3102","title":"Initial Explorations in English to Turkish Statistical Machine Translation","abstract":"This paper presents some very preliminary results for and problems in developing a statistical machine translation system from English to Turkish. Starting with a baseline word model trained from about 20K aligned sentences, we explore various ways of exploiting morphological structure to improve upon the baseline system. As Turkish is a language with complex agglutinative word structures, we experiment with morphologically segmented and disambiguated versions of the parallel texts in order to also uncover relations between morphemes and function words in one language with morphemes and function words in the other, in addition to relations between open class content words. Morphological segmentation on the Turkish side also conflates the statistics from allomorphs so that sparseness can be alleviated to a certain extent. We find that this approach coupled with a simple grouping of most frequent morphemes and function words on both sides improves the BLEU score from the baseline of 0.0752 to 0.0913 with the small training data.
We close with a discussion on why one should not expect distortion parameters to model word-local morpheme ordering and that a new approach to handling complex morphotactics is needed.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by T\u00dcB\u0130TAK (Turkish Scientific and Technological Research Foundation) project 105E020 \"Building a Statistical Machine Translation for Turkish and English\".","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"nerima-etal-2003-creating","url":"https:\/\/aclanthology.org\/E03-1022","title":"Creating a multilingual collocations dictionary from large text corpora","abstract":"This paper describes a system of terminological extraction capable of handling multi-word expressions, using a powerful syntactic parser. The system includes a concordancing tool enabling the user to display the context of the collocation, i.e. the sentence or the whole document where the collocation occurs. Since the corpora are multilingual, the system also offers an alignment mechanism for the corresponding translated documents.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by Geneva International Academic Network (GIAN), research project \"Linguistic Analysis and Collocation Extraction\", approved in 2001. Thanks to Olivier Pasteur for the invaluable help in this research.","year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"flati-etal-2014-two","url":"https:\/\/aclanthology.org\/P14-1089","title":"Two Is Bigger (and Better) Than One: the Wikipedia Bitaxonomy Project","abstract":"We present WiBi, an approach to the automatic creation of a bitaxonomy for Wikipedia, that is, an integrated taxonomy of Wikipedia pages and categories. We leverage the information available in either one of the taxonomies to reinforce the creation of the other taxonomy. Our experiments show higher quality and coverage than state-of-the-art resources like DBpedia, YAGO, MENTA, WikiNet and WikiTaxonomy. WiBi is available at http:\/\/wibitaxonomy.org.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors gratefully acknowledge the support of the ERC Starting Grant MultiJEDI No. 259234. We thank Luca Telesca for his implementation of WikiTaxonomy and Jim McManus for his comments on the manuscript.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"liu-etal-2019-knowledge","url":"https:\/\/aclanthology.org\/D19-1187","title":"Knowledge Aware Conversation Generation with Explainable Reasoning over Augmented Graphs","abstract":"Two types of knowledge, triples from knowledge graphs and texts from documents, have been studied for knowledge aware open-domain conversation generation, in which graph paths can narrow down vertex candidates for knowledge selection decision, and texts can provide rich information for response generation. Fusion of a knowledge graph and texts might yield mutually reinforcing advantages, but there is less study on that.
To address this challenge, we propose a knowledge aware chatting machine with three components, an augmented knowledge graph with both triples and texts, knowledge selector, and knowledge aware response generator. For knowledge selection on the graph, we formulate it as a problem of multi-hop graph reasoning to effectively capture conversation flow, which is more explainable and flexible in comparison with previous work. To fully leverage long text information that differentiates our graph from others, we improve a state of the art reasoning algorithm with machine reading comprehension technology. We demonstrate the effectiveness of our system on two datasets in comparison with state-of-the-art models.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the reviewers for their insightful comments. This work was supported by the Natural Science Foundation of China (No.61533018).","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"sun-etal-2020-helpfulness","url":"https:\/\/aclanthology.org\/2020.coling-main.121","title":"On the Helpfulness of Document Context to Sentence Simplification","abstract":"Most of the research on text simplification is limited to sentence level nowadays. In this paper, we are the first to investigate the helpfulness of document context on sentence simplification and apply it to the sequence-to-sequence model. We firstly construct a sentence simplification dataset in which the contexts for the original sentence are provided by Wikipedia corpus. The new dataset contains approximately 116K sentence pairs with context. We then propose a new model that makes full use of the context information. Our model uses neural networks to learn the different effects of the preceding sentences and the following sentences on the current sentence and applies them to the improved transformer model. Evaluated on the newly constructed dataset, our model achieves 36.52 on SARI value, which outperforms the best performing model in the baselines by 2.46 (7.22%), indicating that context indeed helps improve sentence simplification. In the ablation experiment, we show that using either the preceding sentences or the following sentences as context can significantly improve simplification.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by National Natural Science Foundation of China (61772036), Beijing Academy of Artificial Intelligence (BAAI) and Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We appreciate the anonymous reviewers for their helpful comments. Xiaojun Wan is the corresponding author.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"detrez-ranta-2012-smart","url":"https:\/\/aclanthology.org\/E12-1066","title":"Smart Paradigms and the Predictability and Complexity of Inflectional Morphology","abstract":"Morphological lexica are often implemented on top of morphological paradigms, corresponding to different ways of building the full inflection table of a word. Computationally precise lexica may use hundreds of paradigms, and it can be hard for a lexicographer to choose among them.
To automate this task, this paper introduces the notion of a smart paradigm. It is a metaparadigm, which inspects the base form and tries to infer which low-level paradigm applies. If the result is uncertain, more forms are given for discrimination. The number of forms needed in average is a measure of predictability of an inflection system. The overall complexity of the system also has to take into account the code size of the paradigms definition itself. This paper evaluates the smart paradigms implemented in the open-source GF Resource Grammar Library. Predictability and complexity are estimated for four different languages: English, French, Swedish, and Finnish. The main result is that predictability does not decrease when the complexity of morphology grows, which means that smart paradigms provide an efficient tool for the manual construction and\/or automatically bootstrapping of lexica.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to the anonymous referees for valuable remarks and questions. The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7\/2007-2013) under grant agreement no FP7-ICT-247914 (the MOLTO project).","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"rupp-etal-2008-language","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/556_paper.pdf","title":"Language Resources and Chemical Informatics","abstract":"Chemistry research papers are a primary source of information about chemistry, as in any scientific field. The presentation of the data is, predominantly, unstructured information, and so not immediately susceptible to processes developed within chemical informatics for carrying out chemistry research by information processing techniques. At one level, extracting the relevant information from research papers is a text mining task, requiring both extensive language resources and specialised knowledge of the subject domain. However, the papers also encode information about the way the research is conducted and the structure of the field itself. Applying language technology to research papers in chemistry can facilitate eScience on several different levels. The SciBorg project sets out to provide an extensive, analysed corpus of published chemistry research. This relies on the cooperation of several journal publishers to provide papers in an appropriate form. The work is carried out as a collaboration involving the","label_nlp4sg":1,"task":null,"method":null,"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":"We are very grateful to the Royal Society of Chemistry, Nature Publishing Group and the International Union of Crystallography for supplying papers. 
This work was funded by EPSRC (EP\/C010035\/1) with additional support from Boeing.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"mikheev-2002-periods","url":"https:\/\/aclanthology.org\/J02-3002","title":"Periods, Capitalized Words, etc.","abstract":"In this article we present an approach for tackling three important aspects of text normalization: sentence boundary disambiguation, disambiguation of capitalized words in positions where capitalization is expected, and identification of abbreviations. As opposed to the two dominant techniques of computing statistics or writing specialized grammars, our document-centered approach works by considering suggestive local contexts and repetitions of individual words within a document. This approach proved to be robust to domain shifts and new lexica and produced performance on the level with the highest reported results. When incorporated into a part-of-speech tagger, it helped reduce the error rate significantly on capitalized words and sentence boundaries. We also investigated the portability to other languages and obtained encouraging results.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work reported in this article was supported in part by grant GR\/L21952 (Text Tokenization Tool) from the Engineering and Physical Sciences Research Council, U.K., and also it benefited from the ongoing efforts in building domain-independent text-processing software at Infogistics Ltd. I am also grateful to one anonymous reviewer who put a lot of effort into making this article as it is now.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"zhang-etal-2021-de","url":"https:\/\/aclanthology.org\/2021.acl-long.371","title":"De-biasing Distantly Supervised Named Entity Recognition via Causal Intervention","abstract":"Distant supervision tackles the data bottleneck in NER by automatically generating training instances via dictionary matching. Unfortunately, the learning of DS-NER is severely dictionary-biased, which suffers from spurious correlations and therefore undermines the effectiveness and the robustness of the learned models. In this paper, we fundamentally explain the dictionary bias via a Structural Causal Model (SCM), categorize the bias into intra-dictionary and inter-dictionary biases, and identify their causes. Based on the SCM, we learn de-biased DS-NER via causal interventions. For intra-dictionary bias, we conduct backdoor adjustment to remove the spurious correlations introduced by the dictionary confounder. For inter-dictionary bias, we propose a causal invariance regularizer which will make DS-NER models more robust to the perturbation of dictionaries. Experiments on four datasets and three DS-NER models show that our method can significantly improve the performance of DS-NER.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by the National Natural Science Foundation of China under Grants no.U1936207, Beijing Academy of Artificial Intelligence (BAAI2019QN0502), scientific research projects of the State Language Commission (YW135-78), and in part by the Youth Innovation Promotion Association CAS(2018141). 
Moreover, we thank all reviewers for their valuable comments and suggestions.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"das-etal-2021-emotion","url":"https:\/\/aclanthology.org\/2021.naacl-srw.19","title":"Emotion Classification in a Resource Constrained Language Using Transformer-based Approach","abstract":"Although research on emotion classification has significantly progressed in high-resource languages, it is still infancy for resource-constrained languages like Bengali. However, unavailability of necessary language processing tools and deficiency of benchmark corpora makes the emotion classification task in Bengali more challenging and complicated. This work proposes a transformer-based technique to classify the Bengali text into one of the six basic emotions: anger, fear, disgust, sadness, joy, and surprise. A Bengali emotion corpus consists of 6243 texts is developed for the classification task. Experimentation carried out using various machine learning (LR, RF, MNB, SVM), deep neural networks (CNN, BiLSTM, CNN+BiLSTM) and transformer (Bangla-BERT, m-BERT, XLM-R) based approaches. Experimental outcomes indicate that XLM-R outdoes all other techniques by achieving the highest weighted f1-score of 69.73% on the test data. The dataset is publicly available at https:\/\/github.com\/omar-sharif03\/NAACL-SRW-2021.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We sincerely acknowledge the anonymous reviewers and pre-submission mentor for their insightful suggestions, which help improve the work. This work was supported by the Directorate of Research & Extension, CUET.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"wu-etal-2021-code","url":"https:\/\/aclanthology.org\/2021.findings-acl.93","title":"Code Summarization with Structure-induced Transformer","abstract":"Code summarization (CS) is becoming a promising area in recent language understanding, which aims to generate sensible human language automatically for programming language in the format of source code, serving in the most convenience of programmer developing. It is well known that programming languages are highly structured. Thus previous works attempt to apply structure-based traversal (SBT) or non-sequential models like Tree-LSTM and graph neural network (GNN) to learn structural program semantics. However, it is surprising that incorporating SBT into advanced encoder like Transformer instead of LSTM has been shown no performance gain, which lets GNN become the only rest means modeling such necessary structural clue in source code. To release such inconvenience, we propose structure-induced Transformer, which encodes sequential code inputs with multi-view structural clues in terms of a newly-proposed structure-induced self-attention mechanism.
Extensive experiments show that our proposed structure-induced Transformer helps achieve new state-of-the-art results on benchmarks.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"zhang-fung-2007-speech","url":"https:\/\/aclanthology.org\/N07-2054","title":"Speech Summarization Without Lexical Features for Mandarin Broadcast News","abstract":"We present the first known empirical study on speech summarization without lexical features for Mandarin broadcast news. We evaluate acoustic, lexical and structural features as predictors of summary sentences. We find that the summarizer yields good performance at the average F-measure of 0.5646 even by using the combination of acoustic and structural features alone, which are independent of lexical features. In addition, we show that structural features are superior to lexical features and our summarizer performs surprisingly well at the average F-measure of 0.3914 by using only acoustic features. These findings enable us to summarize speech without placing a stringent demand on speech recognition accuracy.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"chen-di-eugenio-2013-multimodality","url":"https:\/\/aclanthology.org\/W13-4031","title":"Multimodality and Dialogue Act Classification in the RoboHelper Project","abstract":"We describe the annotation of a multimodal corpus that includes pointing gestures and haptic actions (force exchanges). Haptic actions are rarely analyzed as full-fledged components of dialogue, but our data shows haptic actions are used to advance the state of the interaction. We report our experiments on recognizing Dialogue Acts in both offline and online modes. Our results show that multimodal features and the dialogue game aid in DA classification.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by award IIS 0905593 from the National Science Foundation. Thanks to the other members of the RoboHelper project, for their many contributions, especially to the data collection effort.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"nakano-etal-2011-two","url":"https:\/\/aclanthology.org\/W11-2004","title":"A Two-Stage Domain Selection Framework for Extensible Multi-Domain Spoken Dialogue Systems","abstract":"This paper describes a general and effective domain selection framework for multi-domain spoken dialogue systems that employ distributed domain experts. The framework consists of two processes: deciding if the current domain continues and estimating the probabilities for selecting other domains. If the current domain does not continue, the domain with the highest activation probability is selected. Since those processes for each domain expert can be designed independently from other experts and can use a large variety of information, the framework achieves both extensibility and robustness against speech recognition errors.
The results of an experiment using a corpus of dialogues between humans and a multi-domain dialogue system demonstrate the viability of the proposed framework.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Hiroshi Tsujino, Yuji Hasegawa, and Hiromi Narimatsu for their support for this research.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"jung-shim-2020-dual","url":"https:\/\/aclanthology.org\/2020.coling-main.564","title":"Dual Supervision Framework for Relation Extraction with Distant Supervision and Human Annotation","abstract":"Relation extraction (RE) has been extensively studied due to its importance in real-world applications such as knowledge base construction and question answering. Most of the existing works train the models on either distantly supervised data or human-annotated data. To take advantage of the high accuracy of human annotation and the cheap cost of distant supervision, we propose the dual supervision framework which effectively utilizes both types of data. However, simply combining the two types of data to train a RE model may decrease the prediction accuracy since distant supervision has labeling bias. We employ two separate prediction networks HA-Net and DS-Net to predict the labels by human annotation and distant supervision, respectively, to prevent the degradation of accuracy by the incorrect labeling of distant supervision. Furthermore, we propose an additional loss term called disagreement penalty to enable HA-Net to learn from distantly supervised labels. In addition, we exploit additional networks to adaptively assess the labeling bias by considering contextual information. Our performance study on sentence-level and document-level REs confirms the effectiveness of the dual supervision framework.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by Next-Generation Information Computing Development Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Science, ICT (No. NRF-2017M3C4A7063570) and was also supported by Institute of Information & communications Technology Planning & Evaluation(IITP) grant funded by the Korea government(MSIT) (No. 2020-0-00857, Development of cloud robot intelligence augmentation, sharing and framework technology to integrate and enhance the intelligence of multiple robots). This research was results of a study on the \"HPC Support\" Project, supported by the Ministry of Science and ICT and NIPA.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"zhen-etal-2021-chinese","url":"https:\/\/aclanthology.org\/2021.emnlp-main.796","title":"Chinese Opinion Role Labeling with Corpus Translation: A Pivot Study","abstract":"Opinion Role Labeling (ORL), aiming to identify the key roles of opinion, has received increasing interest. Unlike most of the previous works focusing on the English language, in this paper, we present the first work of Chinese ORL. We construct a Chinese dataset by manually translating and projecting annotations from a standard English MPQA dataset. 
Then, we investigate the effectiveness of cross-lingual transfer methods, including model transfer and corpus translation. We exploit multilingual BERT with Contextual Parameter Generator and Adapter methods to examine the potentials of unsupervised cross-lingual learning and our experiments and analyses for both bilingual and multilingual transfers establish a foundation for the future research of this task.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank all reviewers for their helpful comments. This work was supported by National Natural Science Foundation of China under grants 62076173 and 61672211.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"su-chang-1988-semantic","url":"https:\/\/aclanthology.org\/C88-2133","title":"Semantic and Syntactic Aspects of Score Function","abstract":"In a Machine Translation System (MTS), the number of possible analyses for a given sentence is largely due to the ambiguous characteristics of the source language.\nIn this paper, a mechanism, called \"Score Function\", is proposed for measuring the \"quality\" of the ambiguous syntax trees such that the one that best fits interpretation by human is selected.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to express our deepest appreciation to Wen-t%~eh Li and Hsue-Hueh Hsu for their work on the simulations, to the whole linguistic group at BTC R&D center for their work on the database, and Mei-Hui Su for her editing. Special thanks are given to Behavior Tech. Computer Co. for their full financial support of this project.","year":1988,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"cresti-etal-2004-c","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/357.pdf","title":"The C-ORAL-ROM CORPUS. A Multilingual Resource of Spontaneous Speech for Romance Languages","abstract":"The C-ORAL-ROM project has delivered a multilingual corpus of spontaneous speech for the main romance languages (Italian, French, Portuguese and Spanish). The collection aims to represent the variety of speech acts performed in everyday language and to enable the description of prosodic and syntactic structures in the four romance languages. Sampling criteria are defined in a corpus design scheme. C-ORAL-ROM adopts two different sampling strategies, one for the formal and one for the informal part: While a set of typical domains of application is selected to document the formal use of language, the informal part documents speech variation using parameters referring to the event's structure (dialogue vs. monologue) and the sociological domain of use (family-private vs public). The four romance corpora are tagged with respect to terminal and non terminal prosodic breaks. Terminal breaks are assumed to be the more relevant cues for the identification of relevant linguistic domains in spontaneous speech (utterances). Relations with other concurrent criteria are discussed.
The multimedia storage of the C-ORAL-ROM corpus is based on this principle; each textual string ending with a terminal break is aligned, through the Win Pitch speech software, to its acoustic counterpart, generating the data base of all utterances.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"dubremetz-nivre-2014-extraction","url":"https:\/\/aclanthology.org\/W14-0812","title":"Extraction of Nominal Multiword Expressions in French","abstract":"Multiword expressions (MWEs) can be extracted automatically from large corpora using association measures, and tools like mwetoolkit allow researchers to generate training data for MWE extraction given a tagged corpus and a lexicon. We use mwetoolkit on a sample of the French Europarl corpus together with the French lexicon Dela, and use Weka to train classifiers for MWE extraction on the generated training data. A manual evaluation shows that the classifiers achieve 60-75% precision and that about half of the MWEs found are novel and not listed in the lexicon. We also investigate the impact of the patterns used to generate the training data and find that this can affect the trade-off between precision and novelty.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"snider-diab-2006-unsupervised","url":"https:\/\/aclanthology.org\/N06-2039","title":"Unsupervised Induction of Modern Standard Arabic Verb Classes","abstract":"We exploit the resources in the Arabic Treebank (ATB) for the novel task of automatically creating lexical semantic verb classes for Modern Standard Arabic (MSA). Verbs are clustered into groups that share semantic elements of meaning as they exhibit similar syntactic behavior. The results of the clustering experiments are compared with a gold standard set of classes, which is approximated by using the noisy English translations provided in the ATB to create Levin-like classes for MSA. The quality of the clusters is found to be sensitive to the inclusion of information about lexical heads of the constituents in the syntactic frames, as well as parameters of the clustering algorithm. The best set of parameters yields an F\u03b2=1 score of 0.501, compared to a random baseline with an F\u03b2=1 score of 0.37.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"boitet-1989-motivations","url":"https:\/\/aclanthology.org\/1989.mtsummit-1.30","title":"Motivations, aims and architecture of the LIDIA project","abstract":"At the first Machine Translation Summit in Hakone, 2 years ago, I had been asked to present the research directions envisaged at GETA (Groupe d'Etude pour la Traduction Automatique). At that time, we were just emerging from a 3-year effort of technological transfer (CALLIOPE), and considering many directions for future work.
Very soon afterwards came the time to choose between all open possibilities.\nBesides 3 main research themes (\"static\" grammars, lexical data bases and software problem linked with multilinguality), we have recently embarked on the LIDIA project to crystallize the efforts of the team. It may be interesting here to explain briefly the motivations, the aims, and the overall architecture of this project.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1989,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"chia-etal-2022-come","url":"https:\/\/aclanthology.org\/2022.ecnlp-1.22","title":"``Does it come in black?'' CLIP-like models are zero-shot recommenders","abstract":"Product discovery is a crucial component for online shopping. However, item-to-item recommendations today do not allow users to explore changes along selected dimensions: given a query item, can a model suggest something similar but in a different color? We consider item recommendations of the comparative nature (e.g. \"something darker\") and show how CLIP-based models can support this use case in a zero-shot manner. Leveraging a large model built for fashion, we introduce GradREC and its industry potential, and offer a first rounded assessment of its strength and weaknesses. * * GradRECS started as a (failed) experiment by JT; PC actually made it work, and he is the lead researcher on the project. FB, CG and DC all contributed to the paper, providing support for modelling, industry context and domain knowledge. PC and JT are the corresponding authors.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"wang-hirschberg-1991-predicting","url":"https:\/\/aclanthology.org\/H91-1074","title":"Predicting Intonational Boundaries Automatically from Text: The ATIS Domain","abstract":"Relating the intonational characteristics of an utterance to other features inferable from its text is important both for speech recognition and for speech synthesis. This work investigates techniques for predicting the location of intonational phrase boundaries in natural speech, through analyzing utterances from the DARPA Air Travel Information Service database. For statistical modeling, we employ Classification and Regression Tree (CART) techniques. We achieve success rates of just over 90%.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1991,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"ilichev-etal-2021-multiple","url":"https:\/\/aclanthology.org\/2021.ranlp-1.68","title":"Multiple Teacher Distillation for Robust and Greener Models","abstract":"The language models nowadays are in the center of natural language processing progress. These models are mostly of significant size. There are successful attempts to reduce them, but at least some of these attempts rely on randomness. We propose a novel distillation procedure leveraging on multiple teachers usage which alleviates random seed dependency and makes the models more robust.
We show that this procedure applied to TinyBERT and DistilBERT models improves their worst-case results up to 2% while keeping almost the same best-case ones. The latter fact keeps true with a constraint on computational time, which is important to lessen the carbon footprint. In addition, we present the results of an application of the proposed procedure to a computer vision model ResNet, which shows that the statement keeps true in this totally different domain.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Responsible Consumption and Production","goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":1,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"lee-etal-2020-discrepancy","url":"https:\/\/aclanthology.org\/2020.spnlp-1.10","title":"On the Discrepancy between Density Estimation and Sequence Generation","abstract":"Many sequence-to-sequence generation tasks, including machine translation and text-to-speech, can be posed as estimating the density of the output y given the input x: p(y|x). Given this interpretation, it is natural to evaluate sequence-to-sequence models using conditional log-likelihood on a test set. However, the goal of sequence-to-sequence generation (or structured prediction) is to find the best output \u0177 given an input x, and each task has its own downstream metric R that scores a model output by comparing against a set of references y*: R(\u0177, y*|x). While we hope that a model that excels in density estimation also performs well on the downstream metric, the exact correlation has not been studied for sequence generation tasks. In this paper, by comparing several density estimators on five machine translation tasks, we find that the correlation between rankings of models based on log-likelihood and BLEU varies significantly depending on the range of the model families being compared. First, log-likelihood is highly correlated with BLEU when we consider models within the same family (e.g. autoregressive models, or latent variable models with the same parameterization of the prior). However, we observe no correlation between rankings of models across different families: (1) among non-autoregressive latent variable models, a flexible prior distribution is better at density estimation but gives worse generation quality than a simple prior, and (2) autoregressive models offer the best translation performance overall, while latent variable models with a normalizing flow prior give the highest held-out log-likelihood across all datasets.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank our colleagues at the Google Translate and Brain teams, particularly Durk Kingma, Yu Zhang, Yuan Cao and Julia Kreutzer for their feedback on the draft. JL thanks Chunting Zhou, Manoj Kumar and William Chan for helpful discussions. KC is supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: from pattern recognition to AI), Samsung Research (Improving Deep Learning using Latent Structure) and NSF Award 1922658 NRT-HDR: FUTURE Foundations, Translation, and Responsibility for Data Science.
KC thanks CIFAR, eBay, Naver and NVIDIA for their support.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"huang-xiang-2010-feature","url":"https:\/\/aclanthology.org\/C10-1056","title":"Feature-Rich Discriminative Phrase Rescoring for SMT","abstract":"This paper proposes a new approach to phrase rescoring for statistical machine translation (SMT). A set of novel features capturing the translingual equivalence between a source and a target phrase pair are introduced. These features are combined with linear regression model and neural network to predict the quality score of the phrase translation pair. These phrase scores are used to discriminatively rescore the baseline MT system's phrase library: boost good phrase translations while prune bad ones. This approach not only significantly improves machine translation quality, but also reduces the model size by a considerable margin.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"yano-etal-2010-shedding","url":"https:\/\/aclanthology.org\/W10-0723","title":"Shedding (a Thousand Points of) Light on Biased Language","abstract":"This paper considers the linguistic indicators of bias in political text. We used Amazon Mechanical Turk judgments about sentences from American political blogs, asking annotators to indicate whether a sentence showed bias, and if so, in which political direction and through which word tokens. We also asked annotators questions about their own political views. We conducted a preliminary analysis of the data, exploring how different groups perceive bias in different blogs, and showing some lexical indicators strongly associated with perceived bias.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"The authors acknowledge research support from HP Labs, help with data from Jacob Eisenstein, and helpful comments from the reviewers, Olivia Buzek, Michael Heilman, and Brendan O'Connor.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} -{"ID":"barnes-etal-2019-lexicon","url":"https:\/\/aclanthology.org\/W19-6119","title":"Lexicon information in neural sentiment analysis: a multi-task learning approach","abstract":"This paper explores the use of multi-task learning (MTL) for incorporating external knowledge in neural models. Specifically, we show how MTL can enable a BiLSTM sentiment classifier to incorporate information from sentiment lexicons. Our MTL setup is shown to improve model performance (compared to a single-task setup) on both English and Norwegian sentence-level sentiment datasets. 
The paper also introduces a new sentiment lexicon for Norwegian.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been carried out as part of the SANT project (Sentiment Analysis for Norwegian Text), funded by the Research Council of Norway (grant number 270908).","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"walker-etal-1997-paradise","url":"https:\/\/aclanthology.org\/P97-1035","title":"PARADISE: A Framework for Evaluating Spoken Dialogue Agents","abstract":"This paper presents PARADISE (PARAdigm for Dialogue System Evaluation), a general framework for evaluating spoken dialogue agents. The framework decouples task requirements from an agent's dialogue behaviors, supports comparisons among dialogue strategies, enables the calculation of performance over subdialogues and whole dialogues, specifies the relative contribution of various factors to performance, and makes it possible to compare agents performing different tasks by normalizing for task complexity.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank James Allen, Jennifer Chu-Carroll, Morena Danieli, Wieland Eckert, Giuseppe Di Fabbrizio, Don Hindle, Julia Hirschberg, Shri Narayanan, Jay Wilpon, Steve Whittaker and three anonymous reviews for helpful discussion and comments on earlier versions of this paper.","year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"hill-korhonen-2014-concreteness","url":"https:\/\/aclanthology.org\/P14-2118","title":"Concreteness and Subjectivity as Dimensions of Lexical Meaning","abstract":"We quantify the lexical subjectivity of adjectives using a corpus-based method, and show for the first time that it correlates with noun concreteness in large corpora. These cognitive dimensions together influence how word meanings combine, and we exploit this fact to achieve performance improvements on the semantic classification of adjective-noun pairs.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors are supported by St John's College, Cambridge and The Royal Society.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"dutta-etal-2020-uds","url":"https:\/\/aclanthology.org\/2020.wmt-1.129","title":"UdS-DFKI@WMT20: Unsupervised MT and Very Low Resource Supervised MT for German-Upper Sorbian","abstract":"This paper describes the UdS-DFKI submission to the shared task for unsupervised machine translation (MT) and very low-resource supervised MT between German (de) and Upper Sorbian (hsb) at the Fifth Conference of Machine Translation (WMT20). We submit systems for both the supervised and unsupervised tracks. 
Apart from various experimental approaches like bitext mining, model pretraining, and iterative back-translation, we employ a factored machine translation approach on a small BPE vocabulary.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors thank the German Research Center for Artificial Intelligence (DFKI GmbH) for pro-","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"zhang-etal-2012-automatically","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/244_Paper.pdf","title":"Automatically Extracting Procedural Knowledge from Instructional Texts using Natural Language Processing","abstract":"Procedural knowledge is the knowledge required to perform certain tasks, and forms an important part of expertise. A major source of procedural knowledge is natural language instructions. While these readable instructions have been useful learning resources for humans, they are not interpretable by machines. Automatically acquiring procedural knowledge in machine-interpretable formats from instructions has become an increasingly popular research topic due to its potential applications in process automation. However, it has been insufficiently addressed. This paper presents an approach and an implemented system to assist users to automatically acquire procedural knowledge in structured forms from instructions. We introduce a generic semantic representation of procedures for analysing instructions, using which natural language techniques are applied to automatically extract structured procedures from instructions. The method is evaluated in three domains to justify the generality of the proposed semantic representation as well as the effectiveness of the implemented automatic system.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"rosa-etal-2017-slavic","url":"https:\/\/aclanthology.org\/W17-1226","title":"Slavic Forest, Norwegian Wood","abstract":"D We once had a corp, or should we say, C it once had D us D They showed us its tags, isn't it great, C unified D tags Dmi They asked us to parse and they told us to use G everything Dmi So we looked around and we noticed there was near Em nothing AA7 We took other langs, bitext aligned: words one-to-one We played for two weeks, and then they said, here is the test The parser kept training till morning, just until deadline So we had to wait and hope what we get would be just fine And, when we awoke, the results were done, we saw we'd won So, we wrote this paper, isn't it good, Norwegian wood.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work was supported by the grant 15-10472S of the Czech Science Foundation, SVV grant of Charles University, and by the EU project H2020-ICT-2014-1-644402. 
This work has been using language resources and tools developed, stored and distributed by the LINDAT\/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2015071).","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"gregory-altun-2004-using","url":"https:\/\/aclanthology.org\/P04-1086","title":"Using Conditional Random Fields to Predict Pitch Accents in Conversational Speech","abstract":"The detection of prosodic characteristics is an important aspect of both speech synthesis and speech recognition. Correct placement of pitch accents aids in more natural-sounding speech, while automatic detection of accents can contribute to better word-level recognition and better textual understanding. In this paper we investigate probabilistic, contextual, and phonological factors that influence pitch accent placement in natural, conversational speech in a sequence labeling setting. We introduce Conditional Random Fields (CRFs) to the pitch accent prediction task in order to incorporate these factors efficiently in a sequence model. We demonstrate the usefulness and the incremental effect of these factors in a sequence model by performing experiments on hand-labeled data from the Switchboard Corpus. Our model outperforms the baseline and previous models of pitch accent prediction on the Switchboard Corpus.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially funded by CAREER award #IIS 9733067 IGERT. We would also like to thank Mark Johnson for the idea of this project, Dan Jurafsky, Alan Bell, Cynthia Girand, and Jason Brenier for their helpful comments and help with the database.","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"arnold-etal-2017-counterfactual","url":"https:\/\/aclanthology.org\/I17-2009","title":"Counterfactual Language Model Adaptation for Suggesting Phrases","abstract":"Mobile devices use language models to suggest words and phrases for use in text entry. Traditional language models are based on contextual word frequency in a static corpus of text. However, certain types of phrases, when offered to writers as suggestions, may be systematically chosen more often than their frequency would predict. In this paper, we propose the task of generating suggestions that writers accept, a related but distinct task to making accurate predictions. Although this task is fundamentally interactive, we propose a counterfactual setting that permits offline training and evaluation. We find that even a simple language model can capture text characteristics that improve acceptability.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Kai-Wei Chang was supported in part by National Science Foundation Grant IIS-1657193. Part of the work was done while Kai-Wei Chang and Kenneth C. 
Arnold visited Microsoft Research, Cambridge.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"sutcliffe-kurohashi-2000-parallel","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2000\/pdf\/248.pdf","title":"A Parallel English-Japanese Query Collection for the Evaluation of On-Line Help Systems","abstract":"An experiment concerning the creation of parallel evaluation data for information retrieval is presented. A set of English queries was gathered for the domain of word processing using Lotus Ami Pro. A set of Japanese queries was then created from these. The answers to the queries were elicited from eight respondents comprising four native speakers of each language. We first describe how the queries were created and the answers elicited. We then present analyses of the responses in each language. The results show a lower level of agreement between respondents than was expected. We discuss a refinement of the elicitation process which is designed to address this problem as well as to measure the integrity of individual respondents.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"bernard-danlos-2016-modelling","url":"https:\/\/aclanthology.org\/W16-3304","title":"Modelling Discourse in STAG: Subordinate Conjunctions and Attributing Phrases","abstract":"We propose a new model in STAG syntax and semantics for subordinate conjunctions (SubConjs) and attributing phrases: attitude\/reporting verbs (AVs; believe, say) and attributing prepositional phrases (APPs; according to). This model is discourse-oriented, and is based on the observation that SubConjs and AVs are not homogeneous categories. Indeed, previous work has shown that SubConjs can be divided into two classes according to their syntactic and semantic properties. Similarly, AVs have two different uses in discourse: evidential and intentional. While evidential AVs and APPs have strong semantic similarities, they do not appear in the same contexts when SubConjs are at play. Our proposal aims at representing these distinctions and capturing these various discourse-related interactions.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"chakraborty-etal-2011-semantic","url":"https:\/\/aclanthology.org\/W11-0803","title":"Semantic Clustering: an Attempt to Identify Multiword Expressions in Bengali","abstract":"One of the key issues in both natural language understanding and generation is the appropriate processing of Multiword Expressions (MWEs). MWE can be defined as a semantic issue of a phrase where the meaning of the phrase may not be obtained from its constituents in a straightforward manner. This paper presents an approach to identifying bigram noun-noun MWEs from a medium-size Bengali corpus by clustering the semantically related nouns and incorporating a vector space model for similarity measurement. Additional inclusion of the English WordNet::Similarity module also improves the results considerably. 
The present approach also contributes to locating clusters of synonymous nouns present in a document. Experimental results support a satisfactory conclusion based on analysis of the Precision, Recall and F-score values.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work reported in this paper is supported by a grant from the \"Indian Language to Indian Language Machine Translation (IL-ILMT) System Phase II\", funded by Department of Information and Technology (DIT), Govt. of India.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"nakatani-1991-resolving","url":"https:\/\/aclanthology.org\/P91-1053","title":"Resolving a Pragmatic Prepositional Phrase Attachment Ambiguity","abstract":"To resolve or not to resolve, that is the structural ambiguity dilemma. The traditional wisdom is to disambiguate only when it matters in terms of the meaning of the utterance, and to do so using the computationally least costly information. NLP work on PP-attachment has followed this wisdom, and much effort has been focused on formulating structural and lexical strategies for resolving noun-phrase and verb-phrase (NP-PP vs. VP-PP) attachment ambiguity (e.g. [8, 11]). In one study, statistical analysis of the distribution of lexical items in a very large text yielded 78% correct parses while two humans achieved just 85% [5]. The close performance of machine and human led the authors to pose two issues that will be addressed in this paper: is the predictive power of distributional data due to \"a complementation relation, a modification relation, or something else\", and what characterizes the attachments that escape prediction?","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The author thanks Barbara Grosz and Julia Hirschberg, who both advised this research, for valuable comments and guidance; and acknowledges current support from a National Science Foundation Graduate Fellowship. This paper stems from research carried out at Harvard University and at AT&T Bell Laboratories.","year":1991,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"jabrayilzade-tekir-2020-lgpsolver","url":"https:\/\/aclanthology.org\/2020.findings-emnlp.100","title":"LGPSolver - Solving Logic Grid Puzzles Automatically","abstract":"Logic grid puzzle (LGP) is a type of word problem where the task is to solve a problem in logic. Constraints for the problem are given in the form of textual clues. Once these clues are transformed into formal logic, a deductive reasoning process provides the solution. Solving logic grid puzzles in a fully automatic manner has been a challenge since a precise understanding of clues is necessary to develop the corresponding formal logic representation. To meet this challenge, we propose a solution that uses a DistilBERT-based classifier to classify a clue into one of the predefined predicate types for logic grid puzzles. Another novelty of the proposed solution is the recognition of comparison structures in clues. 
By collecting comparative adjectives from existing dictionaries and utilizing a semantic framework to catch comparative quantifiers, we better capture the semantics of clues concerning comparison structures, ensuring conversion to the correct logic representation. Our approach solves logic grid puzzles in a fully automated manner with 100% accuracy on the given puzzle datasets and outperforms state-of-the-art solutions by a large margin.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Tugkan Tuglular for his helpful suggestions on an earlier version of this paper. We also thank anonymous reviewers for their valuable comments.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"cao-etal-2020-balanced","url":"https:\/\/aclanthology.org\/2020.coling-main.432","title":"Balanced Joint Adversarial Training for Robust Intent Detection and Slot Filling","abstract":"Joint intent detection and slot filling has recently achieved tremendous success in advancing the performance of utterance understanding. However, many joint models still suffer from the robustness problem, especially on noisy inputs or rare\/unseen events. To address this issue, we propose a Joint Adversarial Training (JAT) model to improve the robustness of joint intent detection and slot filling, which consists of two parts: (1) automatically generating joint adversarial examples to attack the joint model, and (2) training the model to defend against the joint adversarial examples so as to robustify the model on small perturbations. As the generated joint adversarial examples have different impacts on the intent detection and slot filling loss, we further propose a Balanced Joint Adversarial Training (BJAT) model that applies a balance factor as a regularization term to the final loss function, which yields a stable training procedure. Extensive experiments and analyses on the lightweight models show that our proposed methods achieve significantly higher scores and substantially improve the robustness of both intent detection and slot filling. In addition, the combination of our BJAT with BERT-large achieves state-of-the-art results on two datasets.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by the National Key R&D Program of China (2019YFB1406302), National Natural Science Foundation of China (No. 61502033, 61472034, 61772071, 61272361 and 61672098) and the Fundamental Research Funds for the Central Universities.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"tannier-moriceau-2010-fidji","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/68_Paper.pdf","title":"FIDJI: Web Question-Answering at Quaero 2009","abstract":"This paper presents the participation of the FIDJI system in the Web Question-Answering evaluation campaign organized by Quaero in 2009. FIDJI is an open-domain question-answering system which combines syntactic information with traditional QA techniques such as named entity recognition and term weighting in order to validate answers through multiple documents. It was originally designed to process \"clean\" document collections. 
Overall results are significantly lower than in traditional campaigns but results (for French evaluation) are quite good compared to other state-of-the-art systems. They show that a syntax-based strategy, applied to uncleaned Web data, can still obtain good results. Moreover, we obtain much higher scores on \"complex\" questions, i.e. 'how' and 'why' questions, which are more representative of real user needs. These results show that questioning the Web with advanced linguistic techniques can be done without heavy pre-processing and with results that come near to the best systems that use strong resources and large structured indexes.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been partially financed by OSEO under the Quaero program.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"wu-hsieh-2010-pycwn","url":"https:\/\/aclanthology.org\/C10-3002","title":"PyCWN: a Python Module for Chinese Wordnet","abstract":"This presentation introduces a Python module (PyCWN) for accessing and processing Chinese lexical resources. In particular, our focus is on the Chinese Wordnet (CWN) that has been developed and released by the CWN group at Academia Sinica. PyCWN provides access to the Chinese Wordnet (sense and relation data) under the Python environment. The presentation further demonstrates how this module applies to a variety of lexical processing tasks as well as its potential for multilingual lexical processing.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"nakamura-2007-two","url":"https:\/\/aclanthology.org\/Y07-1035","title":"Two Types of Complex Predicate Formation: Japanese Passive and Potential Verbs","abstract":"This paper deals with the complex verb formation of passive and potential predicates and syntactic structures projected by these verbs. Though both predicates are formed with the suffix -rare, which has been assumed to originate from the same stem, they show significantly different syntactic behaviors. We propose two kinds of concatenation of base verbs and auxiliaries; passive verbs are lexically formed with the most restrictive mode of combination, while potential verbs are formed syntactically via more flexible combinatory operations of function composition. The difference in the mode of complex verb formation has significant consequences for their syntactic structures and semantic interpretations, including different combination with the honorific morphemes and subjectivization of arguments\/adjuncts of base verbs. 
We also consider the case alternation phenomena and their implications for scope construals found in potential sentences, which can be accounted for in a unified manner in terms of the optional application of function composition.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"sigurd-gawronska-1994-modals","url":"https:\/\/aclanthology.org\/C94-1018","title":"Modals as a Problem for MT","abstract":"The paper demonstrates the problem of translating modal verbs and phrases and shows how some of these problems can be overcome by choosing semantic representations which look like representations of passive verbs. These semantic representations suit alternative ways of expressing modality by e.g. passive constructions, adverbs and impersonal constructions in the target language. Various restructuring rules for English, Swedish and Russian are presented.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"beckley-2015-bekli","url":"https:\/\/aclanthology.org\/W15-4312","title":"Bekli:A Simple Approach to Twitter Text Normalization.","abstract":"Every day, Twitter users generate vast quantities of potentially useful information in the form of written language. Due to Twitter's frequently informal tone, text normalization can be a crucial element for exploiting that information. This paper outlines our approach to text normalization used in the WNUT shared task. We show that a very simple solution, powered by a modestly sized, partially-curated wordlist, combined with a modest reranking scheme, can deliver respectable results.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"chalkidis-etal-2021-paragraph","url":"https:\/\/aclanthology.org\/2021.naacl-main.22","title":"Paragraph-level Rationale Extraction through Regularization: A case study on European Court of Human Rights Cases","abstract":"Interpretability or explainability is an emerging research field in NLP. From a user-centric point of view, the goal is to build models that provide proper justification for their decisions, similar to those of humans, by requiring the models to satisfy additional constraints. To this end, we introduce a new application on legal text where, contrary to mainstream literature targeting word-level rationales, we conceive rationales as selected paragraphs in multi-paragraph structured court cases. We also release a new dataset comprising European Court of Human Rights cases, including annotations for paragraph-level rationales. We use this dataset to study the effect of already proposed rationale constraints, i.e., sparsity, continuity, and comprehensiveness, formulated as regularizers. Our findings indicate that some of these constraints are not beneficial in paragraph-level rationale extraction, while others need re-formulation to better handle the multi-label nature of the task we consider. 
We also introduce a new constraint, singularity, which further improves the quality of rationales, even compared with noisy rationale supervision. Experimental results indicate that the newly introduced task is very challenging and there is a large scope for further research.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers (esp. reviewer #2) for their constructive detailed comments. Nikolaos Aletras is supported by EPSRC grant EP\/V055712\/1, part of the European Commission CHIST-ERA programme, call 2019 XAI: Explainable Machine Learning-based Artificial Intelligence.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} -{"ID":"benko-2016-two","url":"https:\/\/aclanthology.org\/L16-1672","title":"Two Years of Aranea: Increasing Counts and Tuning the Pipeline","abstract":"The Aranea Project is targeted at the creation of a family of Gigaword web-corpora for a dozen languages that could be used for teaching language- and linguistics-related subjects at Slovak universities, as well as for research purposes in various areas of linguistics. All corpora are being built according to a standard methodology and using the same set of tools for processing and annotation, which, together with their standard size, makes them also a valuable resource for translators and contrastive studies. All our corpora are freely available either via a web interface or in a source form in an annotated vertical format.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research has been, in part, funded by the VEGA Grant Agency (Grant Number 2\/0015\/14).","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"cattoni-etal-2002-adam","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2002\/pdf\/237.pdf","title":"ADAM: The SI-TAL Corpus of Annotated Dialogues","abstract":"In this paper we describe the methodological assumptions, general architectural framework and annotation and encoding practices underlying the ADAM Corpus, which has been developed as part of the Italian national project SI-TAL. Each of the 450 dialogues is represented by an orthographic transcription and is annotated at five levels of linguistic information, namely prosody, pos tagging, syntax, semantics, and pragmatics. A coherent, unitary approach to design and application of annotation schemes was pursued across all annotation levels. Particular attention was paid to developing the schemes in order to be consistent with criteria of robustness, wide coverage and compliance with existing standards. The evaluation of the annotation revealed a high degree of both inter-annotator agreement and annotation accuracy, with very promising results concerning the usability of the annotation schemes proposed and the accuracy of the annotation applied to the corpus. 
The ADAM Corpus also represents an interesting experiment at the architectural design level, as the way in which the annotation is organized and structured, as well as represented in a given physical format, aims at maximizing further reusability of the annotated material in terms of wide circulability of the corpus across different annotation practices and research purposes.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"stubbs-2011-mae","url":"https:\/\/aclanthology.org\/W11-0416","title":"MAE and MAI: Lightweight Annotation and Adjudication Tools","abstract":"MAE and MAI are lightweight annotation and adjudication tools for corpus creation. DTDs are used to define the annotation tags and attributes, including extent tags, link tags, and non-consuming tags. Both programs are written in Java and use a stand-alone SQLite database for storage and retrieval of annotation data. Output is in stand-off XML.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Funding for this project development was provided by NIH grant NIHR21LM009633-02, PI: James Pustejovsky. Many thanks to the annotators who helped me identify bugs in the software, particularly Cornelia Parkes, Cheryl Keenan, BJ Harshfield, and all the students in the Brandeis University Spring 2011 Computer Science 216 class.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"li-etal-2012-separately","url":"https:\/\/aclanthology.org\/C12-1103","title":"A Separately Passive-Aggressive Training Algorithm for Joint POS Tagging and Dependency Parsing","abstract":"Recent study shows that parsing accuracy can be largely improved by the joint optimization of part-of-speech (POS) tagging and dependency parsing. However, the POS tagging task does not benefit much from the joint framework. We argue that the fundamental reason behind this is that the POS features are overwhelmed by the syntactic features during the joint optimization, and the joint models only prefer POS tags that are favourable solely from the parsing viewpoint. To solve this issue, we propose a separately passive-aggressive learning algorithm (SPA), which is designed to separately update the POS feature weights and the syntactic feature weights under the joint optimization framework. The proposed SPA is able to take advantage of previous joint optimization strategies to significantly improve the parsing accuracy, but also overcomes their shortcomings to significantly boost the tagging accuracy by effectively solving the syntax-insensitive POS ambiguity issues. Experiments on the Chinese Penn Treebank 5.1 (CTB5) and the English Penn Treebank (PTB) demonstrate the effectiveness of our proposed methodology and empirically verify our observations as discussed above. 
We achieve the best tagging and parsing accuracies on both datasets, 94.60% in tagging accuracy and 81.67% in parsing accuracy on CTB5, and 97.62% and 93.52% on PTB.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Meishan Zhang, for suggesting the easier way to incorporate the POS features during joint decoding, and the anonymous reviewers, for their valuable comments which led to better understanding of the proposed SPA. This work was supported by National Natural Science Foundation of China (NSFC) via grant 61133012, the National \"863\" Major Projects via grant 2011AA01A207, and the National \"863\" Leading Technology Research Project via grant 2012AA011102.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"dyvik-etal-2016-norgrambank","url":"https:\/\/aclanthology.org\/L16-1565","title":"NorGramBank: A `Deep' Treebank for Norwegian","abstract":"We present NorGramBank, a treebank for Norwegian with highly detailed LFG analyses. It is one of many treebanks made available through the INESS treebanking infrastructure. NorGramBank was constructed as a parsebank, i.e. by automatically parsing a corpus, using the wide coverage grammar NorGram. One part consisting of 350,000 words has been manually disambiguated using computer-generated discriminants. A larger part of 50 M words has been stochastically disambiguated. The treebank is dynamic: by global reparsing at certain intervals it is kept compatible with the latest versions of the grammar and the lexicon, which are continually further developed in interaction with the annotators. A powerful query language, INESS Search, has been developed for search across formalisms in the INESS treebanks, including LFG c- and f-structures. Evaluation shows that the grammar provides about 85% of randomly selected sentences with good analyses. Agreement among the annotators responsible for manual disambiguation is satisfactory, but also suggests desirable simplifications of the grammar.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"sap-etal-2020-commonsense","url":"https:\/\/aclanthology.org\/2020.acl-tutorials.7","title":"Commonsense Reasoning for Natural Language Processing","abstract":"Commonsense knowledge, such as knowing that \"bumping into people annoys them\" or \"rain makes the road slippery\", helps humans navigate everyday situations seamlessly (Apperly, 2010). Yet, endowing machines with such human-like commonsense reasoning capabilities has remained an elusive goal of artificial intelligence research for decades (Gunning, 2018).\nCommonsense knowledge and reasoning have received renewed attention from the natural language processing (NLP) community in recent years, yielding multiple exploratory research directions into automated commonsense understanding. Recent efforts to acquire and represent common knowledge resulted in large knowledge graphs, acquired through extractive methods (Speer et al., 2017) or crowdsourcing (Sap et al., 2019a). 
Simultaneously, a large body of work in integrating reasoning capabilities into downstream tasks has emerged, allowing the development of smarter dialogue and question answering agents.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"murata-etal-2001-using","url":"https:\/\/aclanthology.org\/W01-1415","title":"Using a Support-Vector Machine for Japanese-to-English Translation of Tense, Aspect, and Modality","abstract":null,"label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2001,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"crookston-1990-e","url":"https:\/\/aclanthology.org\/C90-2012","title":"The E-Framework: Emerging Problems","abstract":"Bech & Nygaard (1988) have described a formalism for NLP, the E-Framework (EFW). Two kinds of problem are emerging. Formally, there are problems with a complete formalisation of certain details of the EFW, but these will not be examined in this paper. Substantively, the question arises as to what mileage there is in this formalism for the MT problem. Possibly this question arises about any new NLP formalism, but Raw et al (1988) describe the EFW in an MT context. The EFW arose in reaction to the CAT formalism for MT (Arnold & des Tombe (1987), Arnold et al (1986)). This was a sequential stratificational formalism in which each level of representation was policed by its own grammar. The essentials of this process can be diagrammed: (1) Grammar_i generates Repn_i, which a t-grammar maps to Repn_j, generated by Grammar_j. *This research has been carried out within the British Group of the EUROTRA project, jointly funded by the Commission of the European Communities and the United Kingdom's Department of Trade and Industry. I am grateful for suggestions and comments from Doug Arnold, Lee Humphreys, Louisa Sadler, Andrew Way, and a COLING reviewer.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1990,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"shlain-etal-2020-syntactic","url":"https:\/\/aclanthology.org\/2020.acl-demos.3","title":"Syntactic Search by Example","abstract":"We present a system that allows a user to search a large linguistically annotated corpus using syntactic patterns over dependency graphs. In contrast to previous attempts to this effect, we introduce a lightweight query language that does not require the user to know the details of the underlying syntactic representations, and instead allows them to query the corpus by providing an example sentence coupled with simple markup. Search is performed at an interactive speed due to an efficient linguistic graph-indexing and retrieval engine. This allows for rapid exploration, development and refinement of syntax-based queries. We demonstrate the system using queries over two corpora: the English wikipedia, and a collection of English pubmed abstracts. 
A demo of the wikipedia system is available at: https:\/\/allenai.github.io\/spike\/.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the team at LUM.ai and the University of Arizona, in particular Mihai Surdeanu, Marco Valenzuela-Esc\u00e1rcega, Gus Hahn-Powell and Dane Bell, for fruitful discussion and their work on the Odinson system. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"honovich-etal-2022-true","url":"https:\/\/aclanthology.org\/2022.dialdoc-1.19","title":"TRUE: Re-evaluating Factual Consistency Evaluation","abstract":"Grounded text generation systems often generate text that contains factual inconsistencies, hindering their real-world applicability. Automatic factual consistency evaluation may help alleviate this limitation by accelerating evaluation cycles, filtering inconsistent outputs and augmenting training data. While attracting increasing attention, such evaluation metrics are usually developed and evaluated in silo for a single task or dataset, slowing their adoption. Moreover, previous meta-evaluation protocols focused on system-level correlations with human annotations, which leave the example-level accuracy of such metrics unclear. In this work, we introduce TRUE: a comprehensive study of factual consistency metrics on a standardized collection of existing texts from diverse tasks, manually annotated for factual consistency. Our standardization enables an example-level meta-evaluation protocol that is more actionable and interpretable than previously reported correlations, yielding clearer quality measures. Across diverse state-of-the-art metrics and 11 datasets we find that large-scale NLI and question generation-and-answering-based approaches achieve strong and complementary results. We recommend those methods as a starting point for model and metric developers, and hope TRUE will foster progress towards even better methods.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} -{"ID":"lefever-hoste-2010-semeval","url":"https:\/\/aclanthology.org\/S10-1003","title":"SemEval-2010 Task 3: Cross-Lingual Word Sense Disambiguation","abstract":"The goal of this task is to evaluate the feasibility of multilingual WSD on a newly developed multilingual lexical sample data set. Participants were asked to automatically determine the contextually appropriate translation of a given English noun in five languages, viz. Dutch, German, Italian, Spanish and French. 
This paper reports on the sixteen submissions from the five different participating teams.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"liu-sarkar-2007-experimental","url":"https:\/\/aclanthology.org\/D07-1062","title":"Experimental Evaluation of LTAG-Based Features for Semantic Role Labeling","abstract":"This paper proposes the use of Lexicalized Tree-Adjoining Grammar (LTAG) formalism as an important additional source of features for the Semantic Role Labeling (SRL) task. Using a set of one-vs-all Support Vector Machines (SVMs), we evaluate these LTAG-based features. Our experiments show that LTAG-based features can improve SRL accuracy significantly. When compared with the best known set of features that are used in state-of-the-art SRL systems we obtain an improvement in F-score from 82.34% to 85.25%.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was partially supported by NSERC, Canada (RGPIN: 264905). We would like to thank Aravind Joshi, Libin Shen, and the anonymous reviewers for their comments.","year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"jalali-farahani-ghassem-sani-2021-bert","url":"https:\/\/aclanthology.org\/2021.ranlp-1.73","title":"BERT-PersNER: A New Model for Persian Named Entity Recognition","abstract":"Named entity recognition (NER) is one of the major tasks in natural language processing. A named entity is often a word or expression that bears a valuable piece of information, which can be effectively employed by some major NLP tasks such as machine translation, question answering, and text summarization. In this paper, we introduce a new model called BERT-PersNER (BERT-based Persian Named Entity Recognizer), in which we have applied transfer learning and active learning approaches to NER in Persian, which is regarded as a low-resource language. Like many others, we have used a Conditional Random Field for tag decoding in our proposed architecture. BERT-PersNER has outperformed two available studies in Persian NER, in most cases of our experiments using the supervised learning approach on two Persian datasets called Arman and Peyma. Besides, as the very first effort to try active learning in Persian NER, using only 30% of Arman and 20% of Peyma, we achieved 92.15% and 92.41%, respectively, of the performance of the mentioned supervised learning experiments.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"zaretskaya-2019-optimising","url":"https:\/\/aclanthology.org\/W19-8718","title":"Optimising the Machine Translation Post-editing Workflow","abstract":"Like most large LSPs today, TransPerfect offers a variety of services based on machine translation (MT), including raw MT for casual low-cost translation, and different levels of MT post-editing (MTPE). 
The volume of translations performed with MTPE in the company has been growing since 2016 and continues to grow to this date (Figure 1; the numbers on the Y axis have been omitted as commercially sensitive information), which means tens of millions of words post-edited each month. In order to implement MT at such a large scale, the process has to be as easy as possible for the users (Project Managers and translators), with minimal or no additional steps in the workflow.\nIn our case, MT is integrated in our translation management system, which makes it very easy to make the switch from a purely human translation workflow to the post-editing workflow (Figure 2). In this article we will share the methods we used to optimise the workflows when implementing MT, covering both the technical aspects and the processes involved. ","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"luzzati-etal-2014-human","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/771_Paper.pdf","title":"Human annotation of ASR error regions: Is ``gravity'' a sharable concept for human annotators?","abstract":"This paper is concerned with human assessments of the severity of errors in ASR outputs. We did not design any guidelines so that each annotator involved in the study could consider the \"seriousness\" of an ASR error using their own scientific background. Eight human annotators were involved in an annotation task on three distinct corpora, one of the corpora being annotated twice, with this duplicate annotation hidden from the annotators. None of the computed results (inter-annotator agreement, edit distance, majority annotation) allow any strong correlation between the considered criteria and the level of seriousness to be shown, which underlines the difficulty for a human to determine whether an ASR error is serious or not.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the French National Agency for Research as part of the project VERA (adVanced ERrors Analysis for speech recognition) under grants ANR-2012-BS02-006-04. We thank Dr Paul Del\u00e9glise, Dr Yannick Est\u00e8ve and Dr Olivier Galibert for their help in this work and their useful comments.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"das-kannan-2014-discovering","url":"https:\/\/aclanthology.org\/C14-1082","title":"Discovering Topical Aspects in Microblogs","abstract":"We address the problem of discovering topical phrases or \"aspects\" from microblogging sites like Twitter, that correspond to key talking points or buzz around a particular topic or entity of interest. Inferring such topical aspects enables various applications such as trend detection and opinion mining for business analytics. However, mining high-volume microblog streams for aspects poses unique challenges due to the inherent noise, redundancy and ambiguity in users' social posts. We address these challenges by using a probabilistic model that incorporates various global and local indicators such as \"uniqueness\", \"diversity\" and \"burstiness\" of phrases, to infer relevant aspects. 
Our model is learned using an EM algorithm that uses automatically generated noisy labels, without requiring manual effort or domain knowledge. We present results on three months of Twitter data across different types of entities to validate our approach.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} -{"ID":"dekhili-etal-2019-augmenting","url":"https:\/\/aclanthology.org\/W19-3644","title":"Augmenting Named Entity Recognition with Commonsense Knowledge","abstract":null,"label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"kebriaei-etal-2019-emad","url":"https:\/\/aclanthology.org\/S19-2107","title":"Emad at SemEval-2019 Task 6: Offensive Language Identification using Traditional Machine Learning and Deep Learning approaches","abstract":"In this paper, the methods used and the results obtained by our team, Emad, on the OffensEval 2019 shared task organized at SemEval 2019 are presented. The OffensEval shared task includes three sub-tasks, namely Offensive language identification, Automatic categorization of offense types and Offense target identification. We participated in subtask A and tried various methods including traditional machine learning methods, deep learning methods and also a combination of the first two sets of methods. We also proposed a data augmentation method using word embedding to improve the performance of our methods. The results show that the augmentation approach outperforms other methods in terms of macro-f1.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} -{"ID":"van-noord-etal-1989-approach","url":"https:\/\/aclanthology.org\/E89-1040","title":"An Approach to Sentence-Level Anaphora in Machine Translation","abstract":"Theoretical research in the area of machine translation usually involves the search for and creation of an appropriate formalism. An important issue in this respect is the way in which the compositionality of translation is to be defined. In this paper, we will introduce the anaphoric component of the Mimo formalism. It makes the definition and translation of anaphoric relations possible, relations which are usually problematic for systems that adhere to strict compositionality. In Mimo, the translation of anaphoric relations is compositional. The anaphoric component is used to define linguistic phenomena such as wh-movement, the passive and the binding of reflexives and pronouns monolingually. The actual working of the component will be shown in this paper by means of a detailed discussion of wh-movement.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work we report here had its beginnings in work within the Eurotra framework. MiMo, however, is not \"the\" official Eurotra system. 
It differs in many critical respects from e.g Bech & Nygaard (1988) . MiMo is the result of the joint effort of Essex, Utrecht and Dominique Petitpierre from ISSCO, Geneve. The research reported in this paper was supported by the European Community, the DTI (Department of Trade and Industry) and the NBBI (Nederlands Bureau voor Bibliotheekwezen en Informatieverzorging). S Shieber, 1986: An introduction to unification based approaches to grammar. CSLI 1988.","year":1989,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"oshima-2017-remarks","url":"https:\/\/aclanthology.org\/Y17-1025","title":"Remarks on epistemically biased questions","abstract":"Some varieties of polar interrogatives (polar questions) convey an epistemic bias toward a positive or negative answer. This work takes up three paradigmatic kinds of biased polar interrogatives: (i) positively-biased negative polar interrogatives, (ii) negatively-biased negative polar interrogatives, and (iii) rising taginterrogatives, and aims to supplement existing descriptions of what they convey besides asking a question. The novel claims are: (i) a positively-biased negative polar interrogative conveys that the speaker assumes that the core proposition is likely to be something that is or should be activated in the hearer's mind, (ii) the bias induced by a negatively-biased negative polar interrogative makes reference to the speaker's assumptions about the hearer's beliefs, and (iii) the biases associated with the three constructions differ in strength, the one of the rising tag-interrogative being the strongest.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Many thanks to David Beaver, John Beavers, Michael Everdell, Daniel Lassiter, Maribel Romero, Yasutada Sudo, and Stephen Wechsler for helpful comments and discussions. This work was supported by JSPS KAKENHI Grant Number 15K02476.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"poswiata-perelkiewicz-2022-opi","url":"https:\/\/aclanthology.org\/2022.ltedi-1.40","title":"OPI@LT-EDI-ACL2022: Detecting Signs of Depression from Social Media Text using RoBERTa Pre-trained Language Models","abstract":"This paper presents our winning solution for the Shared Task on Detecting Signs of Depression from Social Media Text at LT-EDI-ACL2022. The task was to create a system that, given social media posts in English, should detect the level of depression as 'not depressed', 'moderately depressed' or 'severely depressed'. We based our solution on transformer-based language models. We fine-tuned selected models: BERT, RoBERTa, XLNet, of which the best results were obtained for RoBERTa large. Then, using the prepared corpus, we trained our own language model called DepRoBERTa (RoBERTa for Depression Detection). Fine-tuning of this model improved the results. The third solution was to use the ensemble averaging, which turned out to be the best solution. It achieved a macro-averaged F1-score of 0.583. 
The source code of the prepared solution is available at https:\/\/github.com\/rafalposwiata\/depressiondetection-lt-edi-2022.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2022,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"wilks-etal-2010-demonstration","url":"https:\/\/aclanthology.org\/P10-4013","title":"Demonstration of a Prototype for a Conversational Companion for Reminiscing about Images","abstract":"This paper describes an initial prototype demonstrator of a Companion, designed as a platform for novel approaches to the following: 1) The use of Information Extraction (IE) techniques to extract the content of incoming dialogue utterances after an Automatic Speech Recognition (ASR) phase, 2) The conversion of the input to Resource Descriptor Format (RDF) to allow the generation of new facts from existing ones, under the control of a Dialogue Manager (DM), that also has access to stored knowledge and to open knowledge accessed in real time from the web, all in RDF form, 3) A DM implemented as a stack and network virtual machine that models mixed initiative in dialogue control, and 4) A tuned dialogue act detector based on corpus evidence. The prototype platform was evaluated, and we describe this briefly; it is also designed to support more extensive forms of emotion detection carried by both speech and lexical content, as well as extended forms of machine learning.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was funded by the Companions project (2006-2009) sponsored by the European Commission as part of the Information Society Technologies (IST) programme under EC grant number IST-FP6-034434.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"tan-etal-2013-learning","url":"https:\/\/aclanthology.org\/P13-2016","title":"Learning to Order Natural Language Texts","abstract":"Ordering texts is an important task for many NLP applications. Most previous works on summary sentence ordering rely on the contextual information (e.g. adjacent sentences) of each sentence in the source document. In this paper, we investigate a more challenging task of ordering a set of unordered sentences without any contextual information. We introduce a set of features to characterize the order and coherence of natural language texts, and use the learning-to-rank technique to determine the order of any two sentences. We also propose to use the genetic algorithm to determine the total order of all sentences. 
Evaluation results on a news corpus show the effectiveness of our proposed method.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work was supported by NSFC (61170166), Beijing Nova Program (2008B03) and National High-Tech R&D Program (2012AA011101).","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"debbarma-etal-2012-morphological","url":"https:\/\/aclanthology.org\/W12-5004","title":"Morphological Analyzer for Kokborok","abstract":"Morphological analysis is concerned with retrieving the syntactic and morphological properties or the meaning of a morphologically complex word. Morphological analysis retrieves the grammatical features and properties of an inflected word. However, this paper introduces the design and implementation of a Morphological Analyzer for Kokborok, a resource constrained and less computerized Indian language. A database driven affix stripping algorithm has been used to design the Morphological Analyzer. It analyzes the Kokborok word forms and produces several grammatical information associated with the words. The Morphological Analyzer for Kokborok has been tested on 56732 Kokborok words; an accuracy of 80% was obtained on a manual check.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"sasaki-etal-2008-event","url":"https:\/\/aclanthology.org\/C08-1096","title":"Event Frame Extraction Based on a Gene Regulation Corpus","abstract":"This paper describes the supervised acquisition of semantic event frames based on a corpus of biomedical abstracts, in which the biological process of E. coli gene regulation has been linguistically annotated by a group of biologists in the EC research project \"BOOTStrep\". Gene regulation is one of the rapidly advancing areas for which information extraction could boost research. Event frames are an essential linguistic resource for extraction of information from biological literature. This paper presents a specification for linguistic-level annotation of gene regulation events, followed by novel methods of automatic event frame extraction from text. The event frame extraction performance has been evaluated with 10-fold cross validation. The experimental results show that a precision of nearly 50% and a recall of around 20% are achieved. Since the goal of this paper is event frame extraction, rather than event instance extraction, the issue of low recall could be solved by applying the methods to a larger-scale corpus.
This paper describes the automatic extraction of linguistic event frames based on a corpus of MEDLINE abstracts that has been annotated with gene regulation events by a group of domain experts","label_nlp4sg":1,"task":null,"method":null,"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"elming-habash-2007-combination","url":"https:\/\/aclanthology.org\/N07-2007","title":"Combination of Statistical Word Alignments Based on Multiple Preprocessing Schemes","abstract":"We present an approach to using multiple preprocessing schemes to improve statistical word alignments. We show a relative reduction of alignment error rate of about 38%.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"fuchs-acriche-2022-product","url":"https:\/\/aclanthology.org\/2022.ecnlp-1.12","title":"Product Titles-to-Attributes As a Text-to-Text Task","abstract":"Online marketplaces use attribute-value pairs, such as brand, size, size type, color, etc. to help define important and relevant facts about a listing. These help buyers to curate their search results using attribute filtering and overall create a richer experience. Despite their critical importance for listings' discoverability, getting sellers to input tens of different attribute-value pairs per listing is costly and often results in missing information. This can later translate to the unnecessary removal of relevant listings from the search results when buyers are filtering by attribute values. In this paper we demonstrate using a Text-to-Text hierarchical multilabel ranking model framework to predict the most relevant attributes per listing, along with their expected values, using historic user behavioral data. This solution helps sellers by allowing them to focus on verifying information on attributes that are likely to be used by buyers, and thus, increase the expected recall for their listings. Specifically for eBay's case we show that using this model can improve the relevancy of the attribute extraction process by 33.2% compared to the current highly optimized production system. Apart from the empirical contribution, the highly generalized nature of the framework presented in this paper makes it relevant for many high-volume search-driven websites.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"rosario-hearst-2005-multi","url":"https:\/\/aclanthology.org\/H05-1092","title":"Multi-way Relation Classification: Application to Protein-Protein Interactions","abstract":"We address the problem of multi-way relation classification, applied to identification of the interactions between proteins in bioscience text. A major impediment to such work is the acquisition of appropriately labeled training data; for our experiments we have identified a database that serves as a proxy for training data.
We use two graphical models and a neural net for the classification of the interactions, achieving an accuracy of 64% for a 10-way distinction between relation types. We also provide evidence that the exploitation of the sentences surrounding a citation to a paper can yield higher accuracy than other sentences.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We thank Janice Hamer for her help in labeling examples and other biological insights. This research was supported by a grant from NSF DBI-0317510 and a gift from Genentech.","year":2005,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"hu-etal-2021-one","url":"https:\/\/aclanthology.org\/2021.eacl-main.296","title":"One-class Text Classification with Multi-modal Deep Support Vector Data Description","abstract":"This work presents multi-modal deep SVDD (mSVDD) for one-class text classification. By extending the uni-modal SVDD to a multiple modal one, we build mSVDD with multiple hyperspheres that enable us to build a much better description for target one-class data. Additionally, the end-to-end architecture of mSVDD can jointly handle neural feature learning and one-class text learning. We also introduce a mechanism for incorporating negative supervision in the absence of real negative data, which can be beneficial to the mSVDD model. We conduct experiments on Reuters and 20 Newsgroup datasets, and the experimental results demonstrate that mSVDD outperforms uni-modal SVDD and mSVDD can get further improvements when negative supervision is incorporated.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to gratefully acknowledge the anonymous reviewers for their helpful comments and suggestions. Chenlong Hu acknowledges the support from China Scholarship Council (CSC).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"hardalov-etal-2021-cross","url":"https:\/\/aclanthology.org\/2021.emnlp-main.710","title":"Cross-Domain Label-Adaptive Stance Detection","abstract":"Stance detection concerns the classification of a writer's viewpoint towards a target. There are different task variants, e.g., stance of a tweet vs. a full article, or stance with respect to a claim vs. an (implicit) topic. Moreover, task definitions vary, which includes the label inventory, the data collection, and the annotation protocol. All these aspects hinder cross-domain studies, as they require changes to standard domain adaptation approaches. In this paper, we perform an in-depth analysis of 16 stance detection datasets, and we explore the possibility for cross-domain learning from them. Moreover, we propose an end-to-end unsupervised framework for out-of-domain prediction of unseen, user-defined labels. In particular, we combine domain adaptation techniques such as mixture of experts and domain-adversarial training with label embeddings, and we demonstrate sizable performance gains over strong baselines, both (i) in-domain, i.e., for seen targets, and (ii) out-of-domain, i.e., for unseen targets.
Finally, we perform an exhaustive analysis of the cross-domain results, and we highlight the important factors influencing the model performance.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their helpful questions and comments, which have helped us improve the quality of the paper. We also would like to thank Guillaume Bouchard for the useful feedback. Finally, we thank the authors of the stance datasets for open-sourcing and providing us with their data. poledb: We used the domains Healthcare, Guns, Gay Rights and God for training, Abortion for development, and Creation for testing. rumor: We used the airfrance rumour for our test set, and we split the remaining data in ratio 9:1 for training and development, respectively. wtwt: We used DIS_FOXA operation for testing, AET_HUM for development, and the rest for training. To standardize the targets, we rewrote them as sentences, i.e., company X acquires company Y. scd: We used a split with Marijuana for development, Obama for testing, and the rest for training. semeval2016t6: We split it to increase the size of the development set. snopes: We adjusted the splits for compatibility with the stance setup. We further extracted and converted the rumours and their evidence into target-context pairs.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} -{"ID":"cvrcek-etal-2012-legal","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/775_Paper.pdf","title":"Legal electronic dictionary for Czech","abstract":"In the paper the results of the project of Czech Legal Electronic dictionary (PES) are presented. During the 4-year project the large legal terminological dictionary of Czech was created in the form of the electronic lexical database enriched with a hierarchical ontology of legal terms. It contains approx. 10,000 entries: legal terms together with their ontological relations and hypertext references. In the second part of the project the web interface based on the platform DEBII has been designed and implemented that allows users to browse and search effectively the database. At the same time the Czech Dictionary of Legal Terms will be generated from the database and later printed as a book. Inter-annotator agreement in the manual selection of legal terms was high: approx. 95%.","label_nlp4sg":1,"task":null,"method":null,"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} -{"ID":"miller-1999-lexical","url":"https:\/\/aclanthology.org\/P99-1003","title":"The Lexical Component of Natural Language Processing","abstract":"Computational linguistics is generally considered to be the branch of engineering that uses computers to do useful things with linguistic signals, but it can also be viewed as an extended test of computational theories of human cognition; it is this latter perspective that psychologists find most interesting. Language provides a critical test for the hypothesis that physical symbol systems are adequate to perform all human cognitive functions. As yet, no adequate system for natural language processing has approached human levels of performance.
Of the various problems that natural language processing has revealed, polysemy is probably the most frustrating. People deal with polysemy so easily that potential ambiguities are overlooked, whereas computers must work hard to do far less well. A linguistic approach generally involves a parser, a lexicon, and some ad hoc rules for using linguistic context to identify the context-appropriate sense. A statistical approach generally involves the use of word co-occurrence statistics to create a semantic hyperspace where each word, regardless of its polysemy, is represented as a single vector. Each approach has strengths and limitations; some combination is often proposed. Various possibilities will be discussed in terms of their psychological plausibility.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1999,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} -{"ID":"battu-etal-2018-predicting","url":"https:\/\/aclanthology.org\/Y18-1007","title":"Predicting the Genre and Rating of a Movie Based on its Synopsis","abstract":"Movies are one of the most prominent means of entertainment. The widespread use of the Internet in recent times has led to large volumes of data related to movies being generated and shared online. People often prefer to express their views online in English as compared to other local languages. This leaves us with very little data in languages apart from English to work on. To overcome this, we created the Multi-Language Movie Review Dataset (MLMRD). The dataset consists of genre, rating, and synopsis of a movie across multiple languages, namely Hindi, Telugu, Tamil, Malayalam, Korean, French, and Japanese. The genre of a movie can be identified by its synopsis. Though the rating of a movie may depend on multiple factors like the performance of actors, screenplay, direction, etc., in most cases the synopsis plays a crucial role in the movie rating. In this work, we provide various model architectures that can be used to predict the genre and the rating of a movie across various languages present in our dataset based on the synopsis.","label_nlp4sg":0,"task":null,"method":null,"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"adriaens-1989-parallel","url":"https:\/\/aclanthology.org\/W89-0232","title":"The Parallel Expert Parser: A Meaning-Oriented, Lexically-Guided, Parallel-Interactive Model of Natural Language Understanding","abstract":"International Parsing Workshop '89","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1989,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"bandyopadhyay-etal-2021-university","url":"https:\/\/aclanthology.org\/2021.wmt-1.46","title":"The University of Maryland, College Park Submission to Large-Scale Multilingual Shared Task at WMT 2021","abstract":"This paper describes the system submitted to Large-Scale Multilingual Shared Task (Small Task #2) at WMT 2021. It is based on the massively multilingual open-source FLO-RES101_MM100 model, with selective finetuning.
Our best-performing system reported a 15.72 average BLEU score for the task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"zerva-ananiadou-2015-event","url":"https:\/\/aclanthology.org\/W15-3804","title":"Event Extraction in pieces:Tackling the partial event identification problem on unseen corpora","abstract":"Biomedical event extraction systems have the potential to provide a reliable means of enhancing knowledge resources and mining the scientific literature. However, to achieve this goal, it is necessary that current event extraction models are improved, such that they can be applied confidently to unseen data with a minimal rate of error. Motivated by this requirement, this work targets a particular type of error, namely partial events, where an event is missing one or more arguments. Specifically, we attempt to improve the performance of a state-of-the-art event extraction tool, EventMine, when applied to a new cancer pathway curation corpus. We propose a post-processing ranking approach based on relaxed constraints, in order to reconsider the candidate arguments for each event trigger, and suggest possible new arguments. The proposed methodology, applicable to the output of any event extraction system, achieves an improvement in argument recall of 2%-4% when applied to EventMine output, and thus constitutes a promising direction for further developments.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"This work was supported by the DARPA funded Big Mechanism Project, as well as by the EPSRC funded Centre for Doctoral Training in Computer Science scholarship. We would like to thank Dr. Riza Theresa Batista-Navarro and Dr. Ioannis Korkontzelos for the useful discussions and feedback at critical points. Finally, we would like to thank our referees for their constructive input.","year":2015,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"clarke-2009-context","url":"https:\/\/aclanthology.org\/W09-0215","title":"Context-theoretic Semantics for Natural Language: an Overview","abstract":"We present the context-theoretic framework, which provides a set of rules for the nature of composition of meaning based on the philosophy of meaning as context. Principally, in the framework the composition of the meaning of words can be represented as multiplication of their representative vectors, where multiplication is distributive with respect to the vector space. We discuss the applicability of the framework to a range of techniques in natural language processing, including subsequence matching, the lexical entailment model of Dagan et al. 
(2005), vector-based representations of taxonomies, statistical parsing and the representation of uncertainty in logical semantics.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I am very grateful to my supervisor David Weir for all his help in the development of these ideas, and to Rudi Lutz and the anonymous reviewers for many useful comments and suggestions.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"yamakoshi-etal-2021-evaluation","url":"https:\/\/aclanthology.org\/2021.wat-1.12","title":"Evaluation Scheme of Focal Translation for Japanese Partially Amended Statutes","abstract":"For updating the translations of Japanese statutes based on their amendments, we need to consider the translation \"focality;\" that is, we should only modify expressions that are relevant to the amendment and retain the others to avoid misconstruing its contents. In this paper, we introduce an evaluation metric and a corpus to improve focality evaluations. Our metric is called an Inclusive Score for DIfferential Translation (ISDIT). ISDIT consists of two factors: (1) the n-gram recall of expressions unaffected by the amendment and (2) the n-gram precision of the output compared to the reference. This metric supersedes an existing one for focality by simultaneously calculating the translation quality of the changed expressions in addition to that of the unchanged expressions. We also newly compile a corpus for Japanese partially amendment translation that secures the focality of the post-amendment translations, while an existing evaluation corpus does not. With the metric and the corpus, we examine the performance of existing translation methods for Japanese partially amendment translations.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Decent Work and Economic Growth","goal2":"Partnership for the goals","goal3":"Peace, Justice and Strong Institutions","acknowledgments":"This work was partly supported by JSPS KAKENHI Grant Number 18H03492 and 21H03772.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":1,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":1} +{"ID":"kalpakchi-boye-2021-bert","url":"https:\/\/aclanthology.org\/2021.inlg-1.43","title":"BERT-based distractor generation for Swedish reading comprehension questions using a small-scale dataset","abstract":"An important part when constructing multiple-choice questions (MCQs) for reading comprehension assessment are the distractors, the incorrect but preferably plausible answer options. In this paper, we present a new BERT-based method for automatically generating distractors using only a small-scale dataset. We also release a new such dataset of Swedish MCQs (used for training the model), and propose a methodology for assessing the generated distractors. Evaluation shows that from a student's perspective, our method generated one or more plausible distractors for more than 50% of the MCQs in our test set. From a teacher's perspective, about 50% of the generated distractors were deemed appropriate. We also do a thorough analysis of the results.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by Vinnova (Sweden's Innovation Agency) within project 2019-02997.
We would like to thank the anonymous reviewers for their comments, as well as Gabriel Skantze and Bram Willemsen for their helpful feedback prior to the submission of the paper.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"lee-etal-2018-character","url":"https:\/\/aclanthology.org\/C18-1273","title":"Character-Level Feature Extraction with Densely Connected Networks","abstract":"Generating character-level features is an important step for achieving good results in various natural language processing tasks. To alleviate the need for human labor in generating hand-crafted features, methods that utilize neural architectures such as Convolutional Neural Network (CNN) or Recurrent Neural Network (RNN) to automatically extract such features have been proposed and have shown great results. However, CNN generates position-independent features, and RNN is slow since it needs to process the characters sequentially. In this paper, we propose a novel method of using a densely connected network to automatically extract character-level features. The proposed method does not require any language or task specific assumptions, and shows robustness and effectiveness while being faster than CNN- or RNN-based methods. Evaluating this method on three sequence labeling tasks, namely slot tagging, Part-of-Speech (POS) tagging, and Named-Entity Recognition (NER), we obtain state-of-the-art performance with a 96.62 F1-score and 97.73% accuracy on slot tagging and POS tagging, respectively, and comparable performance to the state-of-the-art 91.13 F1-score on NER.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"ananthakrishnan-etal-2010-semi","url":"https:\/\/aclanthology.org\/W10-2916","title":"A Semi-Supervised Batch-Mode Active Learning Strategy for Improved Statistical Machine Translation","abstract":"The availability of substantial, in-domain parallel corpora is critical for the development of high-performance statistical machine translation (SMT) systems. Such corpora, however, are expensive to produce due to the labor intensive nature of manual translation. We propose to alleviate this problem with a novel, semi-supervised, batch-mode active learning strategy that attempts to maximize in-domain coverage by selecting sentences, which represent a balance between domain match, translation difficulty, and batch diversity. Simulation experiments on an English-to-Pashto translation task show that the proposed strategy not only outperforms the random selection baseline, but also traditional active learning techniques based on dissimilarity to existing training data.
Our approach achieves a relative improvement of 45.9% in BLEU over the seed baseline, while the closest competitor gained only 24.8% with the same number of selected sentences.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"zhao-grishman-2005-extracting","url":"https:\/\/aclanthology.org\/P05-1052","title":"Extracting Relations with Integrated Information Using Kernel Methods","abstract":"Entity relation detection is a form of information extraction that finds predefined relations between pairs of entities in text. This paper describes a relation detection approach that combines clues from different levels of syntactic processing using kernel methods. Information from three different levels of processing is considered: tokenization, sentence parsing and deep dependency analysis. Each source of information is represented by kernel functions. Then composite kernels are developed to integrate and extend individual kernels so that processing errors occurring at one level can be overcome by information from other levels. We present an evaluation of these methods on the 2004 ACE relation detection task, using Support Vector Machines, and show that each level of syntactic processing contributes useful information for this task. When evaluated on the official test data, our approach produced very competitive ACE value scores. We also compare the SVM with KNN on different kernels.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by the Defense Advanced Research Projects Agency under Grant N66001-04-1-8920 from SPAWAR San Diego, and by the National Science Foundation under Grant ITS-0325657. This paper does not necessarily reflect the position of the U.S. Government. We wish to thank Adam Meyers of the NYU NLP group for his help in producing deep dependency analyses.","year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"aloraini-etal-2020-neural","url":"https:\/\/aclanthology.org\/2020.crac-1.11","title":"Neural Coreference Resolution for Arabic","abstract":"No neural coreference resolver for Arabic exists; in fact, we are not aware of any learning-based coreference resolver for Arabic since Bj\u00f6rkelund and Kuhn (2014). In this paper, we introduce a coreference resolution system for Arabic based on Lee et al.'s end-to-end architecture combined with the Arabic version of BERT and an external mention detector. As far as we know, this is the first neural coreference resolution system aimed specifically at Arabic, and it substantially outperforms the existing state-of-the-art on OntoNotes 5.0 with a gain of 15.2 CoNLL F1 points.
We also discuss the current limitations of the task for Arabic and possible approaches that can tackle these challenges.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by the DALI project, ERC Grant 695662, in part by the Human Rights in the Era of Big Data and Technology (HRBDT) project, ESRC grant ES\/M010236\/1.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"jager-etal-2017-using","url":"https:\/\/aclanthology.org\/E17-1113","title":"Using support vector machines and state-of-the-art algorithms for phonetic alignment to identify cognates in multi-lingual wordlists","abstract":"Most current approaches in phylogenetic linguistics require as input multilingual word lists partitioned into sets of etymologically related words (cognates). Cognate identification is so far done manually by experts, which is time-consuming and as of yet only available for a small number of well-studied language families. Automatizing this step will greatly expand the empirical scope of phylogenetic methods in linguistics, as raw wordlists (in phonetic transcription) are much easier to obtain than wordlists in which cognate words have been fully identified and annotated, even for under-studied languages. A couple of different methods have been proposed in the past, but they are either disappointing regarding their performance or not applicable to larger datasets. Here we present a new approach that uses support vector machines to unify different state-of-the-art methods for phonetic alignment and cognate detection within a single framework. Training and evaluating this method on a typologically broad collection of gold-standard data shows it to be superior to the existing state of the art.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by the ERC Advanced Grant 324246 EVOLAEMP (GJ, PS), the DFG-KFG 2237 Words, Bones, Genes, Tools (GJ),","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"chang-2020-taiwan","url":"https:\/\/aclanthology.org\/2020.rocling-1.38","title":"The Taiwan Biographical Database (TBDB): An Introduction","abstract":"In the future, we will continue to increase both the quality and quantity of the database and also develop new analysis tools.\nThis speech introduces the development of a text retrieval and mining system for Taiwanese historical people -- Taiwan Biographical Database (TBDB). It describes the characteristics of personages in TBDB, highlights the system architecture and preliminary achievement of TBDB.
Finally, this talk elaborates on the lessons learned through the creation of TBDB, and the future plans.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"pitenis-etal-2020-offensive","url":"https:\/\/aclanthology.org\/2020.lrec-1.629","title":"Offensive Language Identification in Greek","abstract":"As offensive language has become a rising issue for online communities and social media platforms, researchers have been investigating ways of coping with abusive content and developing systems to detect its different types: cyberbullying, hate speech, aggression, etc. With a few notable exceptions, most research on this topic so far has dealt with English. This is mostly due to the availability of language resources for English. To address this shortcoming, this paper presents the first Greek annotated dataset for offensive language identification: the Offensive Greek Tweet Dataset (OGTD). OGTD is a manually annotated dataset containing 4,779 posts from Twitter annotated as offensive and not offensive. Along with a detailed description of the dataset, we evaluate several computational models trained and tested on this data.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"We would like to acknowledge Maria, Raphael and Anastasia, the team of volunteer annotators that provided their free time and efforts to help us produce v1.0 of the dataset of Greek tweets for offensive language detection, as well as Fotini and that helped review tweets with ambivalent labels. Additionally, we would like to express our sincere gratitude to the LightTag team and especially to Tal Perry for granting us free use for their annotation platform.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} +{"ID":"xu-etal-2021-syntax","url":"https:\/\/aclanthology.org\/2021.acl-long.420","title":"Syntax-Enhanced Pre-trained Model","abstract":"We study the problem of leveraging the syntactic structure of text to enhance pre-trained models such as BERT and RoBERTa. Existing methods utilize syntax of text either in the pre-training stage or in the fine-tuning stage, so that they suffer from discrepancy between the two stages. Such a problem would lead to the necessity of having human-annotated syntactic information, which limits the application of existing methods to broader scenarios. To address this, we present a model that utilizes the syntax of text in both pre-training and fine-tuning stages. Our model is based on Transformer with a syntax-aware attention layer that considers the dependency tree of the text. We further introduce a new pre-training task of predicting the syntactic distance among tokens in the dependency tree. We evaluate the model on three downstream tasks, including relation classification, entity typing, and question answering. Results show that our model achieves state-of-the-art performance on six public benchmark datasets. We have two major findings. First, we demonstrate that infusing automatically produced syntax of text improves pre-trained models. 
Second, global syntactic distances among tokens bring larger performance gains compared to local head relations between contiguous tokens.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to Yeyun Gong, Ruize Wang ","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"aloraini-etal-2020-qmul","url":"https:\/\/aclanthology.org\/2020.wanlp-1.31","title":"The QMUL\/HRBDT contribution to the NADI Arabic Dialect Identification Shared Task","abstract":"We present the Arabic dialect identification system that we used for the country-level subtask of the NADI challenge. Our model consists of three components: BiLSTM-CNN, character-level TF-IDF, and topic modeling features. We represent each tweet using these features and feed them into a deep neural network. We then add an effective heuristic that improves the overall performance. We achieved an F1-Macro score of 20.77% and an accuracy of 34.32% on the test set. The model was also evaluated on the Arabic Online Commentary dataset, achieving results better than the state-of-the-art.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research was in part supported by the UK Economic and Social Research Council (ESRC) through the Big Data Human Rights and Technology project (grant number ES\/M010236\/1).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"van-de-cruys-villada-moiron-2007-semantics","url":"https:\/\/aclanthology.org\/W07-1104","title":"Semantics-based Multiword Expression Extraction","abstract":"This paper describes a fully unsupervised and automated method for large-scale extraction of multiword expressions (MWEs) from large corpora. The method aims at capturing the non-compositionality of MWEs; the intuition is that a noun within a MWE cannot easily be replaced by a semantically similar noun. To implement this intuition, a noun clustering is automatically extracted (using distributional similarity measures), which gives us clusters of semantically related nouns. Next, a number of statistical measures, based on selectional preferences, is developed that formalize the intuition of non-compositionality. Our approach has been tested on Dutch, and automatically evaluated using Dutch lexical resources.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was carried out as part of the research program IRME STEVIN project. We would also like to thank Gertjan van Noord and the two anonymous reviewers for their helpful comments on an earlier version of this paper.","year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"wang-etal-2019-youmakeup","url":"https:\/\/aclanthology.org\/D19-1517","title":"YouMakeup: A Large-Scale Domain-Specific Multimodal Dataset for Fine-Grained Semantic Comprehension","abstract":"Multimodal semantic comprehension has attracted increasing research interests in recent years, such as visual question answering and caption generation.
However, due to data limitations, fine-grained semantic comprehension, which requires capturing semantic details of multimodal contents, has not been well investigated. In this work, we introduce \"YouMakeup\", a large-scale multimodal instructional video dataset to support fine-grained semantic comprehension research in a specific domain. YouMakeup contains 2,800 videos from YouTube, spanning more than 420 hours in total. Each video is annotated with a sequence of natural language descriptions for instructional steps, grounded in temporal video range and spatial facial areas. The annotated steps in a video involve subtle differences in actions, products and regions, which require fine-grained understanding and reasoning both temporally and spatially. In order to evaluate models' ability for fine-grained comprehension, we further propose two groups of tasks including generation tasks and visual question answering tasks from different aspects. We also establish a baseline of step caption generation for future comparison. The dataset will be publicly available at https:\/\/github.com\/AIM3-RUC\/YouMakeup to support research investigation in fine-grained semantic comprehension.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by National Natural Science Foundation of China (No. 61772535), Beijing Natural Science Foundation (No. 4192028), and National Key Research and Development Plan (No. 2016YFB1001202). We would like to thank our group member Jingjun Liang for his help in building the annotation website and all the annotators for their careful annotations.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"pettersson-etal-2013-normalisation","url":"https:\/\/aclanthology.org\/W13-5617","title":"Normalisation of Historical Text Using Context-Sensitive Weighted Levenshtein Distance and Compound Splitting","abstract":"Natural language processing for historical text imposes a variety of challenges, such as dealing with a high degree of spelling variation. Furthermore, there is often not enough linguistically annotated data available for training part-of-speech taggers and other tools aimed at handling this specific kind of text. In this paper we present a Levenshtein-based approach to normalisation of historical text to a modern spelling. This enables us to apply standard NLP tools trained on contemporary corpora on the normalised version of the historical input text. In its basic version, no annotated historical data is needed, since the only data used for the Levenshtein comparisons are a contemporary dictionary or corpus. In addition, a (small) corpus of manually normalised historical text can optionally be included to learn normalisation for frequent words and weights for edit operations in a supervised fashion, which improves precision. We show that this method is successful both in terms of normalisation accuracy, and by the performance of a standard modern tagger applied to the historical text. We also compare our method to a previously implemented approach using a set of handwritten normalisation rules, and we see that the Levenshtein-based approach clearly outperforms the hand-crafted rules.
Furthermore, the experiments were carried out on Swedish data with promising results, and we believe that our method could be successfully applied to analyse historical text for other languages, including those with fewer resources.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"farajian-etal-2017-multi","url":"https:\/\/aclanthology.org\/W17-4713","title":"Multi-Domain Neural Machine Translation through Unsupervised Adaptation","abstract":"We investigate the application of Neural Machine Translation (NMT) under the following three conditions posed by real-world application scenarios. First, we operate with an input stream of sentences coming from many different domains and with no predefined order. Second, the sentences are presented without domain information. Third, the input stream should be processed by a single generic NMT model. To tackle the weaknesses of current NMT technology in this unsupervised multi-domain setting, we explore an efficient instance-based adaptation method that, by exploiting the similarity between the training instances and each test sentence, dynamically sets the hyperparameters of the learning algorithm and updates the generic model on-the-fly. The results of our experiments with multi-domain data show that local adaptation outperforms not only the original generic NMT system, but also a strong phrase-based system and even single-domain NMT models specifically optimized on each domain and applicable only by violating two of our aforementioned assumptions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been partially supported by the EC-funded H2020 projects QT21 (grant no. 645452) and ModernMT (grant no. 645487).","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"lin-2008-stochastic","url":"https:\/\/aclanthology.org\/I08-4007","title":"Stochastic Dependency Parsing Based on A* Admissible Search","abstract":"Dependency parsing has gained attention in natural language understanding because the representation of dependency tree is simple, compact and direct such that robust partial understanding and task portability can be achieved more easily. However, many dependency parsers make hard decisions with local information while selecting among the next parse states. As a consequence, though the obtained dependency trees are good in some sense, the N-best output is not guaranteed to be globally optimal in general. In this paper, a stochastic dependency parsing scheme based on A* admissible search is formally presented. By well representing the parse state and appropriately designing the cost and heuristic functions, dependency parsing can be modeled as an A* search problem, and solved with a generic algorithm of state space search. When evaluated on the Chinese Tree Bank, this parser can obtain 85.99% dependency accuracy at 68.39% sentence accuracy, and 14.62% node ratio for dynamic heuristic.
This parser can output N-best dependency trees and easily integrate semantic processing into the search process.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"deoskar-etal-2011-learning","url":"https:\/\/aclanthology.org\/W11-2911","title":"Learning Structural Dependencies of Words in the Zipfian Tail","abstract":"Using semi-supervised EM, we learn fine-grained but sparse lexical parameters of a generative parsing model (a PCFG) initially estimated over the Penn Treebank. Our lexical parameters employ supertags, which encode complex structural information at the pre-terminal level, and are particularly sparse in labeled data; our goal is to learn these for words that are unseen or rare in the labeled data. In order to guide estimation from unlabeled data, we incorporate both structural and lexical priors from the labeled data. We get a large error reduction in parsing ambiguous structures associated with unseen verbs, the most important case of learning lexico-structural dependencies. We also obtain a statistically significant improvement in labeled bracketing score of the treebank PCFG, the first successful improvement via semi-supervised EM of a generative structured model already trained over large labeled data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Alexandra Birch, Mark Steedman, and three anonymous reviewers for detailed comments and suggestions. This research was supported by the VIDI grant 639.022.604 from The Netherlands Organisation for Scientific Research (NWO). The first author was further supported by the ERC Advanced Fellowship 249520 GRAMPLUS.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"basili-etal-1992-computational","url":"https:\/\/aclanthology.org\/A92-1013","title":"Computational Lexicons: the Neat Examples and the Odd Exemplars","abstract":"When implementing computational lexicons it is important to keep in mind the texts that an NLP system must deal with. Words relate to each other in many different, often queer, ways: this information is rarely found in dictionaries, and it is quite hard to invent a priori, despite the imagination that linguists exhibit at inventing esoteric examples. In this paper we present the results of an experiment in learning from corpora the frequent selectional restrictions holding between content words. The method is based on the analysis of word associations augmented with syntactic markers and semantic tags. Word pairs are extracted by a morphosyntactic analyzer and clustered according to their semantic tags. A statistical measure is applied to the data to evaluate the significance of a detected relation.
Clustered association data render the study of word associations more interesting in several respects: data are more reliable even for smaller corpora, easier to interpret, and have many practical applications in NLP.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"yeh-lee-1992-lexicon","url":"https:\/\/aclanthology.org\/O92-1006","title":"A Lexicon-Driven Analysis Of Chinese Serial Verb Constructions","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"lee-1995-unified","url":"https:\/\/aclanthology.org\/Y95-1037","title":"A Unified Account of Polarity Phenomena","abstract":"This paper argues, in an attempt at a unified account of negative polarity and free choice phenomena expressed by amu\/any or wh-indefinites in Korean, English, Chinese, and Japanese, that the notion of concession by arbitrary or disjunctive choice (based on indefiniteness) is crucial. With this central notion all the apparently diverse polarity-related phenomena can be explained consistently, not just described in terms of distribution. With strong negatives and affective licensors, their negative force is so substantial that concessive force need not be reinforced and the licensed NPIs reveal existential force. With free choice and generic-like items, licensed by modals, weakly negative in their nature of uncertainty\/irrealis, concessive force is reinforced and emphasized and the whole category denoted by the given Noun is reached in the process of concession by arbitrary choice of its members on quantificational scale, giving the impression of universal force. The logical consequences of monotone decreasingness are transparent with strong negatives but less so with weaker ones.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1995,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"radev-2000-common","url":"https:\/\/aclanthology.org\/W00-1009","title":"A Common Theory of Information Fusion from Multiple Text Sources Step One: Cross-Document Structure","abstract":"We introduce CST (cross-document structure theory), a paradigm for multidocument analysis. CST takes into account the rhetorical structure of clusters of related textual documents. We present a taxonomy of cross-document relationships.
We argue that CST can be the basis for multidocument summarization guided by user preferences for summary length, information provenance, cross-source agreement, and chronological ordering of facts.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"velupillai-2014-temporal","url":"https:\/\/aclanthology.org\/W14-3413","title":"Temporal Expressions in Swedish Medical Text -- A Pilot Study","abstract":"One of the most important features of health care is to be able to follow a patient's progress over time and identify events in a temporal order. We describe initial steps in creating resources for automatic temporal reasoning of Swedish medical text. As a first step, we focus on the identification of temporal expressions by exploiting existing resources and systems available for English. We adapt the HeidelTime system and manually evaluate its performance on a small subset of Swedish intensive care unit documents. On this subset, the adapted version of HeidelTime achieves a precision of 92% and a recall of 66%. We also extract the most frequent temporal expressions from a separate, larger subset, and note that most expressions concern parts of days or specific times. We intend to further develop resources for temporal reasoning of Swedish medical text by creating a gold standard corpus also annotated with events and temporal links, in addition to temporal expressions and their normalised values.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"The author wishes to thank the anonymous reviewers for invaluable comments on this manuscript. Thanks also to Danielle Mowery and Dr. Wendy Chapman for all their support. This work was partially funded by Swedish Research Council (350-2012-6658) and Swedish Fulbright Commission.","year":2014,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"herzog-1969-computational","url":"https:\/\/aclanthology.org\/C69-6215","title":"Computational Studies in Terminology","abstract":"(Abstract of a Paper to be Presented at the 1969 International Congress on Computational Linguistics, S\u00e5nga-S\u00e4by, Sweden) Terminology, as a field of applied linguistics, is gaining increasing importance, since in recent years striking new developments of technology and the sciences have taken place. Terminologists have their own international congresses; linguists and standard associations try to build up and control the specific vocabularies of all different fields, in order to have them compiled and printed in up-to-date dictionaries. Industry also shows remarkable interest in this work, because those great international companies heavily depend on the means of a fixed and standardized vocabulary in order to achieve the necessary communication (to go along with its products), either by publication or by translation.\nFor various reasons, the task of documenting and controlling the growth and structure of terminological vocabularies cannot satisfactorily be accomplished without the application of computers. Insight into the structure of terminologies has been gained by functional, computer-prepared statistics of vocabularies and validations of texts.
Linguists, for their part, have programmed computers in order to isolate relevant lexical items from terminological texts, as well as to determine the various meanings and shades of meaning of specific terms, by means of special procedures.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1969,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"bommadi-etal-2021-automatic","url":"https:\/\/aclanthology.org\/2021.dialdoc-1.4","title":"Automatic Learning Assistant in Telugu","abstract":"This paper presents a learning assistant that tests one's knowledge and gives feedback that helps a person learn at a faster pace. A learning assistant (based on an automated question generation) has extensive uses in education, information websites, self-assessment, FAQs, testing ML agents, research, etc. Multiple researchers and companies have worked on Virtual Assistance, but mainly in English. We built our learning assistant for the Telugu language to help with teaching in the mother tongue, which is the most efficient way of learning. Our system is built primarily based on Question Generation in Telugu.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"soler-wanner-2016-semi","url":"https:\/\/aclanthology.org\/L16-1204","title":"A Semi-Supervised Approach for Gender Identification","abstract":"In most of the research studies on Author Profiling, large quantities of correctly labeled data are used to train the models. However, this does not reflect the reality in forensic scenarios: in practical linguistic forensic investigations, the resources that are available to profile the author of a text are usually scarce. To pay tribute to this fact, we implemented a Semi-Supervised Learning variant of the k nearest neighbors algorithm that uses small sets of labeled data and a larger amount of unlabeled data to classify the authors of texts by gender (man vs woman). We describe the enriched KNN algorithm and show that the use of unlabeled instances improves the accuracy of our gender identification model. We also present a feature set that facilitates the use of a very small number of instances, reaching accuracies higher than 70% with only 113 instances to train the model.
It is also shown that the algorithm performs equally well using publicly available data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The presentation of this work was partially supported by the ICT PhD program of Universitat Pompeu Fabra through a travel grant.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"kokkinakis-thurin-2007-identification","url":"https:\/\/aclanthology.org\/W07-2452","title":"Identification of Entity References in Hospital Discharge Letters","abstract":"In the era of the Electronic Health Record, the release of medical narrative textual data for research, for health care statistics, for monitoring of new diagnostic tests and for tracking disease outbreak alerts is subject to tough restrictions imposed by various public authority bodies for the protection of (patient) privacy. In this paper we present a system for automatic identification of named entities in Swedish clinical free text, in the form of discharge letters, by applying generic named entity recognition technology with minor adaptations.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"This work has been partially supported by the \"Semantic Interoperability and Data Mining in Biomedicine\" -NoE, under EU's Framework 6.","year":2007,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"bernardy-chatzikyriakidis-2021-applied","url":"https:\/\/aclanthology.org\/2021.iwcs-1.2","title":"Applied Temporal Analysis: A Complete Run of the FraCaS Test Suite","abstract":"In this paper, we propose an implementation of temporal semantics that translates syntax trees to logical formulas, suitable for consumption by the Coq proof assistant. The analysis supports a wide range of phenomena including: temporal references, temporal adverbs, aspectual classes and progressives. The new semantics are built on top of a previous system handling all sections of the FraCaS test suite except the temporal reference section, and we obtain an accuracy of 81 percent overall and 73 percent for the problems explicitly marked as related to temporal reference. To the best of our knowledge, this is the best performance of a logical system on the whole of the FraCaS.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research reported in this paper was supported by grant 2014-39 from the Swedish Research Council, which funds the Centre for Linguistic Theory and Studies in Probability (CLASP) in the Department of Philosophy, Linguistics, and Theory of Science at the University of Gothenburg. We are grateful to our colleagues in CLASP for helpful discussion of some of the ideas presented here.
We also thank anonymous reviewers for their useful comments on an earlier draft of the paper.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"dwi-prasetyo-etal-2015-impact","url":"https:\/\/aclanthology.org\/W15-2607","title":"On the Impact of Twitter-based Health Campaigns: A Cross-Country Analysis of Movember","abstract":"Health campaigns that aim to raise awareness and subsequently raise funds for research and treatment are commonplace. While many local campaigns exist, very few attract the attention of a global audience. One of those global campaigns is Movember, an annual campaign during the month of November, that is directed at men's health with special foci on cancer & mental health. Health campaigns routinely use social media portals to capture people's attention. Recently, researchers began to consider to what extent social media is effective in raising the awareness of health campaigns. In this paper we expand on those works by conducting an investigation across four different countries, while not only restricting ourselves to the impact on awareness but also on fund-raising. To that end, we analyze the 2013 Movember Twitter campaigns in Canada, Australia, the United Kingdom and the United States.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"This research was funded in part by the 3TU Federation and the Dutch national projects COMMIT and FACT. We are grateful to Twitter and Movember for providing the data.","year":2015,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"kim-etal-2020-multi","url":"https:\/\/aclanthology.org\/2020.coling-main.153","title":"Multi-Task Learning for Knowledge Graph Completion with Pre-trained Language Models","abstract":"As research on utilizing human knowledge in natural language processing has attracted considerable attention in recent years, knowledge graph (KG) completion has come into the spotlight. Recently, a new knowledge graph completion method using a pre-trained language model, such as KG-BERT, was presented and showed high performance. However, its scores in ranking metrics such as Hits@k are still behind state-of-the-art models. We claim that there are two main reasons: 1) failure in sufficiently learning relational information in knowledge graphs, and 2) difficulty in picking out the correct answer from lexically similar candidates. In this paper, we propose an effective multi-task learning method to overcome the limitations of previous works. By combining relation prediction and relevance ranking tasks with our target link prediction, the proposed model can learn more relational properties in KGs and properly perform even when lexical similarity occurs. 
Experimental results show that we not only largely improve the ranking performance compared to KG-BERT but also achieve state-of-the-art performance in Mean Rank and Hits@10 on the WN18RR dataset.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"ahrendt-demberg-2016-improving","url":"https:\/\/aclanthology.org\/N16-1067","title":"Improving event prediction by representing script participants","abstract":"Automatically learning script knowledge has proved difficult, with previous work failing to beat, or only barely beating, a most-frequent baseline. Script knowledge is a type of world knowledge which can, however, be useful for various tasks in NLP and psycholinguistic modelling. We here propose a model that includes participant information (i.e., knowledge about which participants are relevant for a script) and show, on the Dinners from Hell corpus as well as the InScript corpus, that this knowledge helps us to significantly improve prediction performance on the narrative cloze task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was funded by the German Research Foundation (DFG) as part of SFB 1102 'Information Density and Linguistic Encoding' and the Cluster of Excellence 'Multimodal Computing and Interaction' (EXC 284).","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"isonuma-etal-2020-tree","url":"https:\/\/aclanthology.org\/2020.acl-main.73","title":"Tree-Structured Neural Topic Model","abstract":"This paper presents a tree-structured neural topic model, which has a topic distribution over a tree with an infinite number of branches. Our model parameterizes an unbounded ancestral and fraternal topic distribution by applying doubly-recurrent neural networks. With the help of autoencoding variational Bayes, our model improves data scalability and achieves competitive performance when inducing latent topics and tree structures, as compared to a prior tree-structured topic model (Blei et al., 2010). This work extends the tree-structured topic model such that it can be incorporated with neural models for downstream tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank anonymous reviewers for their valuable feedback. This work was supported by JST ACT-X Grant Number JPMJAX1904 and CREST Grant Number JPMJCR1513, Japan.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"rich-etal-2018-modeling","url":"https:\/\/aclanthology.org\/W18-0526","title":"Modeling Second-Language Learning from a Psychological Perspective","abstract":"Psychological research on learning and memory has tended to emphasize small-scale laboratory studies. However, large datasets of people using educational software provide opportunities to explore these issues from a new perspective. In this paper we describe our approach to the Duolingo Second Language Acquisition Modeling (SLAM) competition which was run in early 2018.
We used a well-known class of algorithms (gradient boosted decision trees), with features partially informed by theories from the psychological literature. After detailing our modeling approach and a number of supplementary simulations, we reflect on the degree to which psychological theory aided the model, and the potential for cognitive science and predictive modeling competitions to gain from each other.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":"This research was supported by NSF grant DRL-1631436 and BCS-1255538, and the John S. Mc-Donnell Foundation Scholar Award to TMG. We thank Shannon Tubridy and Tal Yarkoni for helpful suggestions in the development of this work.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"stoyanova-etal-2013-wordnet","url":"https:\/\/aclanthology.org\/W13-2417","title":"Wordnet-Based Cross-Language Identification of Semantic Relations","abstract":"We propose a method for cross-language identification of semantic relations based on word similarity measurement and morphosemantic relations in WordNet. We transfer these relations to pairs of derivationally unrelated words and train a model for automatic classification of new instances of (morpho)semantic relations in context based on the existing ones and the general semantic classes of collocated verb and noun senses. Our experiments are based on Bulgarian-English parallel and comparable texts but the method is to a great extent language-independent and particularly suited to less-resourced languages, since it does not need parsed or semantically annotated data. The application of the method leads to an increase in the number of discovered semantic relations by 58.35% and performs relatively consistently, with a small decrease in precision between the baseline (based on morphosemantic relations identified in wordnet)-0.774, and the extended method (based on the data obtained through machine learning)-0.721.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"jin-de-marneffe-2015-overall","url":"https:\/\/aclanthology.org\/D15-1132","title":"The Overall Markedness of Discourse Relations","abstract":"Discourse relations can be categorized as continuous or discontinuous in the hypothesis of continuity (Murray, 1997), with continuous relations expressing normal succession of events in discourse such as temporal, spatial or causal. Asr and Demberg (2013) propose a markedness measure to test the prediction that discontinuous relations may have more unambiguous connectives, but restrict the markedness calculation to relations with explicit connectives only. 
This paper extends their measure to explicit and implicit relations and shows that results from this extension better fit the continuity hypothesis predictions both for the English Penn Discourse (Prasad et al., 2008) and the Chinese Discourse (Zhou and Xue, 2015) Treebanks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank William Schuler for productive discussions of the work presented here as well as our anonymous reviewers for their helpful comments.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"ma-etal-2019-essentia","url":"https:\/\/aclanthology.org\/D19-5307","title":"Essentia: Mining Domain-specific Paraphrases with Word-Alignment Graphs","abstract":"Paraphrases are important linguistic resources for a wide variety of NLP applications. Many techniques for automatic paraphrase mining from general corpora have been proposed. While these techniques are successful at discovering generic paraphrases, they often fail to identify domain-specific paraphrases (e.g., \"staff\", \"concierge\" in the hospitality domain). This is because current techniques are often based on statistical methods, while domain-specific corpora are too small to fit statistical methods. In this paper, we present an unsupervised graph-based technique to mine paraphrases from a small set of sentences that roughly share the same topic or intent. Our system, ESSENTIA, relies on word-alignment techniques to create a word-alignment graph that merges and organizes tokens from input sentences. The resulting graph is then used to generate candidate paraphrases. We demonstrate that our system obtains high quality paraphrases, as evaluated by crowd workers. We further show that the majority of the identified paraphrases are domain-specific and thus complement existing paraphrase databases.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"tanvir-etal-2021-estbert","url":"https:\/\/aclanthology.org\/2021.nodalida-main.2","title":"EstBERT: A Pretrained Language-Specific BERT for Estonian","abstract":"This paper presents EstBERT, a large pretrained transformer-based language-specific BERT model for Estonian. Recent work has evaluated multilingual BERT models on Estonian tasks and found them to outperform the baselines. Still, based on existing studies on other languages, a language-specific BERT model is expected to improve over the multilingual ones. We first describe the EstBERT pretraining process and then present the models' results based on the finetuned EstBERT for multiple NLP tasks, including POS and morphological tagging, dependency parsing, named entity recognition and text classification. The evaluation results show that the models based on EstBERT outperform multilingual BERT models on five tasks out of seven, providing further evidence for the view that training language-specific BERT models is still useful, even when multilingual models are available.
1","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"kallmeyer-yoon-2004-tree","url":"https:\/\/aclanthology.org\/2004.jeptalnrecital-long.24","title":"Tree-local MCTAG with Shared Nodes: An Analysis ofWord Order Variation in German and Korean","abstract":"Lexicalized Tree Adjoining Grammars (LTAG, (Joshi & Schabes, 1997) ) is a tree-rewriting formalism. An LTAG consists of a finite set of trees (elementary trees) associated with lexical items. Larger trees are derived by substitution (replacing a leaf with a new tree) and adjunction (replacing an internal node with a new tree). In case of an adjunction, the new elementary tree has a special leaf node, the foot node (marked with an asterisk). When adjoining such a tree (a so-called auxiliary tree) to a node \u00b5, in the resulting tree, the subtree with root node \u00b5 from the old tree is put below the foot node of the new auxiliary tree. Non-auxiliary elementary trees are called initial trees. LTAG elementary trees represent extended projections of lexical items and encapsulate all syntactic arguments of the lexical anchor. They are minimal in the sense that only the arguments of the anchor are encapsulated, all recursion is factored away.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"imamura-sumita-2020-transformer","url":"https:\/\/aclanthology.org\/2020.wat-1.3","title":"Transformer-based Double-token Bidirectional Autoregressive Decoding in Neural Machine Translation","abstract":"This paper presents a simple method that extends a standard Transformer-based autoregressive decoder, to speed up decoding. The proposed method generates a token from the head and tail of a sentence (two tokens in total) in each step. By simultaneously generating multiple tokens that rarely depend on each other, the decoding speed is increased while the degradation in translation quality is minimized. In our experiments, the proposed method increased the translation speed by around 113%-155% in comparison with a standard autoregressive decoder, while degrading the BLEU scores by no more than 1.03. It was faster than an iterative nonautoregressive decoder in many conditions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"brugman-etal-2004-collaborative","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/473.pdf","title":"Collaborative Annotation of Sign Language Data with Peer-to-Peer Technology","abstract":"Collaboration on annotation projects is in practice mostly done by people sharing the same room. However, several models for online cooperative annotation over the internet are possible. 
This paper explores and evaluates these, and reports on the use of peer-to-peer technology to extend a multimedia annotation tool (ELAN) with functions that support collaborative annotation.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Reduced Inequalities","goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":1,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"bhat-etal-2017-joining","url":"https:\/\/aclanthology.org\/E17-2052","title":"Joining Hands: Exploiting Monolingual Treebanks for Parsing of Code-mixing Data","abstract":"In this paper, we propose efficient and less resource-intensive strategies for parsing of code-mixed data. These strategies are not constrained by in-domain annotations, rather they leverage pre-existing monolingual annotated resources for training. We show that these methods can produce significantly better results as compared to an informed baseline. Besides, we also present a data set of 450 Hindi and English code-mixed tweets of Hindi multilingual speakers for evaluation. The data set is manually annotated with Universal Dependencies.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"chen-huang-2009-step","url":"https:\/\/aclanthology.org\/Y09-1001","title":"A Step toward Compositional Semantics: E-HowNet a Lexical Semantic Representation System","abstract":"The purpose of designing the lexical semantic representation model E-HowNet is for natural language understanding. E-HowNet is a frame-based entity-relation model extended from HowNet to define lexical senses and achieve compositional semantics. The following are the major extension features of E-HowNet to achieve the goal. a) Word senses (concepts) are defined by either primitives or any well-defined concepts and conceptual relations; b) A uniform sense representation model for content words, function words and phrases; c) Semantic relations are explicitly expressed; and d) Near-canonical representations for lexical senses and phrasal senses. We demonstrate the above features and show how coarse-grained semantic composition can be carried out under the framework of E-HowNet. Possible applications of E-HowNet are also suggested. We hope that the ultimate goal of natural language understanding will be accomplished after future improvement and evolution of the current E-HowNet.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"sasano-korhonen-2020-investigating","url":"https:\/\/aclanthology.org\/2020.acl-main.337","title":"Investigating Word-Class Distributions in Word Vector Spaces","abstract":"This paper presents an investigation on the distribution of word vectors belonging to a certain word class in a pre-trained word vector space. To this end, we made several assumptions about the distribution, modeled the distribution accordingly, and validated each assumption by comparing the goodness of each model.
Specifically, we considered two types of word classes-the semantic class of direct objects of a verb and the semantic class in a thesaurus-and tried to build models that properly estimate how likely it is that a word in the vector space is a member of a given word class. Our results on selectional preference and WordNet datasets show that the centroid-based model will fail to achieve good enough performance, the geometry of the distribution and the existence of subgroups will have limited impact, and also the negative instances need to be considered for adequate modeling of the distribution. We further investigated the relationship between the scores calculated by each model and the degree of membership and found that discriminative learning-based models are best in finding the boundaries of a class, while models based on the offset between positive and negative instances perform best in determining the degree of membership.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by JSPS KAKENHI Grant Number 16K16110 and 18H03286.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"ogiso-etal-2012-unidic","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/906_Paper.pdf","title":"UniDic for Early Middle Japanese: a Dictionary for Morphological Analysis of Classical Japanese","abstract":"In order to construct an annotated diachronic corpus of Japanese, we propose to create a new dictionary for morphological analysis of Early Middle Japanese (Classical Japanese) based on UniDic, a dictionary for Contemporary Japanese. Differences between the Early Middle Japanese and Contemporary Japanese, which prevent a na\u00efve adaptation of UniDic to Early Middle Japanese, are found at the levels of lexicon, morphology, grammar, orthography and pronunciation. In order to overcome these problems, we extended dictionary entries and created a training corpus of Early Middle Japanese to adapt UniDic for Contemporary Japanese to Early Middle Japanese. Experimental results show that the proposed UniDic-EMJ, a new dictionary for Early Middle Japanese, achieves as high accuracy (97%) as needed for the linguistic research on lexicon and grammar in Japanese classical text analysis.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is partially supported by the collaborative research project \"Study of the history of the Japanese language using statistics and machine-learning\" carried out at the National Institute for Japanese Language and Linguistics.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"mcclelland-1987-parallel","url":"https:\/\/aclanthology.org\/T87-1016","title":"Parallel Distributed Processing and Role Assignment Constraints","abstract":"My work in natural language processing is based on the premise that it is not in general possible to recover the underlying representations of sentences without considering semantic constraints on their possible case structures. It seems clear that people use these constraints to do several things: To assign constituents to the proper case roles and attach them to the proper other constituents.
To assign the appropriate reading to a word or larger constituent when it occurs in context. To assign default values to missing constituents. To instantiate the concepts referenced by the words in a sentence so that they fit the context. I believe that parallel-distributed processing models (i.e., connectionist models which make use of distributed representations) provide the mechanisms that are needed for these tasks. Argument attachments and role assignments seem to require a consideration of the relative merits of competing possibilities (Marcus, 1980; Bates and MacWhinney, 1987; MacWhinney, 1987), as does lexical disambiguation. Connectionist models provide a very natural substrate for these kinds of competition processes (Cottrell, 1985; Waltz and Pollack, 1985).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1987,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"lucy-bamman-2021-gender","url":"https:\/\/aclanthology.org\/2021.nuse-1.5","title":"Gender and Representation Bias in GPT-3 Generated Stories","abstract":"Using topic modeling and lexicon-based word similarity, we find that stories generated by GPT-3 exhibit many known gender stereotypes. Generated stories depict different topics and descriptions depending on GPT-3's perceived gender of the character in a prompt, with feminine characters more likely to be associated with family and appearance, and described as less powerful than masculine characters, even when associated with high power verbs in a prompt. Our study raises questions on how one can avoid unintended social biases when using large language models for storytelling.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Gender Equality","goal2":"Reduced Inequalities","goal3":null,"acknowledgments":"We thank Nicholas Tomlin, Julia Mendelsohn, and Emma Lurie for their helpful feedback on earlier versions of this paper. This work was supported by funding from the National Science Foundation (Graduate Research Fellowship DGE-1752814 and grant IIS-1942591).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":1,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":1,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"zhang-etal-2016-learning","url":"https:\/\/aclanthology.org\/P16-1169","title":"Learning Concept Taxonomies from Multi-modal Data","abstract":"We study the problem of automatically building hypernym taxonomies from textual and visual data. Previous works in taxonomy induction generally ignore the increasingly prominent visual data, which encode important perceptual semantics. Instead, we propose a probabilistic model for taxonomy induction by jointly leveraging text and images. To avoid hand-crafted feature engineering, we design end-to-end features based on distributed representations of images and words. The model is discriminatively trained given a small set of existing ontologies and is capable of building full taxonomies from scratch for a collection of unseen conceptual label items with associated images. We evaluate our model and features on the WordNet hierarchies, where our system outperforms previous approaches by a large gap.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank anonymous reviewers for their valuable feedback.
We would also like to thank Mohit Bansal for helpful suggestions. We thank NVIDIA for GPU donations. The work is supported by NSF Big Data IIS1447676.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"bergsma-etal-2020-creating","url":"https:\/\/aclanthology.org\/2020.gamnlp-1.1","title":"Creating a Sentiment Lexicon with Game-Specific Words for Analyzing NPC Dialogue in The Elder Scrolls V: Skyrim","abstract":"A weak point of rule-based sentiment analysis systems is that the underlying sentiment lexicons are often not adapted to the domain of the text we want to analyze. We created a game-specific sentiment lexicon for video game Skyrim based on the E-ANEW word list and a dataset of Skyrim's in-game documents. We calculated sentiment ratings for NPC dialogue using both our lexicon and E-ANEW and compared the resulting sentiment ratings to those of human raters. Both lexicons perform comparably well on our evaluation dialogues, but the game-specific extension performs slightly better on the dominance dimension for dialogue segments and the arousal dimension for full dialogues. To our knowledge, this is the first time that a sentiment analysis lexicon has been adapted to the video game domain.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is partially supported by the Netherlands Organisation for Scientific Research (NWO) via the DATA2GAME project (project number 055.16.114).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"hoffman-1993-formal","url":"https:\/\/aclanthology.org\/P93-1045","title":"The Formal Consequences of Using Variables in CCG Categories","abstract":"Combinatory Categorial Grammars, CCGs, (Steedman 1985) have been shown by Weir and Joshi (1988) to generate the same class of languages as Tree-Adjoining Grammars (TAG), Head Grammars (HG), and Linear Indexed Grammars (LIG). In this paper, I will discuss the effect of using variables in lexical category assignments in CCGs. It will be shown that using variables in lexical categories can increase the weak generative capacity of CCGs beyond the class of grammars listed above.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1993,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"hahn-wermter-2004-pumping","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/641.pdf","title":"Pumping Documents Through a Domain and Genre Classification Pipeline","abstract":"We propose a simple, yet effective, pipeline architecture for document classification. The task we intend to solve is to classify large and content-wise heterogeneous document streams on a layered nine-category system, which distinguishes medical from non-medical texts and sorts medical texts into various subgenres.
While the document classification problem is often dealt with using computationally powerful and, hence, costly classifiers (e.g., Bayesian ones), we have gathered empirical evidence that a much simpler approach based on n-gram statistics achieves a comparable level of classification performance.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Acknowledgements. This work was supported by Deutsche Forschungsgemeinschaft (DFG), grant KL 640\/5-1, and by the Faculty of Medicine at Freiburg University, grant KLA231\/03.","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"litvinova-etal-2017-deception","url":"https:\/\/aclanthology.org\/E17-4005","title":"Deception detection in Russian texts","abstract":"Psychology studies show that people detect deception no more accurately than by chance, and it is therefore important to develop tools to enable the detection of deception. The problem of deception detection has been studied for a significant amount of time; however, in the last 10-15 years we have seen methods of computational linguistics being employed with greater frequency. Texts are processed using different NLP tools and then classified as deceptive\/truthful using modern machine learning methods. While most of this research has been performed for the English language, Slavic languages have never been the focus of deception detection studies. This paper deals with deception detection in Russian narratives related to the theme \"How I Spent Yesterday\". It employs a specially designed corpus of truthful and deceptive texts on the same topic from each respondent, such that N = 113. The texts were processed using Linguistic Inquiry and Word Count software that is used in most studies of text-based deception detection. Of the parameters considered, a majority were related to Part-of-Speech, lexical-semantic group, and other frequencies. Using standard statistical analysis, statistically significant differences between false and truthful Russian texts were uncovered. On the basis of the chosen parameters, our classifier reached an accuracy of 68.3%.
The accuracy of the model was found to depend on the author's gender.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"This research is supported by a grant from the Russian Foundation for Basic Research, N 15-34-01221 Lie Detection in a Written Text: A Corpus Study.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} +{"ID":"minow-1969-metaprint","url":"https:\/\/aclanthology.org\/C69-7602","title":"Metaprint 3 (Metaprint 1) Responses to ``Computerized Linguistics: Half a Commentary''","abstract":"Responses to \"COMPUTERIZED LINGUISTICS: HALF A COMMENTARY\" -Martin Minow -Rather than attempt a summary of the replies to \"metaprint\" 1 included here, I feel it would be more useful for me to discuss one of my programs.\nThe program generates sentences from a generative (context-sensitive, transformational) grammar.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1969,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"berglund-etal-2006-machine","url":"https:\/\/aclanthology.org\/E06-1049","title":"A Machine Learning Approach to Extract Temporal Information from Texts in Swedish and Generate Animated 3D Scenes","abstract":"Carsim is a program that automatically converts narratives into 3D scenes. Carsim considers authentic texts describing road accidents, generally collected from web sites of Swedish newspapers or transcribed from handwritten accounts by victims of accidents.
One of the program's key features is that it animates the generated scene to visualize events.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"zetzsche-2014-invited","url":"https:\/\/aclanthology.org\/2014.eamt-1.1","title":"Invited Talk: Encountering the Unknown, Part 2","abstract":"The tasks that the translators were \"charged\" with were to look back at previous responses to technology, put into perspective what MT is in relation to other technologies, differentiate between different forms of MT, employ MT where appropriate, and embrace their whole identity.\nThe MT community was asked to acknowledge the origin of data and linguistic expertise it uses, communicate in terms that are down to earth and truthful, engage the translation community in meaningful ways, listen to the translation community, and embrace their whole identity.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"obrien-etal-2009-postediting","url":"https:\/\/aclanthology.org\/2009.mtsummit-tutorials.5","title":"Postediting Machine Translation Output Guidelines","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"dalrymple-etal-1990-modeling","url":"https:\/\/aclanthology.org\/C90-2013","title":"Modeling syntactic constraints on anaphoric binding","abstract":"Syntactic constraints on antecedent-anaphor relations can be stated within the theory of Lexical Functional Grammar (henceforth LFG) through the use of functional uncertainty (Kaplan and Maxwell 1988; Halvorsen and Kaplan 1988; Kaplan and Zaenen 1989). In the following, we summarize the general characteristics of syntactic constraints on anaphoric binding. Next, we describe a variation of functional uncertainty called inside-out functional uncertainty and show how it can be used to model anaphoric binding. Finally, we discuss some binding constraints claimed to hold in natural language to exemplify the mechanism. We limit our attention throughout to coreference possibilities between definite antecedents and anaphoric elements and ignore interactions with quantifiers. We also limit our discussion to intrasentential relations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1990,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"ogrodniczuk-lenart-2012-web","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/648_Paper.pdf","title":"Web Service integration platform for Polish linguistic resources","abstract":"This paper presents a robust linguistic Web service framework for Polish, combining several mature offline linguistic tools in a common online platform.
The toolset comprises a paragraph-, sentence- and token-level segmenter, morphological analyser, disambiguating tagger, shallow and deep parser, named entity recognizer and coreference resolver. Uniform access to processing results is provided by means of a stand-off packaged adaptation of National Corpus of Polish TEI P5-based representation and interchange format. A concept of asynchronous handling of requests sent to the implemented Web service (Multiservice) is introduced to enable processing large amounts of text by setting up language processing chains of desired complexity. Apart from a dedicated API, a simple Web interface to the service is presented, allowing users to compose a chain of annotation services, run it and periodically check for execution results, made available as plain XML or in a simple visualization. Usage examples and results from performance and scalability tests are also included.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work reported here was carried out within the Common Language Resources and Technology Infrastructure (CLARIN) project co-funded by the European Commission under the Seventh Framework Programme -Capacities Specific Programme Research Infrastructures (Grant Agreement No 212230) .","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"novello-callaway-2003-porting","url":"https:\/\/aclanthology.org\/W03-2310","title":"Porting to an Italian Surface Realizer: A Case Study","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"makarov-clematide-2020-cluzh","url":"https:\/\/aclanthology.org\/2020.sigmorphon-1.19","title":"CLUZH at SIGMORPHON 2020 Shared Task on Multilingual Grapheme-to-Phoneme Conversion","abstract":"This paper describes the submission by the team from the Institute of Computational Linguistics, Zurich University, to the Multilingual Grapheme-to-Phoneme Conversion (G2P) Task of the SIGMORPHON 2020 challenge. The submission adapts our system from the 2018 edition of the SIGMORPHON shared task. Our system is a neural transducer that operates over explicit edit actions and is trained with imitation learning. It is well-suited for morphological string transduction partly because it exploits the fact that the input and output character alphabets overlap. The challenge posed by G2P has been to adapt the model and the training procedure to work with disjoint alphabets. We adapt the model to use substitution edits and train it with a weighted finitestate transducer acting as the expert policy. An ensemble of such models produces competitive results on G2P. Our submission ranks second out of 23 submissions by a total of nine teams.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the organizers for their great effort in these turbulent times. We thank Kyle Gorman for taking the time to help us with our Unicode normalization problem.
This work has been supported by the Swiss National Science Foundation under grant CR-SII5 173719.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"bergmanis-goldwater-2017-segmentation","url":"https:\/\/aclanthology.org\/E17-1032","title":"From Segmentation to Analyses: a Probabilistic Model for Unsupervised Morphology Induction","abstract":"A major motivation for unsupervised morphological analysis is to reduce the sparse data problem in under-resourced languages. Most previous work focuses on segmenting surface forms into their constituent morphs (e.g., taking: tak +ing), but surface form segmentation does not solve the sparse data problem as the analyses of take and taking are not connected to each other. We extend the MorphoChains system (Narasimhan et al., 2015) to provide morphological analyses that can abstract over spelling differences in functionally similar morphs. These analyses are not required to use all the orthographic material of a word (stopping: stop +ing), nor are they limited to only that material (acidified: acid +ify +ed). On average across six typologically varied languages our system has a similar or better F-score on EMMA (a measure of underlying morpheme accuracy) than three strong baselines; moreover, the total number of distinct morphemes identified by our system is on average 12.8% lower than for Morfessor (Virpioja et al., 2013), a state-of-the-art surface segmentation system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"boguraev-pustejovsky-1990-lexical","url":"https:\/\/aclanthology.org\/C90-2007","title":"Lexical Ambiguity and The Role of Knowledge Representation in Lexicon Design","abstract":"The traditional framework for ambiguity resolution employs only 'static' knowledge, expressed generally as selectional restrictions or domain specific constraints, and makes no use of any specific knowledge manipulation mechanisms apart from the simple ability to match valences of structurally related words. In contrast, this paper suggests how a theory of lexical semantics making use of a knowledge representation framework offers a richer, more expressive vocabulary for lexical information. In particular, by performing specialized inference over the ways in which aspects of knowledge structures of words in context can be composed, mutually compatible and contextually relevant lexical components of words and phrases are highlighted. In the view presented here, lexical ambiguity resolution is an integral part of the same procedure that creates the semantic interpretation of a sentence itself.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1990,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"toral-etal-2014-extrinsic","url":"https:\/\/aclanthology.org\/2014.eamt-1.45","title":"Extrinsic evaluation of web-crawlers in machine translation: a study on Croatian-English for the tourism domain","abstract":"We present an extrinsic evaluation of crawlers of parallel corpora from multilingual web sites in machine translation (MT).
Our case study is on Croatian to English translation in the tourism domain. Given two crawlers, we build phrase-based statistical MT systems on the datasets produced by each crawler using different settings. We also combine the best datasets produced by each crawler (union and intersection) to build additional MT systems. Finally we combine the best of the previous systems (union) with general-domain data. This last system outperforms all the previous systems built on crawled data as well as two baselines (a system built on general-domain data and a well known online MT system). * The research leading to these results has received funding from the European Union Seventh Framework Programme FP7\/2007-2013 under grant agreement PIAP-GA-2012-324414 (Abu-MaTran).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"virginie-etal-2014-database","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/741_Paper.pdf","title":"A Database of Full Body Virtual Interactions Annotated with Expressivity Scores","abstract":"Recent technologies enable the exploitation of full body expressions in applications such as interactive arts but are still limited in terms of dyadic subtle interaction patterns. Our project aims at full body expressive interactions between a user and an autonomous virtual agent. The currently available databases do not contain full body expressivity and interaction patterns via avatars. In this paper, we describe a protocol defined to collect a database to study expressive full-body dyadic interactions. We detail the coding scheme for manually annotating the collected videos. Reliability measures for global annotations of expressivity and interaction are also provided.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Part of the work described in this paper was funded by the Agence Nationale de la Recherche (ANR): project INGREDIBLE, by the French Image and Networks Cluster (http:\/\/www.images-et-reseaux.com\/en), and by the Cap Digital Cluster (http:\/\/www.capdigital.com\/en\/)","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"bella-etal-2020-major","url":"https:\/\/aclanthology.org\/2020.lrec-1.342","title":"A Major Wordnet for a Minority Language: Scottish Gaelic","abstract":"We present a new wordnet resource for Scottish Gaelic, a Celtic minority language spoken by about 60,000 speakers, most of whom live in Northwestern Scotland. The wordnet contains over 15 thousand word senses and was constructed by merging ten thousand new, high-quality translations, provided and validated by language experts, with an existing wordnet derived from Wiktionary. This new, considerably extended wordnet-currently among the 30 largest in the world-targets multiple communities: language speakers and learners; linguists; computer scientists solving problems related to natural language processing. 
By publishing it as a freely downloadable resource, we hope to contribute to the long-term preservation of Scottish Gaelic as a living language, both offline and on the Web.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was funded by the University of Edinburgh through the DReaM Group EPSRC Platform Grant EP\/N014758\/1, as well as by the University of Trento through the InteropEHRate project. InteropEHRate is funded by the European Union's Horizon2020 Research and Innovation programme under grant agreement number 826106.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"mcconnaughey-etal-2017-labeled","url":"https:\/\/aclanthology.org\/D17-1077","title":"The Labeled Segmentation of Printed Books","abstract":"We introduce the task of book structure labeling: segmenting and assigning a fixed category (such as TABLE OF CONTENTS, PREFACE, INDEX) to the document structure of printed books. We manually annotate the page-level structural categories for a large dataset totaling 294,816 pages in 1,055 books evenly sampled from 1750-1922, and present empirical results comparing the performance of several classes of models. The best-performing model, a bidirectional LSTM with rich features, achieves an overall accuracy of 95.8 and a class-balanced macro F-score of 71.4.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Many thanks to the anonymous reviewers and Hannah Alpert-Abrams and for their valuable feedback, and to the HathiTrust Research Center for their assistance in enabling this work. The research reported in this article was supported by a grant from the Digital Humanities at Berkeley initiative and resources provided by NVIDIA.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"andy-etal-2021-understanding","url":"https:\/\/aclanthology.org\/2021.louhi-1.3","title":"Understanding Social Support Expressed in a COVID-19 Online Forum","abstract":"In online forums focused on health and wellbeing, individuals tend to seek and give the following social support: emotional and informational support. Understanding the expressions of these social supports in an online COVID-19 forum is important for: (a) the forum and its members to provide the right type of support to individuals and (b) determining the long term effects of the COVID-19 pandemic on the well-being of the public, thereby informing interventions. In this work, we build four machine learning models to measure the extent of the following social supports expressed in each post in a COVID-19 online forum: (a) emotional support given (b) emotional support sought (c) informational support given, and (d) informational support sought. Using these models, we aim to: (i) determine if there is a correlation between the different social supports expressed in posts e.g. when members of the forum give emotional support in posts, do they also tend to give or seek informational support in the same post? (ii) determine how these social supports sought and given changes over time in published posts. 
We find that (i) there is a positive correlation between the informational support given in posts and the emotional support given and emotional support sought, respectively, in these posts and (ii) over time, users tended to seek more emotional support and give less emotional support.\nGlobally, millions of individuals have contracted COVID-19 and more than 2 million people have died from the pandemic as of January 2021 (https:\/\/coronavirus.jhu.edu\/map.html). Individuals are turning to online forums focused on discussions around COVID-19 to seek and give support. In online health and well-being forums, individuals tend to seek and give two forms of social support: emotional and informational support (Wang et al., 2012; Yang et al., 2017); where: (a) emotional support sought seeks understanding, affirmation and encouragement, (b) emotional support given includes providing encouragement, (c) informational support sought seeks advice or information, and (d) informational support given provides advice and information. Below are examples (rephrased) of posts that express these social supports in a COVID-19 related online forum:","label_nlp4sg":1,"task":[],"method":[],"goal1":"Good Health and Well-Being","goal2":"Peace, Justice and Strong Institutions","goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} +{"ID":"mamani-sanchez-etal-2010-exploiting","url":"https:\/\/aclanthology.org\/W10-3018","title":"Exploiting CCG Structures with Tree Kernels for Speculation Detection","abstract":"Our CoNLL-2010 speculative sentence detector disambiguates putative keywords based on the following considerations: a speculative keyword may be composed of one or more word tokens; a speculative sentence may have one or more speculative keywords; and if a sentence contains at least one real speculative keyword, it is deemed speculative. A tree kernel classifier is used to assess whether a potential speculative keyword conveys speculation. We exploit information implicit in tree structures. For prediction efficiency, only a segment of the whole tree around a speculation keyword is considered, along with morphological features inside the segment and information about the containing document. A maximum entropy classifier is used for sentences not covered by the tree kernel classifier. Experiments on the Wikipedia data set show that our system achieves 0.55 F-measure (in-domain).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is supported by the Trinity College Research Scholarship Program and the Science Foundation Ireland (Grant 07\/CE\/I1142) as part of the Centre for Next Generation Localisation (www.cngl.ie) at Trinity College of Dublin.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"gardent-etal-1989-efficient","url":"https:\/\/aclanthology.org\/P89-1034","title":"Efficient Parsing for French","abstract":"Parsing with categorial grammars often leads to problems such as proliferating lexical ambiguity, spurious parses and overgeneration. This paper presents a parser for French developed on a unification-based categorial grammar (FG) which avoids these problems.
This parser is a bottom-up chart parser augmented with a heuristic eliminating spurious parses. The unicity and completeness of parsing are proved.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1989,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"baum-etal-2010-disco","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/355_Paper.pdf","title":"DiSCo - A German Evaluation Corpus for Challenging Problems in the Broadcast Domain","abstract":"Typical broadcast material contains not only studio-recorded texts read by trained speakers, but also spontaneous and dialect speech, debates with cross-talk, voice-overs, and on-site reports with difficult acoustic environments. Standard approaches to speech and speaker recognition usually deteriorate under such conditions. This paper reports on the design, construction, and experimental analysis of DiSCo, a German corpus for the evaluation of speech and speaker recognition on challenging material from the broadcast domain. One of the key requirements for the design of this corpus was a good coverage of different types of serious programmes beyond clean speech and planned speech broadcast news. Corpus annotation encompasses manual segmentation, an orthographic transcription, and labelling with speech mode, dialect, and noise type. We indicate typical use cases for the corpus by reporting results from ASR, speech search, and speaker recognition on the new corpus, thereby obtaining insights into the difficulty of audio recognition on the various classes.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"sproat-1990-application","url":"https:\/\/aclanthology.org\/O90-1010","title":"An application of statistical optimization with dynamic programming to phonemic-input-to-character conversion for Chinese","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1990,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"makhija-etal-2020-hinglishnorm","url":"https:\/\/aclanthology.org\/2020.coling-industry.13","title":"hinglishNorm - A Corpus of Hindi-English Code Mixed Sentences for Text Normalization","abstract":"We present hinglishNorm, a human annotated corpus of Hindi-English code-mixed sentences for the text normalization task. Each sentence in the corpus is aligned to its corresponding human annotated normalized form. To the best of our knowledge, there is no corpus of Hindi-English code-mixed sentences for the text normalization task that is publicly available. Our work is the first attempt in this direction. The corpus contains 13494 segments annotated for text normalization. Further, we present baseline normalization results on this corpus. 
We obtain a Word Error Rate (WER) of 15.55, a BiLingual Evaluation Understudy (BLEU) score of 71.2, and a Metric for Evaluation of Translation with Explicit ORdering (METEOR) score of 0.50.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"akula-etal-2021-mind","url":"https:\/\/aclanthology.org\/2021.emnlp-main.516","title":"Mind the Context: The Impact of Contextualization in Neural Module Networks for Grounding Visual Referring Expressions","abstract":"Neural module networks (NMN) are a popular approach for grounding visual referring expressions. Prior implementations of NMN use pre-defined and fixed textual inputs in their module instantiation. This necessitates a large number of modules as they lack the ability to share weights and exploit associations between similar textual contexts (e.g. \"dark cube on the left\" vs. \"black cube on the left\"). In this work, we address these limitations and evaluate the impact of contextual clues in improving the performance of NMN models. First, we address the problem of fixed textual inputs by parameterizing the module arguments. This substantially reduces the number of modules in NMN by up to 75% without any loss in performance. Next, we propose a method to contextualize our parameterized model to enhance the module's capacity in exploiting the visiolinguistic associations. Our model outperforms the state-of-the-art NMN model on the CLEVR-Ref+ dataset with +8.1% improvement in accuracy on the single-referent test set and +4.3% on the full test set. Additionally, we demonstrate that contextualization provides +11.2% and +1.7% improvements in accuracy over prior NMN models on CLOSURE and NLVR2. We further evaluate the impact of our contextualization by constructing a contrast set for CLEVR-Ref+, which we call CC-Ref+. We significantly outperform the baselines by as much as +10.4% absolute accuracy on CC-Ref+, illustrating the generalization skills of our approach. Our dataset is publicly available at https:\/\/github.com\/McGill-NLP\/contextual-nmn.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Joyce Chai, Runtao Liu, Chenxi Liu and Yutong Bai for helpful discussions. We are grateful to the anonymous reviewers for their useful feedback.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"kishimoto-etal-2020-adapting","url":"https:\/\/aclanthology.org\/2020.lrec-1.145","title":"Adapting BERT to Implicit Discourse Relation Classification with a Focus on Discourse Connectives","abstract":"BERT, a neural network-based language model pre-trained on large corpora, is a breakthrough in natural language processing, significantly outperforming previous state-of-the-art models in numerous tasks. However, there have been few reports on its application to implicit discourse relation classification, and it is not clear how BERT is best adapted to the task. In this paper, we test three methods of adaptation. (1) We perform additional pre-training on text tailored to discourse classification. 
(2) In expectation of knowledge transfer from explicit discourse relations to implicit discourse relations, we add a task named explicit connective prediction at the additional pre-training step. (3) To exploit implicit connectives given by treebank annotators, we add a task named implicit connective prediction at the fine-tuning step. We demonstrate that these three techniques can be combined straightforwardly in a single training pipeline. Through comprehensive experiments, we found that the first and second techniques provide additional gain while the last one did not.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"schmidt-etal-1996-lean","url":"https:\/\/aclanthology.org\/C96-1049","title":"Lean Formalisms, Linguistic Theory and Applications. Grammar Development in ALEP.","abstract":"This paper describes results achieved in a project which addresses the issue of how the gap between unification-based grammars as a scientific concept and real world applications can be narrowed down. Application-oriented grammar development has to take into account the following parameters: Efficiency: The project chose a so-called 'lean' formalism, a term-encodable language providing efficient term unification, ALEP. Coverage: The project adopted a corpus-based approach. Completeness: All modules needed from text handling to semantics must be there. The paper reports on a text handling component, Two Level morphology, word structure, phrase structure, semantics and the interfaces between these components. Mainstream approach: The approach claims to be mainstream, very much indebted to HPSG, thus based on the currently most prominent and recent linguistic theory. The relation (and tension) between these parameters is described in this paper.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"liu-ng-2012-character","url":"https:\/\/aclanthology.org\/P12-1097","title":"Character-Level Machine Translation Evaluation for Languages with Ambiguous Word Boundaries","abstract":"In this work, we introduce the TESLA-CELAB metric (Translation Evaluation of Sentences with Linear-programming-based Analysis-Character-level Evaluation for Languages with Ambiguous word Boundaries) for automatic machine translation evaluation. For languages such as Chinese where words usually have meaningful internal structure and word boundaries are often fuzzy, TESLA-CELAB acknowledges the advantage of character-level evaluation over word-level evaluation. By reformulating the problem in the linear programming framework, TESLA-CELAB addresses several drawbacks of the character-level metrics, in particular the modeling of synonyms spanning multiple characters. 
We show empirically that TESLA-CELAB significantly outperforms character-level BLEU in the English-Chinese translation evaluation tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is supported by the Singapore National Research Foundation under its International Research Centre @ Singapore Funding Initiative and administered by the IDM Programme Office.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"kapustin-kapustin-2019-modeling","url":"https:\/\/aclanthology.org\/W19-0604","title":"Modeling language constructs with fuzzy sets: some approaches, examples and interpretations","abstract":"We present and discuss a couple of approaches, including different types of projections, and some examples, discussing the use of fuzzy sets for modeling meaning of certain types of language constructs. We are mostly focusing on words other than adjectives and linguistic hedges as these categories are the most studied from before. We discuss logical and linguistic interpretations of membership functions. We argue that using fuzzy sets for modeling meaning of words and other natural language constructs, along with situations described with natural language is interesting both from purely linguistic perspective, and also as a meaning representation for problems of computational linguistics and natural language processing.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Vadim Kimmelman and Csaba Veres for helpful discussions and comments. We thank anonymous reviewers for helpful feedback.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"whitehead-etal-2018-incorporating","url":"https:\/\/aclanthology.org\/D18-1433","title":"Incorporating Background Knowledge into Video Description Generation","abstract":"Most previous efforts toward video captioning focus on generating generic descriptions, such as, \"A man is talking.\" We collect a news video dataset to generate enriched descriptions that include important background knowledge, such as named entities and related events, which allows the user to fully understand the video content. We develop an approach that uses video meta-data to retrieve topically related news documents for a video and extracts the events and named entities from these documents. Then, given the video as well as the extracted events and entities, we generate a description using a Knowledge-aware Video Description network. The model learns to incorporate entities found in the topically related documents into the description via an entity pointer network and the generation procedure is guided by the event and entity types from the topically related documents through a knowledge gate, which is a gating mechanism added to the model's decoder that takes a one-hot vector of these types. We evaluate our approach on the new dataset of news videos we have collected, establishing the first benchmark for this dataset as well as proposing a new metric to evaluate these descriptions.\nVideo captioning is a challenging task that seeks to automatically generate a natural language description of the content of a video. 
Many video captioning efforts focus on learning video representations that model the spatial and temporal dynamics of the videos (Venugopalan et al., 2016; Yu et al., 2017). Although the language generation component within this task is of great importance, less work has been done to enhance the contextual knowledge conveyed by the descriptions. The descriptions generated by previous methods tend to be \"generic\", describing only what is evidently visible and lacking specific knowledge, like named entities and event participants, as shown in Figure 1a. In many situations, however, generic descriptions are uninformative as they do not provide contextual knowledge. For example, in Figure 1b, details such as who is speaking or why they are speaking are imperative to truly understanding the video, since contextual knowledge gives the surrounding circumstances or cause of the depicted events. To address this problem, we collect a news video dataset, where each video is accompanied by meta-data (e.g., tags and date) and a natural language description of the content in, and\/or context around, the video. We create an approach to this task that is motivated by two observations. First, the video content alone is insufficient to generate the description. Named entities or specific events are necessary to identify the participants, location, and\/or cause of the video content. Although knowledge could potentially be mined from visual evidence (e.g., recognizing the location), training such a system is exceedingly difficult (Tran et al., 2016). Further, not all the knowledge necessary for the description may appear in the video. In Figure 2a, the video depicts much of the description content, but knowledge of the speaker (\"Carles Puigdemont\") is unavailable if limited to the visual evidence because the speaker never appears in the video, making it intractable to incorporate this knowledge into the description.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the U.S. DARPA AIDA Program No. FA8750-18-2-0014 and U.S. ARL NS-CTA No. W911NF-09-2-0053. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"fam-lepage-2018-tools","url":"https:\/\/aclanthology.org\/L18-1171","title":"Tools for The Production of Analogical Grids and a Resource of N-gram Analogical Grids in 11 Languages","abstract":"We release a Python module containing several tools to build analogical grids from words contained in a corpus. The module implements several previously presented algorithms. The tools are language-independent. This permits their use with any language and any writing system. We hope that the tools will ease research in morphology by allowing researchers to automatically obtain structured representations of the vocabulary contained in corpora or linguistic data. We also release analogical grids built on the vocabularies contained in 1,000 corresponding lines of the 11 different language versions of the Europarl corpus v.3. 
The grids were built on N-grams of different lengths, from words to 6-grams. We hope that the use of structured parallel data will foster research in comparative linguistics.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"almeida-costa-etal-2020-building","url":"https:\/\/aclanthology.org\/2020.coling-main.533","title":"Building The First English-Brazilian Portuguese Corpus for Automatic Post-Editing","abstract":"This paper introduces the first corpus for Automatic Post-Editing of English and a low-resource language, Brazilian Portuguese. The source English texts were extracted from the WebNLG corpus and automatically translated into Portuguese using a state-of-the-art industrial neural machine translator. Post-edits were then obtained in an experiment with native speakers of Brazilian Portuguese. To assess the quality of the corpus, we performed error analysis and computed complexity indicators measuring how difficult the APE task would be. We report preliminary results of Phrase-Based and Neural Machine Translation Models on this new corpus. Data and code are publicly available in our repository.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was partially funded by the agencies CNPq, CAPES, and FAPEMIG. In particular, the researchers were supported by CNPQ grant No. 310630\/2017-7, CAPES Postdoctoral grant No. 88887.508597\/2020-00, and FAPEMIG grant APQ-01.461-14. This work was also supported by projects MASWeb, EUBra-BIGSEA, INCT-CYBER, and ATMOSPHERE. The authors also wish to express their gratitude to Deepl for kindly granting a license to translate our corpus, and to the students at UFMG who took part in the post-editing experiment.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"banik-etal-2012-natural","url":"https:\/\/aclanthology.org\/W12-1521","title":"Natural Language Generation for a Smart Biology Textbook","abstract":"In this demo paper we describe the natural language generation component of an electronic textbook application, called Inquire. Inquire interacts with a knowledge base which encodes information from a biology textbook. The application includes a question-understanding module which allows students to ask questions about the contents of the book, and a question-answering module which retrieves the corresponding answer from the knowledge base. The task of the natural language generation module is to present specific parts of the answer in English. Our current generation pipeline handles inputs that describe the biological functions of entities, the steps of biological processes, and the spatial relations between parts of entities. Our ultimate goal is to generate paragraph-length texts from arbitrary paths in the knowledge base. We describe here the natural language generation pipeline and demonstrate the inputs and generated texts. 
In the demo presentation we will show the textbook application and the knowledge base authoring environment, and provide an opportunity to interact with the system.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"blodgett-schneider-2019-improved","url":"https:\/\/aclanthology.org\/W19-0405","title":"An Improved Approach for Semantic Graph Composition with CCG","abstract":"This paper builds on previous work using Combinatory Categorial Grammar (CCG) to derive a transparent syntax-semantics interface for Abstract Meaning Representation (AMR) parsing. We define new semantics for the CCG combinators that is better suited to deriving AMR graphs. In particular, we define relation-wise alternatives for the application and composition combinators: these require that the two constituents being combined overlap in one AMR relation. We also provide a new semantics for type raising, which is necessary for certain constructions. Using these mechanisms, we suggest an analysis of eventive nouns, which present a challenge for deriving AMR graphs. Our theoretical analysis will facilitate future work on robust and transparent AMR parsing using CCG.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We want to thank Paul Portner, Adam Lopez, members of the NERT lab at Georgetown, and anonymous reviewers for their helpful feedback on this research, as well as Matthew Honnibal, Siva Reddy, and Mark Steedman for early discussions about light verbs in CCG.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"hirschman-etal-2001-integrated","url":"https:\/\/aclanthology.org\/H01-1038","title":"Integrated Feasibility Experiment for Bio-Security: IFE-Bio, A TIDES Demonstration","abstract":"As part of MITRE's work under the DARPA TIDES (Translingual Information Detection, Extraction and Summarization) program, we are preparing a series of demonstrations to showcase the TIDES Integrated Feasibility Experiment on Bio-Security (IFE-Bio). The current demonstration illustrates some of the resources that can be made available to analysts tasked with monitoring infectious disease outbreaks and other biological threats.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2001,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"yang-etal-2019-exploiting","url":"https:\/\/aclanthology.org\/N19-1325","title":"Exploiting Noisy Data in Distant Supervision Relation Classification","abstract":"Distant supervision has achieved great progress on the relation classification task. However, it still suffers from the noisy labeling problem. Different from previous works that underutilize noisy data which inherently characterize the property of classification, in this paper, we propose RCEND, a novel framework to enhance Relation Classification by Exploiting Noisy Data. First, an instance discriminator with reinforcement learning is designed to split the noisy data into correctly labeled data and incorrectly labeled data. 
Second, we learn a robust relation classifier in a semi-supervised manner, whereby the correctly and incorrectly labeled data are treated as labeled and unlabeled data, respectively. The experimental results show that our method outperforms the state-of-the-art models.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to express gratitude to Robert Ridley and the anonymous reviewers for their valuable feedback on the paper. This work is supported by the National Natural Science Foundation of China (No. 61672277, U1836221) , the Jiangsu Provincial Research Foundation for Basic Research (No. BK20170074).","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"fang-etal-2018-sounding","url":"https:\/\/aclanthology.org\/N18-5020","title":"Sounding Board: A User-Centric and Content-Driven Social Chatbot","abstract":"We present Sounding Board, a social chatbot that won the 2017 Amazon Alexa Prize. The system architecture consists of several components including spoken language processing, dialogue management, language generation, and content management, with emphasis on user-centric and content-driven design. We also share insights gained from large-scale online logs based on 160,000 conversations with real-world users.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"In addition to the Alexa Prize financial and cloud computing support, this work was supported in part by NSF Graduate Research Fellowship (awarded to E. Clark), NSF (IIS-1524371), and DARPA CwC program through ARO (W911NF-15-1-0543). The conclusions and findings are those of the authors and do not necessarily reflect the views of sponsors.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"lai-etal-2019-cuhk","url":"https:\/\/aclanthology.org\/K19-2010","title":"CUHK at MRP 2019: Transition-Based Parser with Cross-Framework Variable-Arity Resolve Action","abstract":"This paper describes our system (RESOLVER) submitted to the CoNLL 2019 shared task on Cross-Framework Meaning Representation Parsing (MRP). Our system implements a transition-based parser with a directed acyclic graph (DAG)-to-tree preprocessor and a novel cross-framework variable-arity resolve action that generalizes over five different representations. Although we ranked low in the competition, we have shown the current limitations and potentials of including variable-arity action in MRP and concluded with directions for improvements in the future.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"sproat-etal-2014-database","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/47_Paper.pdf","title":"A Database for Measuring Linguistic Information Content","abstract":"Which languages convey the most information in a given amount of space? This is a question often asked of linguists, especially by engineers who often have some information theoretic measure of \"information\" in mind, but rarely define exactly how they would measure that information. 
The question is, in fact, remarkably hard to answer, and many linguists consider it unanswerable. But it is a question that seems as if it ought to have an answer. If one had a database of close translations between a set of typologically diverse languages, with detailed marking of morphosyntactic and morphosemantic features, one could hope to quantify the differences between how these different languages convey information. Since no appropriate database exists, we decided to construct one. The purpose of this paper is to present our work on the database, along with some preliminary results. We plan to release the dataset once complete.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We wish to thank the language experts who helped us with designing language-particular feature sets and annotating the data: Costanza Asnaghi, Elixabete Murguia Gomez, Zainab Hossainzadeh, Josie Li, Thomas Meyer, Fayeq Oweis, Tanya Scott. Thanks also to Daniel van Esch for helping arrange for some of the annotation work.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"karan-etal-2013-frequently","url":"https:\/\/aclanthology.org\/W13-2405","title":"Frequently Asked Questions Retrieval for Croatian Based on Semantic Textual Similarity","abstract":"Frequently asked questions (FAQ) are an efficient way of communicating domain-specific information to the users. Unlike general purpose retrieval engines, FAQ retrieval engines have to address the lexical gap between the query and the usually short answer. In this paper we describe the design and evaluation of a FAQ retrieval engine for Croatian. We frame the task as a binary classification problem, and train a model to classify each FAQ as either relevant or not relevant for a given query. We use a variety of semantic textual similarity features, including term overlap and vector space features. We train and evaluate on a FAQ test collection built specifically for this purpose. Our best-performing model reaches a mean reciprocal rank of 0.47, i.e., on average ranks the relevant answer among the top two returned answers.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been supported by the Ministry of Science, Education and Sports, Republic of Croatia under the Grant 036-1300646-1986. We thank the reviewers for their constructive comments.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"marsh-1998-tipster","url":"https:\/\/aclanthology.org\/X98-1029","title":"TIPSTER Information Extraction Evaluation: The MUC-7 Workshop","abstract":"The last of the \"Message Understanding Conferences\", which were designed to evaluate text extraction systems, was held in April 1998 in Fairfax, Virginia. The workshop was co-chaired by Elaine Marsh and Ralph Grishman. A group of 18 organizations, both from the United States and abroad, participated in the evaluation.\nMUC-7 introduced a wider set of tasks with larger sets of training and formal data than previous MUCs. 
Results showed that while performance on the named entity and template elements task remains relatively high, additional research is still necessary for improved performance on more difficult tasks such as coreference resolution and domain-specific template generation from textual sources.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"yang-etal-2016-chinese","url":"https:\/\/aclanthology.org\/W16-4920","title":"Chinese Grammatical Error Diagnosis Using Single Word Embedding","abstract":"Automatic grammatical error detection for Chinese has been a big challenge for NLP researchers. Due to the formal and strict grammar rules in Chinese, it is hard for foreign students to master Chinese. A computer-assisted learning tool which can automatically detect and correct Chinese grammatical errors is necessary for those foreign students. Some of the previous works have sought to identify Chinese grammatical errors using template- and learning-based methods. In contrast, this study introduced convolutional neural network (CNN) and long short-term memory (LSTM) for the shared task of Chinese Grammatical Error Diagnosis (CGED). Different from traditional word-based embedding, single word embedding was used as input of CNN and LSTM. The proposed single word embedding can capture both semantic and syntactic information to detect those four types of grammatical errors. In the experimental evaluation, the recall and F1-score of our submitted Run1 results on the TOCFL testing data ranked fourth among all submissions at the detection level.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by The Natural Science Foundation of Yunnan Province (Nos. 2013FB010).","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"raunak-etal-2020-dimensional","url":"https:\/\/aclanthology.org\/2020.repl4nlp-1.19","title":"On Dimensional Linguistic Properties of the Word Embedding Space","abstract":"Word embeddings have become a staple of several natural language processing tasks, yet much remains to be understood about their properties. In this work, we analyze word embeddings in terms of their principal components and arrive at a number of novel and counterintuitive observations. In particular, we characterize the utility of variance explained by the principal components as a proxy for downstream performance. Furthermore, through syntactic probing of the principal embedding space, we show that the syntactic information captured by a principal component does not correlate with the amount of variance it explains. Consequently, we investigate the limitations of variance based embedding post-processing, used in a few algorithms such as (Mu and Viswanath, 2018; Raunak et al., 2019) and demonstrate that such postprocessing is counter-productive in sentence classification and machine translation tasks. 
Finally, we offer a few precautionary guidelines on applying variance based embedding post-processing and explain why non-isotropic geometry might be integral to word embedding performance.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"ferret-2021-using","url":"https:\/\/aclanthology.org\/2021.paclic-1.20","title":"Using Distributional Principles for the Semantic Study of Contextual Language Models","abstract":"Many studies have recently investigated the properties of contextual language models but surprisingly, only a few of them consider the properties of these models in terms of semantic similarity. In this article, we first focus on these properties for English by exploiting the distributional principle of substitution as a probing mechanism in the controlled context of SemCor and WordNet paradigmatic relations. Then, we propose to adapt the same method to a more open setting for characterizing the differences between static and contextual language models.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been partially funded by French National Research Agency (ANR) under project AD-DICTE (ANR-17-CE23-0001).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"supnithi-etal-2010-autotagtcg","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/868_Paper.pdf","title":"AutoTagTCG : A Framework for Automatic Thai CG Tagging","abstract":"Recently, categorial grammar has attracted attention as a powerful grammar formalism. This paper aims to develop a framework for automatic CG tagging for Thai. We investigated two main algorithms, CRF and a statistical alignment model based on information theory (SAM). We found that SAM gives the best results both at the word and sentence levels. We obtained an accuracy of 89.25% at the word level and 82.49% at the sentence level. SAM is better than CRF on known words. On the other hand, CRF is better than SAM when applied to unknown words. Combining both methods is therefore suited to both known and unknown words.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"sukhareva-etal-2017-distantly","url":"https:\/\/aclanthology.org\/W17-2213","title":"Distantly Supervised POS Tagging of Low-Resource Languages under Extreme Data Sparsity: The Case of Hittite","abstract":"This paper presents a statistical approach to automatic morphosyntactic annotation of Hittite transcripts. Hittite is an extinct Indo-European language using the cuneiform script. There are currently no morphosyntactic annotations available for Hittite, so we explored methods of distant supervision. The annotations were projected from parallel German translations of the Hittite texts. In order to reduce data sparsity, we applied stemming of German and Hittite texts. As there is no off-the-shelf Hittite stemmer, a stemmer for Hittite was developed for this purpose. 
The resulting annotation projections were used to train a POS tagger, achieving an accuracy of 69% on a test sample. To our knowledge, this is the first attempt at statistical POS tagging of a cuneiform language.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The first and third authors were supported by the German Federal Ministry of Education and Research (BMBF) under the promotional reference 01UG1416B (CEDIFOR).","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"ren-etal-2021-rocketqav2","url":"https:\/\/aclanthology.org\/2021.emnlp-main.224","title":"RocketQAv2: A Joint Training Method for Dense Passage Retrieval and Passage Re-ranking","abstract":"In various natural language processing tasks, passage retrieval and passage re-ranking are two key procedures in finding and ranking relevant information. Since both procedures contribute to the final performance, it is important to jointly optimize them in order to achieve mutual improvement. In this paper, we propose a novel joint training approach for dense passage retrieval and passage reranking. A major contribution is that we introduce the dynamic listwise distillation, where we design a unified listwise training approach for both the retriever and the re-ranker. During the dynamic distillation, the retriever and the re-ranker can be adaptively improved according to each other's relevance information. We also propose a hybrid data augmentation strategy to construct diverse training instances for the listwise training approach. Extensive experiments show the effectiveness of our approach on both MSMARCO and Natural Questions datasets. Our code is available at https:\/\/github.com\/PaddlePaddle\/RocketQA.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"lee-etal-2016-call","url":"https:\/\/aclanthology.org\/P16-1093","title":"A CALL System for Learning Preposition Usage","abstract":"Fill-in-the-blank items are commonly featured in computer-assisted language learning (CALL) systems. An item displays a sentence with a blank, and often proposes a number of choices for filling it. These choices should include one correct answer and several plausible distractors. We describe a system that, given an English corpus, automatically generates distractors to produce items for preposition usage. We report a comprehensive evaluation on this system, involving both experts and learners. First, we analyze the difficulty levels of machine-generated carrier sentences and distractors, comparing several methods that exploit learner error and learner revision patterns. We show that the quality of machine-generated items approaches that of human-crafted ones. Further, we investigate the extent to which mismatched L1 between the user and the learner corpora affects the quality of distractors. 
Finally, we measure the system's impact on the user's language proficiency in both the short and the long term.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank NetDragon Websoft Holding Limited for their assistance with system evaluation, and the reviewers for their very helpful comments. This work was partially supported by an Applied Research Grant (Project no. 9667115) from City University of Hong Kong.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"allen-frisch-1982-whats","url":"https:\/\/aclanthology.org\/P82-1004","title":"What's in a Semantic Network?","abstract":"Ever since Woods's \"What's in a Link\" paper, there has been a growing concern for formalization in the study of knowledge representation. Several arguments have been made that frame representation languages and semantic-network languages are syntactic variants of the first-order predicate calculus (FOPC). The typical argument proceeds by showing how any given frame or network representation can be mapped to a logically isomorphic FOPC representation. For the past two years we have been studying the formalization of knowledge retrievers as well as the representation languages that they operate on. This paper presents a representation language in the notation of FOPC whose form facilitates the design of a semantic-network-like retriever.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by the National Science Foundation under Grant IST-80-12418, and in part by the Office of Naval Research under Grant N00014-80-C-0197.","year":1982,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"priban-steinberger-2021-multilingual","url":"https:\/\/aclanthology.org\/2021.ranlp-1.128","title":"Are the Multilingual Models Better? Improving Czech Sentiment with Transformers","abstract":"In this paper, we aim at improving Czech sentiment with transformer-based models and their multilingual versions. More concretely, we study the task of polarity detection for the Czech language on three sentiment polarity datasets. We fine-tune and perform experiments with five multilingual and three monolingual models. We compare the monolingual and multilingual models' performance, including comparison with the older approach based on recurrent neural networks. Furthermore, we test the multilingual models and their ability to transfer knowledge from English to Czech (and vice versa) with zero-shot cross-lingual classification. Our experiments show that the huge multilingual models can surpass the performance of the monolingual models. They are also able to detect polarity in another language without any training data, with performance not worse than 4.4% compared to state-of-the-art monolingually trained models. Moreover, we achieved new state-of-the-art results on all three datasets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been partly supported by ERDF \"Research and Development of Intelligent Components of Advanced Technologies for the Pilsen Metropolitan Area (InteCom)\" (no.: CZ.02.1.01\/0.0\/0.0\/17 048\/0007267); and by Grant No. 
SGS-2019-018 Processing of heterogeneous data and its specialized applications. Computational resources were supplied by the project \"e-Infrastruktura CZ\" (e-","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"hajicova-2014-three","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/39_Paper.pdf","title":"Three dimensions of the so-called ``interoperability'' of annotation schemes","abstract":"\"Interoperability\" of annotation schemes is one of the key words in the discussions about annotation of corpora. In the present contribution, we propose to look at the so-called interoperability from (at least) three angles, namely (i) as a relation (and possible interaction or cooperation) of different annotation schemes for different layers or phenomena of a single language, (ii) the possibility to annotate different languages by a single (modified or not) annotation scheme, and (iii) the relation between different annotation schemes for a single language, or for a single phenomenon or layer of the same language. The pros and cons of each of these aspects are discussed as well as their contribution to linguistic studies and natural language processing. It is stressed that communication and collaboration between different annotation schemes require an explicit specification and consistency of each of the schemes.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"zavrel-daelemans-1997-memory","url":"https:\/\/aclanthology.org\/P97-1056","title":"Memory-Based Learning: Using Similarity for Smoothing","abstract":"This paper analyses the relation between the use of similarity in Memory-Based Learning and the notion of backed-off smoothing in statistical language modeling. We show that the two approaches are closely related, and we argue that feature weighting methods in the Memory-Based paradigm can offer the advantage of automatically specifying a suitable domainspecific hierarchy between most specific and most general conditioning information without the need for a large number of parameters. We report two applications of this approach: PP-attachment and POS tagging. Our method achieves state-of-the-art performance in both domains, and allows the easy integration of diverse information sources, such as rich lexical representations.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was done in the context of the \"Induction of Linguistic Knowledge\" research programme, partially supported by the Foundation for Language Speech and Logic (TSL), which is funded by the Netherlands Organization for Scientific Research (NWO). We would like to thank Peter Berck and Anders Green for their help with software for the experiments.","year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"koehn-etal-2009-462","url":"https:\/\/aclanthology.org\/2009.mtsummit-papers.7","title":"462 Machine Translation Systems for Europe","abstract":"We built 462 machine translation systems for all language pairs of the Acquis Communautaire corpus. 
We report and analyse the performance of these systems, and compare them against pivot translation and a number of system combination methods (multi-pivot, multi-source) that are possible due to the available systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"lekhtman-etal-2021-dilbert","url":"https:\/\/aclanthology.org\/2021.emnlp-main.20","title":"DILBERT: Customized Pre-Training for Domain Adaptation with Category Shift, with an Application to Aspect Extraction","abstract":"The rise of pre-trained language models has yielded substantial progress in the vast majority of Natural Language Processing (NLP) tasks. However, a generic approach towards the pre-training procedure can naturally be sub-optimal in some cases. Particularly, fine-tuning a pre-trained language model on a source domain and then applying it to a different target domain results in a sharp performance decline of the eventual classifier for many source-target domain pairs. Moreover, in some NLP tasks, the output categories substantially differ between domains, making adaptation even more challenging. This, for example, happens in the task of aspect extraction, where the aspects of interest of reviews of, e.g., restaurants or electronic devices may be very different. This paper presents a new fine-tuning scheme for BERT, which aims to address the above challenges. We name this scheme DILBERT: Domain Invariant Learning with BERT, and customize it for aspect extraction in the unsupervised domain adaptation setting. DILBERT harnesses the categorical information of both the source and the target domains to guide the pre-training process towards a more domain and category invariant representation, thus closing the gap between the domains. We show that DILBERT yields substantial improvements over state-of-the-art baselines while using a fraction of the unlabeled data, particularly in more challenging domain adaptation setups.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the members of the IE@Technion NLP group for their valuable feedback and advice. This research was partially funded by an ISF personal grant No. 1625\/18.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"habash-2008-four","url":"https:\/\/aclanthology.org\/P08-2015","title":"Four Techniques for Online Handling of Out-of-Vocabulary Words in Arabic-English Statistical Machine Translation","abstract":"We present four techniques for online handling of Out-of-Vocabulary words in Phrase-based Statistical Machine Translation. The techniques use spelling expansion, morphological expansion, dictionary term expansion and proper name transliteration to reuse or extend a phrase table. We compare the performance of these techniques and combine them. 
Our results show a consistent improvement over a state-of-the-art baseline in terms of BLEU and a manual error analysis.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"passban-etal-2018-improving","url":"https:\/\/aclanthology.org\/N18-1006","title":"Improving Character-Based Decoding Using Target-Side Morphological Information for Neural Machine Translation","abstract":"Recently, neural machine translation (NMT) has emerged as a powerful alternative to conventional statistical approaches. However, its performance drops considerably in the presence of morphologically rich languages (MRLs). Neural engines usually fail to tackle the large vocabulary and high out-of-vocabulary (OOV) word rate of MRLs. Therefore, it is not suitable to exploit existing word-based models to translate this set of languages. In this paper, we propose an extension to the state-of-the-art model of Chung et al. (2016), which works at the character level and boosts the decoder with target-side morphological information. In our architecture, an additional morphology table is plugged into the model. Each time the decoder samples from a target vocabulary, the table sends auxiliary signals from the most relevant affixes in order to enrich the decoder's current state and constrain it to provide better predictions. We evaluated our model to translate English into German, Russian, and Turkish as three MRLs and observed significant improvements.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank our anonymous reviewers for their valuable feedback, as well as the Irish centre for high-end computing (www.ichec.ie) for providing computational infrastructures. This work has been supported by the ADAPT Centre for Digital Content Technology which is funded under the SFI Research Centres Programme (Grant 13\/RC\/2106) and is co-funded under the European Regional Development Fund.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"ninomiya-etal-2002-indexing","url":"https:\/\/aclanthology.org\/C02-2024","title":"An Indexing Scheme for Typed Feature Structures","abstract":"This paper describes an indexing substrate for typed feature structures (ISTFS), which is an efficient retrieval engine for typed feature structures. Given a set of typed feature structures, the ISTFS efficiently retrieves its subset whose elements are unifiable or in a subsumption relation with a query feature structure. The efficiency of the ISTFS is achieved by calculating a unifiability checking table prior to retrieval and finding the best index paths dynamically.
","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"becquin-2020-end","url":"https:\/\/aclanthology.org\/2020.nlposs-1.4","title":"End-to-end NLP Pipelines in Rust","abstract":"The recent progress in natural language processing research has been supported by the development of a rich open source ecosystem in Python. Libraries allowing NLP practitioners but also non-specialists to leverage state-of-the-art models have been instrumental in the democratization of this technology. The maturity of the open-source NLP ecosystem however varies between languages. This work proposes a new open-source library aimed at bringing state-of-the-art NLP to Rust. Rust is a systems programming language for which the foundations required to build machine learning applications are available but still lacks ready-to-use, end-to-end NLP libraries. The proposed library, rust-bert, implements modern language models and ready-to-use pipelines (for example translation or summarization). This allows further development by the Rust community from both NLP experts and non-specialists. It is hoped that this library will accelerate the development of the NLP ecosystem in Rust. The library is under active development and available at https:\/\/github.com\/guillaume-be\/rust-bert.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"agrawal-etal-2021-assessing","url":"https:\/\/aclanthology.org\/2021.naacl-main.91","title":"Assessing Reference-Free Peer Evaluation for Machine Translation","abstract":"Reference-free evaluation has the potential to make machine translation evaluation substantially more scalable, allowing us to pivot easily to new languages or domains. It has been recently shown that the probabilities given by a large, multilingual model can achieve state of the art results when used as a reference-free metric. We experiment with various modifications to this model, and demonstrate that by scaling it up we can match the performance of BLEU. We analyze various potential weaknesses of the approach, and find that it is surprisingly robust and likely to offer reasonable performance across a broad spectrum of domains and different system qualities.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Julia Kreutzer, Ciprian Chelba, Aditya Siddhant, and the anonymous reviewers for their helpful and constructive comments.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"kumar-etal-2020-nurse","url":"https:\/\/aclanthology.org\/2020.tacl-1.32","title":"Nurse is Closer to Woman than Surgeon? Mitigating Gender-Biased Proximities in Word Embeddings","abstract":"Word embeddings are the standard model for semantic and syntactic representations of words. 
Unfortunately, these models have been shown to exhibit undesirable word associations resulting from gender, racial, and religious biases. Existing post-processing methods for debiasing word embeddings are unable to mitigate gender bias hidden in the spatial arrangement of word vectors. In this paper, we propose RAN-Debias, a novel gender debiasing methodology that not only eliminates the bias present in a word vector but also alters the spatial distribution of its neighboring vectors, achieving a bias-free setting while maintaining minimal semantic offset. We also propose a new bias evaluation metric, Gender-based Illicit Proximity Estimate (GIPE), which measures the extent of undue proximity in word vectors resulting from the presence of gender-based predilections. Experiments based on a suite of evaluation metrics show that RAN-Debias significantly outperforms the state-of-the-art in reducing proximity bias (GIPE) by at least 42.02%. It also reduces direct bias, adding minimal semantic disturbance, and achieves the best performance in a downstream application task (coreference resolution).","label_nlp4sg":1,"task":[],"method":[],"goal1":"Gender Equality","goal2":null,"goal3":null,"acknowledgments":"The work was partially supported by the Ramanujan Fellowship, DST (ECR\/2017\/00l691). T. Chakraborty would like to acknowledge the support of the Infosys Center for AI, IIIT-Delhi.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":1,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"saharia-etal-2009-part","url":"https:\/\/aclanthology.org\/P09-2009","title":"Part of Speech Tagger for Assamese Text","abstract":"Assamese is a morphologically rich, agglutinative and relatively free word order Indic language. Although spoken by nearly 30 million people, very little computational linguistic work has been done for this language. In this paper, we present our work on part of speech (POS) tagging for Assamese using the well-known Hidden Markov Model. Since no well-defined suitable tagset was available, we develop a tagset of 172 tags in consultation with experts in linguistics. For successful tagging, we examine relevant linguistic issues in Assamese. For unknown words, we perform simple morphological analysis to determine probable tags. Using a manually tagged corpus of about 10000 words for training, we obtain a tagging accuracy of nearly 87% for test inputs.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"xianwei-etal-2021-emotion","url":"https:\/\/aclanthology.org\/2021.ccl-1.82","title":"Emotion Classification of COVID-19 Chinese Microblogs based on the Emotion Category Description","abstract":"Emotion classification of COVID-19 Chinese microblogs helps analyze the public opinion triggered by COVID-19. Existing methods only consider the features of the microblog itself, without combining the semantics of emotion categories for modeling. Emotion classification of microblogs is a process of reading the content of microblogs and combining the semantics of emotion categories to understand whether it contains a certain emotion. Inspired by this, we propose an emotion classification model based on the emotion category description for COVID-19 Chinese microblogs. 
Firstly, we expand all emotion categories into formalized category descriptions. Secondly, based on the idea of question answering, we construct a question for each microblog in the form of 'What is the emotion expressed in the text X?' and regard all category descriptions as candidate answers. Finally, we construct a question-and-answer pair and use it as the input of the BERT model to complete emotion classification. By integrating rich contextual and category semantics, the model can better understand the emotion of microblogs. Experiments on the COVID-19 Chinese microblog dataset show that our approach outperforms many existing emotion classification methods, including the BERT baseline.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} +{"ID":"gilbert-carl-2021-word","url":"https:\/\/aclanthology.org\/2021.motra-1.8","title":"Word Alignment Dissimilarity Indicator: Alignment Links as Conceptualizations of a Focused Bilingual Lexicon","abstract":"Starting from the assumption that different word alignments of translations represent differing conceptualizations of crosslingual equivalence, we assess the variation of six different alignment methods for English-to-Spanish translated and post-edited texts. We develop a word alignment dissimilarity indicator (WADI) and compare it to traditional segment-based alignment error rate (AER). We average the WADI scores over the possible 15 different pairings of the six alignment methods for each source token and correlate the averaged WADI scores with translation process and product measures, including production duration, number of insertions, and word translation entropy. Results reveal modest correlations between WADI and production duration and insertions, as well as a moderate correlation between WADI and word translation entropy. This shows that differences in alignment decisions reflect variation in translation decisions and demonstrates that aggregate WADI score could be used as a word-level feature to estimate post-editing difficulty.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"zhong-etal-2021-useradapter","url":"https:\/\/aclanthology.org\/2021.findings-acl.129","title":"UserAdapter: Few-Shot User Learning in Sentiment Analysis","abstract":"Adapting a model to a handful of personalized data is challenging, especially when it has gigantic parameters, such as a Transformer-based pretrained model. The standard way of fine-tuning all the parameters necessitates storing a huge model for each user. In this work, we introduce a lightweight approach dubbed UserAdapter, which clamps hundreds of millions of parameters of the Transformer model and optimizes a tiny user-specific vector. We take sentiment analysis as a test bed, and collect datasets of reviews from Yelp and IMDB respectively. Results show that, on both datasets, UserAdapter achieves better accuracy than the standard fine-tuned Transformer-based pre-trained model. 
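The question-answering framing described in the xianwei-etal-2021-emotion abstract above pairs a question built from the microblog with each category description and scores the pair with BERT. The following is a hedged sketch of that idea; the model name and the two-label "match\/no-match" head are illustrative assumptions, not the paper's exact configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed backbone and head; the paper's actual checkpoint may differ.
tok = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=2)  # label 1 = "this answer matches"

def classify(text, category_descriptions):
    """Score each (question, candidate-answer) pair; return best index."""
    question = f"What is the emotion expressed in the text {text}?"
    scores = []
    for desc in category_descriptions:
        enc = tok(question, desc, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**enc).logits
        scores.append(logits.softmax(-1)[0, 1].item())
    return max(range(len(scores)), key=scores.__getitem__)
```

In this framing the classifier head would be fine-tuned on (question, description, match) triples; at inference the highest-scoring category description is chosen.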
More importantly, UserAdapter offers an efficient way to produce a personalized Transformer model with less than 0.5% parameters added for each user.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Wanjun Zhong, Jiahai Wang and Jian Yin are supported by the National Natural Science Foundation of China (U1711262, U1711261, U1811264, U1811261, U1911203 ,U2001211), Guangdong Basic and Applied Basic Research Foundation (2019B1515130001), Key R&D Program of Guangdong Province (2018B010107005). The corresponding author is Jian Yin.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"goerz-beckstein-1983-parse","url":"https:\/\/aclanthology.org\/E83-1019","title":"How to Parse Gaps in Spoken Utterances","abstract":"We describe GLP, a chart parser that will be used as a SYNTAX module of the Erlangen Speech Understanding System. GLP realizes an agenda-based multiprocessing scheme, which makes it easy to apply various parsing strategies in a transparent way. We discuss which features have been incorporated into the parser in order to process speech data, in particular its ability to perform direction-independent island parsing, to handle gaps in the utterance, and its hypothesis scoring scheme.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1983,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"adiga-etal-2021-automatic","url":"https:\/\/aclanthology.org\/2021.findings-acl.447","title":"Automatic Speech Recognition in Sanskrit: A New Speech Corpus and Modelling Insights","abstract":"Automatic speech recognition (ASR) in Sanskrit is interesting, owing to the various linguistic peculiarities present in the language. The Sanskrit language is lexically productive, undergoes euphonic assimilation of phones at the word boundaries and exhibits variations in spelling conventions and in pronunciations. In this work, we propose the first large scale study of automatic speech recognition (ASR) in Sanskrit, with an emphasis on the impact of unit selection in Sanskrit ASR. In this work, we release a 78 hour ASR dataset for Sanskrit, which faithfully captures several of the linguistic characteristics expressed by the language. We investigate the role of different acoustic model and language model units in ASR systems for Sanskrit. We also propose a new modelling unit, inspired by the syllable level unit selection, that captures character sequences from one vowel in the word to the next vowel. We also highlight the importance of choosing graphemic representations for Sanskrit and show the impact of this choice on word error rates (WER). Finally, we extend these insights from Sanskrit ASR for building ASR systems in two other Indic languages, Gujarati and Telugu. For both these languages, our experimental results show that the use of phonetic based graphemic representations in ASR results in performance improvements as compared to ASR systems that use native scripts. (* Joint first author.) Dataset and code can be accessed from www.cse.iitb.ac.in\/~asr and https:\/\/github.com\/cyfer0618\/Vaksanca.git.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Prof. K. 
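The UserAdapter idea summarized in the zhong-etal-2021-useradapter record above (clamp the pretrained Transformer, optimize only a tiny per-user vector) can be sketched as follows. This is one simple way to realize the idea, not the paper's exact architecture; the concatenation-based injection of the user vector and the linear head are assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class UserAdapterSketch(nn.Module):
    def __init__(self, num_users, model_name="bert-base-uncased", num_classes=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        for p in self.encoder.parameters():
            p.requires_grad = False  # clamp all pretrained weights
        dim = self.encoder.config.hidden_size
        # The tiny per-user parameters: one vector per user.
        self.user_vec = nn.Embedding(num_users, dim)
        self.cls = nn.Linear(2 * dim, num_classes)

    def forward(self, input_ids, attention_mask, user_id):
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state[:, 0]
        u = self.user_vec(user_id)  # personalize with the user vector
        return self.cls(torch.cat([h, u], dim=-1))
```

With the encoder frozen, storage per user is a single hidden-size vector rather than a full fine-tuned model, which is the efficiency argument made in the abstract.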
Ramasubramanian, IIT Bombay, for supporting the creation of Sanskrit speech corpus. We express our gratitude to the volunteers who have participated in recording readings of classical Sanskrit texts and helping make this resource available for the purpose of research.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"antunes-mendes-2014-evaluation","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/1197_Paper.pdf","title":"An evaluation of the role of statistical measures and frequency for MWE identification","abstract":"We report on an experiment to evaluate the role of statistical association measures and frequency for the identification of MWE. We base our evaluation on a lexicon of 14.000 MWE comprising different types of word combinations: collocations, nominal compounds, light verbs + predicate, idioms, etc. These MWE were manually validated from a list of n-grams extracted from a 50 million word corpus of Portuguese (a subcorpus of the Reference Corpus of Contemporary Portuguese), using several criteria: syntactic fixedness, idiomaticity, frequency and Mutual Information measure, although no threshold was established, either in terms of group frequency or MI. We report on MWE that were selected on the basis of their syntactic and semantics properties while the MI or both the MI and the frequency show low values, which would constitute difficult cases to establish a cutting point. We analyze the MI values of the MWE selected in our gold dataset and, for some specific cases, compare these values with two other statistical measures.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially supported by national funds through FCT -Funda\u00e7\u00e3o para a Ci\u00eancia e Technologia, under project PEst-OE\/LIN\/UI0214\/2013. We would like to thank the anonymous reviewers for their helpful comments and suggestions.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"caglayan-etal-2016-multimodality","url":"https:\/\/aclanthology.org\/W16-2358","title":"Does Multimodality Help Human and Machine for Translation and Image Captioning?","abstract":"This paper presents the systems developed by LIUM and CVC for the WMT16 Multimodal Machine Translation challenge. We explored various comparative methods, namely phrase-based systems and attentional recurrent neural networks models trained using monomodal or multimodal data. We also performed a human evaluation in order to estimate the usefulness of multimodal data for human machine translation and image description generation. Our systems obtained the best results for both tasks according to the automatic evaluation metrics BLEU and METEOR .","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the Chist-ERA project M2CR 4 . 
We kindly thank KyungHyun Cho and Orhan Firat for providing the DL4MT tutorial as open source and Kelvin Xu for the arcticcaptions system.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"acs-2018-bme","url":"https:\/\/aclanthology.org\/K18-3016","title":"BME-HAS System for CoNLL--SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection","abstract":"This paper presents an encoder-decoder neural network based solution for both subtasks of the CoNLL-SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection. All of our models are sequence-to-sequence neural networks with multiple encoders and a single decoder.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"fissaha-haller-2003-application","url":"https:\/\/aclanthology.org\/2003.mtsummit-semit.7","title":"Application of corpus-based techniques to Amharic texts","abstract":"A number of corpus-based techniques have been used in the development of natural language processing applications. One area in which these techniques have extensively been applied is lexical development. The current work is being undertaken in the context of a machine translation project in which lexical development activities constitute a significant portion of the overall task. In the first part, we applied corpus-based techniques to the extraction of collocations from an Amharic text corpus. Analysis of the output reveals important collocations that can usefully be incorporated in the lexicon. This is especially true for the extraction of idiomatic expressions. The patterns of idiom formation which are observed in a small manually collected dataset enabled extraction of a large set of idioms which otherwise may be difficult or impossible to recognize. Furthermore, preliminary results of other corpus-based techniques, that is, clustering and classification, that are currently under investigation are presented. The results show that clustering performed no better than the frequency baseline whereas classification showed a clear performance improvement over the frequency baseline. This in turn suggests the need to carry out further experiments using large sets of data and more contextual information.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"yue-zhou-2020-phicon","url":"https:\/\/aclanthology.org\/2020.clinicalnlp-1.23","title":"PHICON: Improving Generalization of Clinical Text De-identification Models via Data Augmentation","abstract":"De-identification is the task of identifying protected health information (PHI) in the clinical text. Existing neural de-identification models often fail to generalize to a new dataset. We propose a simple yet effective data augmentation method PHICON to alleviate the generalization issue. 
PHICON consists of PHI augmentation and Context augmentation, which creates augmented training corpora by replacing PHI entities with named-entities sampled from external sources, and by changing background context with synonym replacement or random word insertion, respectively. Experimental results on the i2b2 2006 and 2014 de-identification challenge datasets show that PHICON can help three selected de-identification models boost F1-score (by at most 8.6%) on cross-dataset test. We also discuss how much augmentation to use and how each augmentation method influences the performance. https:\/\/portal.dbmi.hms.harvard.edu\/projects\/n2c2-nlp\/","label_nlp4sg":1,"task":[],"method":[],"goal1":"Good Health and Well-Being","goal2":"Peace, Justice and Strong Institutions","goal3":null,"acknowledgments":"We thank Prof. Kwong-Sak LEUNG and Sunny Lai in The Chinese University of Hong Kong as well as anonymous reviewers for their helpful comments.","year":2020,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} +{"ID":"lin-etal-2019-kcat","url":"https:\/\/aclanthology.org\/P19-3017","title":"KCAT: A Knowledge-Constraint Typing Annotation Tool","abstract":"Fine-grained Entity Typing is a tough task which suffers from noise samples extracted from distant supervision. Thousands of manually annotated samples can achieve greater performance than millions of samples generated by the previous distant supervision method. However, it is hard for human beings to differentiate and memorize thousands of types, thus making large-scale human labeling hardly possible. In this paper, we introduce a Knowledge-Constraint Typing Annotation Tool (KCAT), which is efficient for fine-grained entity typing annotation. KCAT reduces the size of candidate types to an acceptable range for human beings through entity linking and provides a Multi-step Typing scheme to revise the entity linking result. Moreover, KCAT provides an efficient Annotator Client to accelerate the annotation process and a comprehensive Manager Module to analyse crowdsourcing annotations. Experiment shows that KCAT can significantly improve annotation efficiency; the time consumption increases slowly as the size of type set expands.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"reckman-etal-2011-extracting","url":"https:\/\/aclanthology.org\/W11-0126","title":"Extracting aspects of determiner meaning from dialogue in a virtual world environment","abstract":"We use data from a virtual world game for automated learning of words and grammatical constructions and their meanings. The language data are an integral part of the social interaction in the game and consist of chat dialogue, which is only constrained by the cultural context, as set by the nature of the provided virtual environment. Building on previous work, where we extracted a vocabulary for concrete objects in the game by making use of the non-linguistic context, we now target NP\/DP grammar, in particular determiners. We assume that we have captured the meanings of a set of determiners if we can predict which determiner will be used in a particular context. 
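The two PHICON augmentation moves described in the yue-zhou-2020-phicon abstract above (PHI replacement from an external entity pool; context change via synonym replacement and random word insertion) can be sketched directly. The entity pool, synonym table, and token-level tagging scheme below are illustrative assumptions, not the paper's resources.

```python
import random

NAME_POOL = ["Alice Smith", "John Doe"]               # assumed external NE source
SYNONYMS = {"visited": ["attended", "saw"], "hospital": ["clinic"]}
FILLERS = ["today", "recently"]                        # assumed insertion words

def phi_augment(tokens, tags):
    """Replace PHI tokens with entities sampled from an external pool
    (token-level for simplicity; real PHI spans may be multi-token)."""
    return [random.choice(NAME_POOL) if tag == "PHI" else tok
            for tok, tag in zip(tokens, tags)]

def context_augment(tokens, tags, p=0.1):
    """Synonym replacement and random word insertion on non-PHI context."""
    out = []
    for tok, tag in zip(tokens, tags):
        if tag == "O" and tok in SYNONYMS and random.random() < p:
            tok = random.choice(SYNONYMS[tok])         # synonym replacement
        out.append(tok)
        if tag == "O" and random.random() < p:
            out.append(random.choice(FILLERS))         # random insertion
    return out

tokens = ["Jane", "visited", "the", "hospital"]
tags = ["PHI", "O", "O", "O"]
print(phi_augment(tokens, tags))
print(context_augment(tokens, tags, p=0.5))
```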
To this end we train a classifier that predicts the choice of a determiner on the basis of features from the linguistic and non-linguistic context. (A table listing referring expressions for food, drink, and item types is omitted here.)","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was funded by a Rubicon grant from the Netherlands Organisation for Scientific Research (NWO), project nr. 446-09-011.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"rao-etal-2021-stanker","url":"https:\/\/aclanthology.org\/2021.emnlp-main.269","title":"STANKER: Stacking Network based on Level-grained Attention-masked BERT for Rumor Detection on Social Media","abstract":"Rumor detection on social media puts pretrained language models (LMs), such as BERT, and auxiliary features, such as comments, into use. However, on the one hand, rumor detection datasets in Chinese companies with comments are rare; on the other hand, intensive interaction of attention on Transformer-based models like BERT may hinder performance improvement. To alleviate these problems, we build a new Chinese microblog dataset named Weibo20 by collecting posts and associated comments from Sina Weibo and propose a new ensemble named STANKER (Stacking neTwork bAsed-on atteNtion-masKed BERT). STANKER adopts two level-grained attention-masked BERT (LGAM-BERT) models as base encoders. Unlike the original BERT, our new LGAM-BERT model takes comments as important auxiliary features and masks coattention between posts and comments on lower-layers. Experiments on Weibo20 and three existing social media datasets showed that STANKER outperformed all compared models, especially beating the old state-of-the-art on Weibo dataset.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"This paper is supported by Guangdong Basic and Applied Basic Research Foundation, China (Grant No. 2021A1515012556).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} +{"ID":"nieto-pina-johansson-2016-embedding","url":"https:\/\/aclanthology.org\/W16-1401","title":"Embedding Senses for Efficient Graph-based Word Sense Disambiguation","abstract":"We propose a simple graph-based method for word sense disambiguation (WSD) where sense and context embeddings are constructed by applying the Skip-gram method to random walks over the sense graph. We used this method to build a WSD system for Swedish using the SALDO lexicon, and evaluated it on six different annotated test sets. 
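The determiner-prediction setup in the reckman-etal-2011-extracting record above (a classifier over linguistic and non-linguistic context features) is straightforward to sketch. The feature names and toy training pairs below are invented for illustration; the paper's actual feature set differs.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy examples: context features -> observed determiner.
train = [
    ({"noun": "soup", "mentioned_before": False, "number": "sg"}, "a"),
    ({"noun": "soup", "mentioned_before": True,  "number": "sg"}, "the"),
    ({"noun": "menu", "mentioned_before": True,  "number": "sg"}, "the"),
    ({"noun": "fries", "mentioned_before": False, "number": "pl"}, "some"),
]
X, y = zip(*train)
clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(list(X), list(y))

# A valid mapping from context to determiner has been "captured" to the
# extent such predictions succeed on held-out dialogue.
print(clf.predict([{"noun": "soup", "mentioned_before": True, "number": "sg"}]))
```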
In all cases, our system was several orders of magnitude faster than a state-of-the-art PageRank-based system, while outperforming a random baseline soundly.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was funded by the Swedish Research Council under grant 2013-4944.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"kabbach-ribeyre-2016-valencer","url":"https:\/\/aclanthology.org\/C16-2033","title":"Valencer: an API to Query Valence Patterns in FrameNet","abstract":"This paper introduces Valencer: a RESTful API to search for annotated sentences matching a given combination of syntactic realizations of the arguments of a predicate-also called valence pattern-in the FrameNet database. The API takes as input an HTTP GET request specifying a valence pattern and outputs a list of exemplifying annotated sentences in JSON format. The API is designed to be modular and language-independent, and can therefore be easily integrated to other (NLP) server-side or client-side applications, as well as non-English FrameNet projects.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"li-etal-2021-future","url":"https:\/\/aclanthology.org\/2021.emnlp-main.422","title":"The Future is not One-dimensional: Complex Event Schema Induction by Graph Modeling for Event Prediction","abstract":"Event schemas encode knowledge of stereotypical structures of events and their connections. As events unfold, schemas are crucial to act as a scaffolding. Previous work on event schema induction focuses either on atomic events or linear temporal event sequences, ignoring the interplay between events via arguments and argument relations. We introduce a new concept of Temporal Complex Event Schema: a graph-based schema representation that encompasses events, arguments, temporal connections and argument relations. In addition, we propose a Temporal Event Graph Model that predicts event instances following the temporal complex event schema. To build and evaluate such schemas, we release a new schema learning corpus containing 6,399 documents accompanied with event graphs, and we have manually constructed gold-standard schemas. Intrinsic evaluations by schema matching and instance graph perplexity, prove the superior quality of our probabilistic graph schema library compared to linear representations. Extrinsic evaluation on schema-guided future event prediction further demonstrates the predictive power of our event graph model, significantly outperforming human schemas and baselines by more than 23.8% on","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is based upon work supported by U.S. DARPA KAIROS Program Nos. FA8750-19-2-1004 and Air Force No. FA8650-17-C-7715. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government. The U.S. 
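The embedding construction in the nieto-pina-johansson-2016-embedding record above (Skip-gram over random walks on the sense graph) can be sketched with gensim. The toy graph and hyperparameters are assumptions for illustration, not the SALDO setup.

```python
import random
from gensim.models import Word2Vec

graph = {  # toy sense graph: node -> neighbors
    "bank_1": ["money_1", "loan_1"],
    "bank_2": ["river_1", "shore_1"],
    "money_1": ["bank_1", "loan_1"],
    "loan_1": ["bank_1", "money_1"],
    "river_1": ["bank_2", "shore_1"],
    "shore_1": ["bank_2", "river_1"],
}

def random_walk(start, length=10):
    """Uniform random walk over the sense graph."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(random.choice(graph[walk[-1]]))
    return walk

# Treat walks as "sentences" and train Skip-gram (sg=1) over them.
walks = [random_walk(node) for node in graph for _ in range(50)]
model = Word2Vec(walks, vector_size=32, window=3, sg=1, min_count=1, epochs=5)
print(model.wv.most_similar("bank_1", topn=2))
```

Disambiguation then reduces to comparing context embeddings against the sense vectors, which is what makes the approach fast relative to running PageRank per instance.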
Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"havrylov-etal-2019-cooperative","url":"https:\/\/aclanthology.org\/N19-1115","title":"Cooperative Learning of Disjoint Syntax and Semantics","abstract":"There has been considerable attention devoted to models that learn to jointly infer an expression's syntactic structure and its semantics. Yet, Nangia and Bowman (2018) has recently shown that the current best systems fail to learn the correct parsing strategy on mathematical expressions generated from a simple context-free grammar. In this work, we present a recursive model inspired by Choi et al. (2018) that reaches near perfect accuracy on this task. Our model is composed of two separated modules for syntax and semantics. They are cooperatively trained with standard continuous and discrete optimisation schemes. Our model does not require any linguistic structure for supervision, and its recursive nature allows for out-of-domain generalisation. Additionally, our approach performs competitively on several natural language tasks, such as Natural Language Inference and Sentiment Analysis.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Alexander Koller, Ivan Titov, Wilker Aziz and anonymous reviewers for their helpful suggestions and comments.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"bouma-1992-feature","url":"https:\/\/aclanthology.org\/J92-2003","title":"Feature Structures and Nonmonotonicity","abstract":"Unification-based grammar formalisms use feature structures to represent linguistic knowledge. The only operation defined on feature structures, unification, is information-combining and monotonic. Several authors have proposed nonmonotonic extensions of this formalism, as for a linguistically adequate description of certain natural language phenomena some kind of default reasoning seems essential. We argue that the effect of these proposals can be captured by means of one general, nonmonotonic, operation on feature structures, called default unification. We provide a formal semantics of the operation and demonstrate how some of the phenomena used to motivate nonmonotonic extensions of unification-based formalisms can be handled.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"A syntactic approach to default unification is presented in Bouma (1990) . The reactions on that paper made it clear to me that default unification should be defined not only for feature structure descriptions, but also for feature structures themselves. 
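The default unification operation discussed in the bouma-1992-feature record above (strict information wins; defaults apply only where compatible) can be illustrated on flat attribute-value structures. This is a minimal sketch under that flatness assumption, not the paper's full formal semantics.

```python
FAIL = None

def unify(f, g):
    """Ordinary monotonic unification of flat feature structures."""
    out = dict(f)
    for k, v in g.items():
        if k in out and out[k] != v:
            return FAIL  # clash: unification fails
        out[k] = v
    return out

def default_unify(strict, default):
    """Nonmonotonic variant: add each default feature only if it does
    not conflict with the strict structure."""
    out = dict(strict)
    for k, v in default.items():
        out.setdefault(k, v)
    return out

print(unify({"num": "sg"}, {"per": "3"}))    # {'num': 'sg', 'per': '3'}
print(unify({"num": "sg"}, {"num": "pl"}))   # None (clash)
print(default_unify({"num": "pl"}, {"num": "sg", "per": "3"}))
# {'num': 'pl', 'per': '3'} -- conflicting default is discarded, not fatal
```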
For helpful questions, suggestions, and comments on the material presented here, I would like to thank Bob Carpenter, John Nerbonne, audiences in Tilburg, Groningen, Tübingen, and Düsseldorf, and three anonymous CL reviewers.","year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"etchegoyhen-gete-2020-handle","url":"https:\/\/aclanthology.org\/2020.lrec-1.469","title":"Handle with Care: A Case Study in Comparable Corpora Exploitation for Neural Machine Translation","abstract":"We present the results of a case study in the exploitation of comparable corpora for Neural Machine Translation. A large comparable corpus for Basque-Spanish was prepared, on the basis of independently-produced news by the Basque public broadcaster, and we discuss the impact of various techniques to exploit the original data in order to determine optimal variants of the corpus. In particular, we show that filtering in terms of alignment thresholds and length-difference outliers has a significant impact on translation quality. The impact of tags identifying comparable data in the training datasets is also evaluated, with results indicating that this technique might be useful to help the models discriminate noisy information, in the form of informational imbalance between aligned sentences. The final corpus was prepared according to the experimental results and is made available to the scientific community for research purposes.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the Department of Economic Development and Competitiveness of the Basque Government, via the and projects. We wish to thank the Basque public broadcasting organisation for their support and their willingness to share the corpus with the community.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"lewis-etal-2017-integrating","url":"https:\/\/aclanthology.org\/W17-1607","title":"Integrating the Management of Personal Data Protection and Open Science with Research Ethics","abstract":"This paper examines the impact of the EU General Data Protection Regulation, in the context of the requirement from many research funders to provide open access research data, on current practices in Language Technology Research. We analyse the challenges that arise and the opportunities to address many of them through the use of existing open data practices for sharing language research data. 
We also discuss its impact on current practice in academic and industrial research ethics.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"Supported by the ADAPT Centre for Digital Content Technology which is funded under the SFI Research Centres Programme (Grant 13\/RC\/2106) and is co-funded under the European Regional Development Fund.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} +{"ID":"bird-klein-1994-phonological","url":"https:\/\/aclanthology.org\/J94-3010","title":"Phonological Analysis in Typed Feature Systems","abstract":"Research on constraint-based grammar frameworks has focused on syntax and semantics largely to the exclusion of phonology. Likewise, current developments in phonology have generally ignored the technical and linguistic innovations available in these frameworks. In this paper we suggest some strategies for reuniting phonology and the rest of grammar in the context of a uniform constraint formalism. We explain why this is a desirable goal, and we present some conservative extensions to current practice in computational linguistics and in nonlinear phonology that we believe are necessary and sufficient for achieving this goal. We begin by exploring the application of typed feature logic to phonology and propose a system of prosodic types. Next, taking HPSG as an exemplar of the grammar frameworks we have in mind, we show how the phonology attribute can be enriched so that it can encode multi-tiered, hierarchical phonological representations. Finally, we exemplify the approach in some detail for the nonconcatenative morphology of Sierra Miwok and for schwa alternation in French. The approach taken in this paper lends itself particularly well to capturing phonological generalizations in terms of high-level prosodic constraints.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is funded by the U.K. Science and Engineering Research Council, under grant GR\/G-22084 Computational Phonology: A Constraint-Based Approach, and has been carried out as part of the research program","year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"brodda-1994-automatic","url":"https:\/\/aclanthology.org\/W93-0404","title":"Automatic Tagging of Turns in the London-Lund Corpus with Respect to Type of Turn","abstract":"In this paper a fully automatic tagging system for the dialogue texts in the London-Lund corpus, LLC, will be presented. The units that receive tags are \"turns\"; a collection of (not necessarily connected) tone units-the basic record in the corpus-that one speaker produces while being either the \"floor holder\" or the \"listener\"; the quoted concepts are defined below. The tags constitute a classification of each turn according to \"type of turn\". A little sample of tagged text appears in Appendix 1, and is commented on in the text. 
The texts to be tagged will in the end comprise all the texts in the three subcorpora of LLC appearing in Svartvik & Quirk, \"A Corpus of English Conversation\", (=CEC); so far, about half of these texts have been tagged, now with the programs working properly, the rest will hopefully be tagged before the end of this year.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"islamaj-dogan-etal-2017-biocreative","url":"https:\/\/aclanthology.org\/W17-2321","title":"BioCreative VI Precision Medicine Track: creating a training corpus for mining protein-protein interactions affected by mutations","abstract":"The Precision Medicine Track in BioCreative VI aims to bring together the BioNLP community for a novel challenge focused on mining the biomedical literature in search of mutations and protein-protein interactions (PPI). In order to support this track with an effective training dataset with limited curator time, the track organizers carefully reviewed PubMed articles from two different sources: curated public PPI databases, and the results of state-of-the-art public text mining tools. We detail here the data collection, manual review and annotation process and describe this training corpus characteristics. We also describe a corpus performance baseline. This analysis will provide useful information to developers and researchers for comparing and developing innovative text mining approaches for the BioCreative VI challenge and other Precision Medicine related applications.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"thomas-etal-1998-extracting","url":"https:\/\/aclanthology.org\/W98-1222","title":"Extracting Phoneme Pronunciation Information from Corpora","abstract":"We present a procedure that determines a set of phonemes possibly intended by a speaker from a recognized or uttered phone. This information will be used to allow a speech recognizer to take pronunciation into account or to consider input from a noisy source during lexical access. We investigate the hypothesis that different pronunciations of a phone occur within groups of sounds physically produced the same way, and use the Minimum Message Length principle to consider the effect of a phoneme's context on its pronunciation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors thank Jon Oliver and Chris Wallace for their advice on MML encoding.","year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"mellish-1989-chart","url":"https:\/\/aclanthology.org\/P89-1013","title":"Some Chart-Based Techniques for Parsing Ill-Formed Input","abstract":"We argue for the usefulness of an active chart as the basis of a system that searches for the globally most plausible explanation of failure to syntactically parse a given input. 
We suggest semantics-free, grammar-independent techniques for parsing inputs displaying simple kinds of ill-formedness and discuss the search issues involved.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was done in conjunction with the SERC-supported project GR\/D\/16130. I am currently supported by an SERC Advanced Fellowship.","year":1989,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"graca-2018-unbabel","url":"https:\/\/aclanthology.org\/W18-2103","title":"Unbabel: How to combine AI with the crowd to scale professional-quality translation","abstract":"","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"stenger-etal-2020-incomslav","url":"https:\/\/aclanthology.org\/2020.cllrd-1.6","title":"The INCOMSLAV Platform: Experimental Website with Integrated Methods for Measuring Linguistic Distances and Asymmetries in Receptive Multilingualism","abstract":"We report on a web-based resource for conducting intercomprehension experiments with native speakers of Slavic languages and present our methods for measuring linguistic distances and asymmetries in receptive multilingualism. Through a website which serves as a platform for online testing, a large number of participants with different linguistic backgrounds can be targeted. A statistical language model is used to measure information density and to gauge how language users master various degrees of (un)intelligibilty. The key idea is that intercomprehension should be better when the model adapted for understanding the unknown language exhibits relatively low average distance and surprisal. All obtained intelligibility scores together with distance and asymmetry measures for the different language pairs and processing directions are made available as an integrated online resource in the form of a Slavic intercomprehension matrix (SlavMatrix).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We wish to thank Hasan Alam for his support in the implementation of the SlavMatrix. This work has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -Project-ID 232722074 -SFB 1102.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"kaji-kitsuregawa-2007-building","url":"https:\/\/aclanthology.org\/D07-1115","title":"Building Lexicon for Sentiment Analysis from Massive Collection of HTML Documents","abstract":"Recognizing polarity requires a list of polar words and phrases. For the purpose of building such a lexicon automatically, many studies have investigated (semi-)unsupervised methods of learning the polarity of words and phrases. 
In this paper, we explore the use of structural clues to extract polar sentences from Japanese HTML documents, and build a lexicon from the extracted polar sentences. The key idea is to design the structural clues so that they achieve extremely high precision at the cost of recall. To compensate for the low recall, we used a massive collection of HTML documents. Thus, we could prepare a sufficiently large corpus of polar sentences.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"bouscarrat-etal-2021-amu","url":"https:\/\/aclanthology.org\/2021.case-1.21","title":"AMU-EURANOVA at CASE 2021 Task 1: Assessing the stability of multilingual BERT","abstract":"This paper explains our participation in task 1 of the CASE 2021 shared task. This task is about multilingual event extraction from news. We focused on sub-task 4, event information extraction. This sub-task has a small training dataset and we fine-tuned a multilingual BERT to solve this sub-task. We studied the instability problem on the dataset and tried to mitigate it.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Damien Fourrure, Arnaud Jacques, Guillaume Stempfel and our anonymous reviewers for their helpful comments.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"zhu-etal-2020-crosswoz","url":"https:\/\/aclanthology.org\/2020.tacl-1.19","title":"CrossWOZ: A Large-Scale Chinese Cross-Domain Task-Oriented Dialogue Dataset","abstract":"To advance multi-domain (cross-domain) dialogue modeling as well as alleviate the shortage of Chinese task-oriented datasets, we propose CrossWOZ, the first large-scale Chinese Cross-Domain Wizard-of-Oz task-oriented dataset. It contains 6K dialogue sessions and 102K utterances for 5 domains, including hotel, restaurant, attraction, metro, and taxi. Moreover, the corpus contains rich annotation of dialogue states and dialogue acts on both user and system sides. About 60% of the dialogues have cross-domain user goals that favor inter-domain dependency and encourage natural transition across domains in conversation. We also provide a user simulator and several benchmark models for pipelined task-oriented dialogue systems, which will facilitate researchers to compare and evaluate their models on this corpus. The large size and rich annotation of CrossWOZ make it suitable to investigate a variety of tasks in cross-domain dialogue modeling, such as dialogue state tracking, policy learning, user simulation, etc.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the National Science Foundation of China (grant no. 61936010\/61876096) and the National Key R&D Program of China (grant no. 2018YFC0830200). We would like to thank THUNUS NExT JointLab for the support. We would also like to thank Ryuichi Takanobu and Fei Mi for their constructive comments. 
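The high-precision structural clues described in the kaji-kitsuregawa-2007-building abstract above can be illustrated with a small extractor. The pros\/cons list markers below are an assumed example of such a clue, not the paper's actual pattern set.

```python
import re

# Assumed clue: review pages often mark pros/cons items with explicit symbols.
POS_CLUE = re.compile(r"<li>\s*\+\s*(.+?)</li>")   # e.g. "+ screen is bright"
NEG_CLUE = re.compile(r"<li>\s*-\s*(.+?)</li>")    # e.g. "- speakers are weak"

def extract_polar_sentences(html):
    """Harvest sentences whose polarity is signalled by page structure.
    Precision is high because the clue is unambiguous; recall is low,
    which a massive document collection is meant to compensate for."""
    pos = [m.group(1).strip() for m in POS_CLUE.finditer(html)]
    neg = [m.group(1).strip() for m in NEG_CLUE.finditer(html)]
    return pos, neg

html = "<ul><li>+ screen is bright</li><li>- speakers are weak</li></ul>"
print(extract_polar_sentences(html))
# (['screen is bright'], ['speakers are weak'])
```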
We are grateful to our action editor, Bonnie Webber, and the anonymous reviewers for their valuable suggestions and feedback.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"liu-etal-2010-improving-statistical","url":"https:\/\/aclanthology.org\/P10-1085","title":"Improving Statistical Machine Translation with Monolingual Collocation","abstract":"This paper proposes to use monolingual collocations to improve Statistical Machine Translation (SMT). We make use of the collocation probabilities, which are estimated from monolingual corpora, in two aspects, namely improving word alignment for various kinds of SMT systems and improving phrase table for phrase-based SMT. The experimental results show that our method improves the performance of both word alignment and translation quality significantly. As compared to baseline systems, we achieve absolute improvements of 2.40 BLEU score on a phrase-based SMT system and 1.76 BLEU score on a parsing-based SMT system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"nn-1983-center","url":"https:\/\/aclanthology.org\/J83-1006","title":"Center for the Study of Language and Information","abstract":"It's a pleasure to assume the editorship of The FINITE STRING, since it is such an important resource for our discipline and its community of researchers.\nThe success of The FINITE STRING depends on two factors:","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1983,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"mckeown-2005-text","url":"https:\/\/aclanthology.org\/U05-1002","title":"Text Summarization: News and Beyond","abstract":"Redundancy in large text collections, such as the web, creates both problems and opportunities for natural language systems. On the one hand, the presence of numerous sources conveying the same information causes difficulties for end users of search engines and news providers; they must read the same information over and over again. On the other hand, redundancy can be exploited to identify important and accurate information for applications such as summarization and question answering.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"nn-1990-coling","url":"https:\/\/aclanthology.org\/C90-1026","title":"COLING 90: Contents in Volumes 1-3","abstract":"The papers in each category are sorted alphabetically according to the name of the first author. The subdivision into volumes has no deep interpretation. Its sole purpose was to free Coling participants from carrying all three volumes around at all times. For convenient overview and retrieval, the titels of some papers listed below have been abridged by the editor. When quoted, each paper should preferably be cited with the heading given at the top of the paper. 
No attempts have been made to normalize the name forms of the authors. Spelling and transcription have been retained as used by the authors.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1990,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"sager-1981-types","url":"https:\/\/aclanthology.org\/1981.tc-1.2","title":"Types of translation and text forms in the environment of machine translation (MT)","abstract":"Human translation consists of a number of separate steps which begin with the identification of the text type, the purpose and intention of the text, the subject area, etc. As there are types of texts there are also types of translation, which do not necessarily match directly. Since the human and machine translation processes differ so must the criteria which determine translatability. What criteria are relevant for MT and can they be derived from observations of the human effort?","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1981,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"molla-etal-2007-named","url":"https:\/\/aclanthology.org\/U07-1010","title":"Named Entity Recognition in Question Answering of Speech Data","abstract":"Question answering on speech transcripts (QAst) is a pilot track of the CLEF competition. In this paper we present our contribution to QAst, which is centred on a study of Named Entity (NE) recognition on speech transcripts, and how it impacts on the accuracy of the final question answering system. We have ported AFNER, the NE recogniser of the AnswerFinder question-answering project, to the set of answer types expected in the QAst track. AFNER uses a combination of regular expressions, lists of names (gazetteers) and machine learning to find NEs in the data. The machine learning component was trained on a development set of the AMI corpus. In the process we identified various problems with scalability of the system and the existence of errors of the extracted annotation, which led to relatively poor performance in general. Performance was yet comparable with state of the art, and the system was second (out of three participants) in one of the QAst subtasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"vitorio-etal-2017-investigating","url":"https:\/\/aclanthology.org\/W17-6607","title":"Investigating Opinion Mining through Language Varieties: a Case Study of Brazilian and European Portuguese tweets","abstract":"Portuguese is a pluricentric language comprising variants that differ from each other in different linguistic levels. It is generally agreed that applying text mining resources developed for one specific variant may produce a different result in another variant, but very little research has been done to measure this difference. This study presents an analysis of opinion mining application when dealing with the two main Portuguese language variants: Brazilian and European. 
According to the experiments, it was observed that the differences between the Portuguese variants reflect on the application results. The use of a variant for training and another for testing brings a substantial performance drop, but the separation of the variants may not be recommended.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"xiao-etal-2007-empirical","url":"https:\/\/aclanthology.org\/O07-4002","title":"An Empirical Study of Non-Stationary Ngram Model and its Smoothing Techniques","abstract":"Recently many new techniques have been proposed for language modeling, such as ME, MEMM and CRF. However, the ngram model is still a staple in practical applications. It is well worthy of studying how to improve the performance of the ngram model. This paper enhances the traditional ngram model by relaxing the stationary hypothesis on the Markov chain and exploiting the word positional information. Such an assumption is made that the probability of the current word is determined not only by history words but also by the words positions in the sentence. The non-stationary ngram model (NS ngram model) is proposed. Several related issues are discussed in detail, including the definition of the NS ngram model, the representation of the word positional information and the estimation of the conditional probability. In addition, three smoothing approaches are proposed to solve the data sparseness problem of the NS ngram model. Several smoothing algorithms are presented in each approach. In the experiments, the NS ngram model is evaluated on the pinyin-to-character conversion task which is the core technique of the Chinese text input method. Experimental results show that the NS ngram model outperforms the traditional ngram model significantly by the exploitation of the word positional information. In addition, the proposed smoothing techniques solve the data sparseness problem of the NS ngram model effectively with great error rate reduction.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This investigation was supported by the key project of the National Natural Science We especially thank the anonymous reviewers for their valuable suggestions and comments.","year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"zhang-etal-2012-learning","url":"https:\/\/aclanthology.org\/D12-1125","title":"Learning to Map into a Universal POS Tagset","abstract":"We present an automatic method for mapping language-specific part-of-speech tags to a set of universal tags. This unified representation plays a crucial role in cross-lingual syntactic transfer of multilingual dependency parsers. Until now, however, such conversion schemes have been created manually. Our central hypothesis is that a valid mapping yields POS annotations with coherent linguistic properties which are consistent across source and target languages. We encode this intuition in an objective function that captures a range of distributional and typological characteristics of the derived mapping. 
Given the exponential size of the mapping space, we propose a novel method for optimizing over soft mappings, and use entropy regularization to drive those towards hard mappings. Our results demonstrate that automatically induced mappings rival the quality of their manually designed counterparts when evaluated in the context of multilingual parsing. 1","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors acknowledge the support of the NSF (IIS-0835445), the MURI program (W911NF-10-1-0533) and the DARPA BOLT program. We thank Tommi Jaakkola, the members of the MIT NLP group and the ACL reviewers for their suggestions and comments. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"chronopoulou-etal-2020-lmu","url":"https:\/\/aclanthology.org\/2020.wmt-1.128","title":"The LMU Munich System for the WMT 2020 Unsupervised Machine Translation Shared Task","abstract":"This paper describes the submission of LMU Munich to the WMT 2020 unsupervised shared task, in two language directions, German\u2194Upper Sorbian. Our core unsupervised neural machine translation (UNMT) system follows the strategy of Chronopoulou et al. (2020), using a monolingual pretrained language generation model (on German) and finetuning it on both German and Upper Sorbian, before initializing a UNMT model, which is trained with online backtranslation. Pseudoparallel data obtained from an unsupervised statistical machine translation (USMT) system is used to fine-tune the UNMT model. We also apply BPE-Dropout to the low-resource (Upper Sorbian) data to obtain a more robust system. We additionally experiment with residual adapters and find them useful in the Upper Sorbian\u2192German direction. We explore sampling during backtranslation and curriculum learning to use SMT translations in a more principled way. Finally, we ensemble our bestperforming systems and reach a BLEU score of 32.4 on German\u2192Upper Sorbian and 35.2 on Upper Sorbian\u2192German.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 640550) and by the German Research Foundation (DFG; grant FR 2829\/4-1). We would like to thank Jind\u0159ich Libovick\u00fd for fruitful discussions regarding the use of BPE-Dropout as a data augmentation technique.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"yoon-etal-2017-adullam","url":"https:\/\/aclanthology.org\/S17-2123","title":"Adullam at SemEval-2017 Task 4: Sentiment Analyzer Using Lexicon Integrated Convolutional Neural Networks with Attention","abstract":"We propose a sentiment analyzer for the prediction of document-level sentiments of English micro-blog messages from Twitter. The proposed method is based on lexicon integrated convolutional neural networks with attention (LCA). Its performance was evaluated using the datasets provided by SemEval competition (Task 4). 
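The optimization described in the zhang-etal-2012-learning record above (optimize over soft tag mappings, with entropy regularization driving them towards hard mappings) can be sketched as follows. The task term below is a stand-in affinity objective, not the paper's full set of distributional and typological characteristics.

```python
import torch

n_src, n_uni = 12, 5                        # language-specific -> universal tags
logits = torch.zeros(n_src, n_uni, requires_grad=True)
target_affinity = torch.rand(n_src, n_uni)  # stand-in for the real objective
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(200):
    m = logits.softmax(dim=1)               # soft mapping: rows are distributions
    task_term = -(m * target_affinity).sum()
    entropy = -(m * (m + 1e-9).log()).sum() # row entropies of the mapping
    loss = task_term + 0.5 * entropy        # penalizing entropy hardens the rows
    opt.zero_grad()
    loss.backward()
    opt.step()

# Near-one-hot rows can now be read off as a hard mapping.
hard = logits.softmax(dim=1).argmax(dim=1)
print(hard)
```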
The proposed sentiment analyzer obtained an average F1 of 55.2%, an average recall of 58.9% and an accuracy of 61.4%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2017R1A2B4003558).","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"xu-etal-2021-temporal","url":"https:\/\/aclanthology.org\/2021.naacl-main.202","title":"Temporal Knowledge Graph Completion using a Linear Temporal Regularizer and Multivector Embeddings","abstract":"Representation learning approaches for knowledge graphs have been mostly designed for static data. However, many knowledge graphs involve evolving data, e.g., the fact (The President of the United States is Barack Obama) is valid only from 2009 to 2017. This introduces important challenges for knowledge representation learning since the knowledge graphs change over time. In this paper, we present a novel time-aware knowledge graph embedding approach, TeLM, which performs 4th-order tensor factorization of a Temporal knowledge graph using a Linear temporal regularizer and Multivector embeddings. Moreover, we investigate the effect of the temporal dataset's time granularity on temporal knowledge graph completion. Experimental results demonstrate that our proposed models trained with the linear temporal regularizer achieve state-of-the-art performance on link prediction over four well-established temporal knowledge graph completion benchmarks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by the EC Horizon 2020 grant LAMBDA (GA no. 809965), the CLEOPATRA project (GA no. 812997) and the China Scholarship Council (CSC).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"bawden-etal-2020-findings","url":"https:\/\/aclanthology.org\/2020.wmt-1.76","title":"Findings of the WMT 2020 Biomedical Translation Shared Task: Basque, Italian and Russian as New Additional Languages","abstract":"Machine translation of scientific abstracts and terminologies has the potential to support health professionals and biomedical researchers in some of their activities. In the fifth edition of the WMT Biomedical Task, we addressed a total of eight language pairs. Five language pairs were previously addressed in past editions of the shared task, namely","label_nlp4sg":1,"task":[],"method":[],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We would like to thank all participants in the challenges, and especially those who supported us for the manual evaluation. As a reference, one of the participating systems (UTS_NLP) was able to re-run their system over the real test set.
The performance drop was 0.08 for accuracy (from 0.73 to 0.65), and 0.05 for BLEU (from 0.71 to 0.66).","year":2020,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"sun-etal-2009-prediction","url":"https:\/\/aclanthology.org\/P09-2064","title":"Prediction of Thematic Rank for Structured Semantic Role Labeling","abstract":"In Semantic Role Labeling (SRL), it is reasonable to globally assign semantic roles due to strong dependencies among arguments. Some relations between arguments significantly characterize the structural information of argument structure. In this paper, we concentrate on thematic hierarchy, which is a rank relation restricting syntactic realization of arguments. A log-linear model is proposed to accurately identify the thematic rank between two arguments. To import structural information, we employ a re-ranking technique to incorporate thematic rank relations into local semantic role classification results. Experimental results show that automatic prediction of thematic hierarchy can help semantic role classification.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by NSFC Project 60873156, 863 High Technology Project of China 2006AA01Z144 and the project of Toshiba (China) Co., Ltd. R&D Center.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"akbik-vollgraf-2018-zap","url":"https:\/\/aclanthology.org\/L18-1344","title":"ZAP: An Open-Source Multilingual Annotation Projection Framework","abstract":"Previous work leveraged annotation projection as a convenient method to automatically generate linguistic resources such as treebanks or propbanks for new languages. This approach automatically transfers linguistic annotation from a resource-rich source language (SL) to translations in a target language (TL). However, to the best of our knowledge, no publicly available framework for this approach currently exists, limiting researchers' ability to reproduce and compare experiments. In this paper, we present ZAP, the first open-source framework for annotation projection in parallel corpora. Our framework is Java-based and includes methods for preprocessing corpora, computing word-alignments between sentence pairs, transferring different layers of linguistic annotation, and visualization. The framework was designed for ease-of-use with lightweight APIs. We give an overview of ZAP and illustrate its usage.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their helpful comments.
This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no 732328 (\"FashionBrain\").","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"gustafson-capkova-2001-interaction","url":"https:\/\/aclanthology.org\/W01-1704","title":"The interaction between local focusing structure and global intentions in spoken discourse","abstract":"The purpose of the study reported in this paper is to investigate how local focusing structure, analysed in terms of Centering Theory (Grosz, Joshi & Weinstein, 1995), and global discourse structure, analysed in terms of discourse segments and discourse segment purposes (Grosz & Sidner, 1986), interact. Swedish dialogue was analysed according to Centering Theory and Grosz and Sidner's (1986) discourse theory. The results indicate an interaction between locally implicit elements and global intentions. Indications concerning the varying intonation of discourse markers were also found.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2001,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"chen-etal-2020-ferryman","url":"https:\/\/aclanthology.org\/2020.semeval-1.35","title":"Ferryman at SemEval-2020 Task 3: Bert with TFIDF-Weighting for Predicting the Effect of Context in Word Similarity","abstract":"Word similarity is widely used in machine learning applications like search engines and recommendation. Measuring the changing meaning of the same word between two different sentences is not only a way to handle complex features in word usage (such as sentence syntax and semantics), but also an important method for different word polysemy modeling. In this paper, we present the methodology proposed by team Ferryman. Our system is based on the Bidirectional Encoder Representations from Transformers (BERT) model combined with term frequency-inverse document frequency (TF-IDF), applying the method on the provided dataset called CoSimLex, which covers four different languages including English, Croatian, Slovene, and Finnish. Our team Ferryman wins the first position for the English task and the second position for Finnish in subtask 1.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"davoodi-etal-2022-modeling","url":"https:\/\/aclanthology.org\/2022.acl-long.22","title":"Modeling U.S. State-Level Policies by Extracting Winners and Losers from Legislative Texts","abstract":"Decisions on state-level policies have a deep effect on many aspects of our everyday life, such as health-care and education access. However, there is little understanding of how these policies and decisions are being formed in the legislative process. We take a data-driven approach by decoding the impact of legislation on relevant stakeholders (e.g., teachers in education bills) to understand legislators' decision-making process and votes. We build a new dataset for multiple US states that interconnects multiple sources of data including bills, stakeholders, legislators, and money donors.
Next, we develop a textual graph-based model to embed and analyze state bills. Our model predicts winners\/losers of bills and then utilizes them to better determine the legislative body's vote breakdown according to demographic\/ideological criteria, e.g., gender.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"We would like to acknowledge the members of the PurdueNLP lab. We also thank the reviewers for their constructive feedback. The funding for the use of mTurk was part of the Purdue University Integrative Data Science Initiative: Data Science for Ethics, Society, and Policy Focus Area. This work was partially supported by an NSF CAREER award IIS-2048001.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} +{"ID":"egan-2012-machine","url":"https:\/\/aclanthology.org\/2012.amta-government.5","title":"Machine Translation Revisited: An Operational Reality Check","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"banchs-li-2012-iris","url":"https:\/\/aclanthology.org\/P12-3007","title":"IRIS: a Chat-oriented Dialogue System based on the Vector Space Model","abstract":"This system demonstration paper presents IRIS (Informal Response Interactive System), a chat-oriented dialogue system based on the vector space model framework. The system belongs to the class of example-based dialogue systems and builds its chat capabilities on a dual search strategy over a large collection of dialogue samples. Additional strategies allowing for system adaptation and learning implemented over the same vector model space framework are also described and discussed.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank the Institute for Infocomm Research for its support and permission to publish this work.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"habernal-gurevych-2016-argument","url":"https:\/\/aclanthology.org\/P16-1150","title":"Which argument is more convincing? Analyzing and predicting convincingness of Web arguments using bidirectional LSTM","abstract":"We propose a new task in the field of computational argumentation in which we investigate qualitative properties of Web arguments, namely their convincingness. We cast the problem as relation classification, where a pair of arguments having the same stance to the same prompt is judged. We annotate a large dataset of 16k pairs of arguments over 32 topics and investigate whether the relation \"A is more convincing than B\" exhibits properties of total ordering; these findings are used as global constraints for cleaning the crowdsourced data. We propose two tasks: (1) predicting which argument from an argument pair is more convincing and (2) ranking all arguments for the topic based on their convincingness. We experiment with feature-rich SVM and bidirectional LSTM and obtain 0.76-0.78 accuracy and 0.35-0.40 Spearman's correlation in a cross-topic evaluation.
We release the newly created corpus UKPConvArg1 and the experimental software under open licenses.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been supported by the Volkswagen Foundation as part of the Lichtenberg-Professorship Program under grant No. I\/82806, by the German Institute for Educational Research (DIPF), by the German Research Foundation (DFG) via the German-Israeli Project Cooperation (DIP, grant DA 1600\/1-1), by the GRK 1994 AIPHES (DFG), and by Amazon Web Services in Education Grant award. Lastly, we would like to thank the anonymous reviewers for their valuable feedback.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"zhou-etal-2021-commonsense","url":"https:\/\/aclanthology.org\/2021.sigdial-1.13","title":"Commonsense-Focused Dialogues for Response Generation: An Empirical Study","abstract":"Smooth and effective communication requires the ability to perform latent or explicit commonsense inference. Prior commonsense reasoning benchmarks (such as SocialIQA and CommonsenseQA) mainly focus on the discriminative task of choosing the right answer from a set of candidates, and do not involve interactive language generation as in dialogue. Moreover, existing dialogue datasets do not explicitly focus on exhibiting commonsense as a facet. In this paper, we present an empirical study of commonsense in dialogue response generation. We first auto-extract commonsensical dialogues from existing dialogue datasets by leveraging ConceptNet, a commonsense knowledge graph. Furthermore, building on social contexts\/situations in SocialIQA, we collect a new dialogue dataset with 25K dialogues aimed at exhibiting social commonsense in an interactive setting. We evaluate response generation models trained using these datasets and find that models trained on both extracted and our collected data produce responses that consistently exhibit more commonsense than baselines. Finally, we propose an approach for automatic evaluation of commonsense that relies on features derived from ConceptNet and pretrained language and dialog models, and show reasonable correlation with human evaluation of responses' commonsense quality. (Work done while Pei Zhou was an intern at Amazon Alexa AI. Data and code will be released soon.)","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"perera-etal-2018-building","url":"https:\/\/aclanthology.org\/W18-1402","title":"Building and Learning Structures in a Situated Blocks World Through Deep Language Understanding","abstract":"We demonstrate a system for understanding natural language utterances for structure description and placement in a situated blocks world context. By relying on a rich, domain-specific adaptation of a generic ontology and a logical form structure produced by a semantic parser, we obviate the need for an intermediate, domain-specific representation and can produce a reasoner that grounds and reasons over concepts and constraints with real-valued data. This linguistic base enables more flexibility in interpreting natural language expressions invoking intrinsic concepts and features of structures and space.
We demonstrate some of the capabilities of a system grounded in deep language understanding and present initial results in a structure learning task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by the DARPA CwC program and the DARPA Big Mechanism program under ARO contract W911NF-14-1-0391. Special thanks to SRI for their work in developing the physical apparatus, including block detection and avatar software.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"mclaughlin-schwall-1998-horses","url":"https:\/\/aclanthology.org\/1998.tc-1.10","title":"Horses for Courses: Changing User Acceptance of Machine Translation","abstract":"The key to Machine Translation becoming a commonplace technology is user acceptance. Unfortunately, the decision whether or not to use Machine Translation is often made on the basis of output quality alone. As we all know, Machine Translation output is far from perfect, and its quality depends on a wide range of factors related to individual users, the environment in which they work, and the text types they work with, factors which are difficult and arduous to evaluate. Although output quality obviously plays an important role, it is not the only factor in user acceptance, and for some potential users it may not even be the most important one. User perception of Machine Translation is a decisive issue, and MT must be seen not as a universal translation solution but as one of several potential tools, not in isolation but within the context of the user's work processes. This has important implications for Machine Translation vendors. It means that Machine Translation shouldn't be offered in isolation. Depending on the product\/target group, it must be combined with other tools and\/or combined with other services (postediting\/human translation). Products must also be scaled to the user's purse and environment, the entry threshold must be low, and products must be upgradeable as the user's needs change. It must be easy to access and use Machine Translation: complicated access to Machine Translation and arduous preprocessing activities will make it a non-starter for many people. What's more, Machine Translation must be available when and where the user needs it, whatever the application.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"rosset-etal-2013-automatic","url":"https:\/\/aclanthology.org\/W13-2321","title":"Automatic Named Entity Pre-annotation for Out-of-domain Human Annotation","abstract":"Automatic pre-annotation is often used to improve human annotation speed and accuracy. We address here out-of-domain named entity annotation, and examine whether automatic pre-annotation is still beneficial in this setting. Our study design includes two different corpora, three pre-annotation schemes linked to two annotation levels, both expert and novice annotators, a questionnaire-based subjective assessment and a corpus-based quantitative assessment.
We observe that pre-annotation helps in all cases, both for speed and for accuracy, and that the subjective assessment of the annotators does not always match the actual benefits measured in the annotation outcome.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been partially funded by OSEO under the Quaero program and by the French ANR VERA project.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"jain-lapata-2021-memory","url":"https:\/\/aclanthology.org\/2021.tacl-1.71","title":"Memory-Based Semantic Parsing","abstract":"We present a memory-based model for context-dependent semantic parsing. Previous approaches focus on enabling the decoder to copy or modify the parse from the previous utterance, assuming there is a dependency between the current and previous parses. In this work, we propose to represent contextual information using an external memory. We learn a context memory controller that manages the memory by maintaining the cumulative meaning of sequential user utterances. We evaluate our approach on three semantic parsing benchmarks. Experimental results show that our model can better process context-dependent information and demonstrates improved performance without using task-specific decoders.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Mike Lewis, Miguel Ballesteros, and our anonymous reviewers for their feedback. We are grateful to Alex Lascarides and Ivan Titov for their comments on the paper. This work was supported in part by Huawei and the UKRI Centre for Doctoral Training in Natural Language Processing (grant EP\/S022481\/1). Lapata acknowledges the support of the European Research Council (award number 681760, ''Translating Multiple Modalities into Text'').","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"de-marneffe-etal-2010-good","url":"https:\/\/aclanthology.org\/P10-1018","title":"``Was It Good? It Was Provocative.'' Learning the Meaning of Scalar Adjectives","abstract":"Texts and dialogues often express information indirectly. For instance, speakers' answers to yes\/no questions do not always straightforwardly convey a 'yes' or 'no' answer. The intended reply is clear in some cases (Was it good? It was great!) but uncertain in others (Was it acceptable? It was unprecedented.). In this paper, we present methods for interpreting the answers to questions like these which involve scalar modifiers. We show how to ground scalar modifier meaning based on data collected from the Web. We learn scales between modifiers and infer the extent to which a given answer conveys 'yes' or 'no'. To evaluate the methods, we collected examples of question-answer pairs involving scalar modifiers from CNN transcripts and the Dialog Act corpus and use response distributions from Mechanical Turk workers to assess the degree to which each answer conveys 'yes' or 'no'.
Our experimental results closely match the Turkers' response data, demonstrating that meanings can be learned from Web data and that such meanings can drive pragmatic inference.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This paper is based on work funded in part by ONR award N00014-10-1-0109 and ARO MURI award 548106, as well as by the Air Force Research Laboratory (AFRL) under prime contract no. FA8750-09-C-0181. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the Air Force Research Laboratory (AFRL), ARO or ONR.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"miyazawa-etal-1999-study","url":"https:\/\/aclanthology.org\/1999.mtsummit-1.43","title":"Study on evaluation of WWW MT systems","abstract":"Compared with off-line machine translation (MT), MT for the WWW has more evaluation factors, such as the translation accuracy of text, interpretation of HTML tags, consistency with various protocols and browsers, and translation speed for net surfing. Moreover, the speed of technical innovation and its practical application is fast, including the appearance of new protocols. Improvement of MT software for the WWW will enable the sharing of information from around the world and make a great deal of contribution to mankind. Despite the importance of general evaluation studies on MT software for the WWW, it appears that such studies have not yet been conducted. Since MT for the WWW will be a critical factor for future international communication, its study and evaluation is an important theme. This study aims at standardized evaluation of MT for the WWW and suggests an evaluation method focusing on unique aspects of the WWW independent of text. This evaluation method has a wide range of applicability without depending on specific languages. Twenty-four items specific to the WWW were actually evaluated with regard to six MT software products for the WWW. This study clarified various issues which should be improved in the future regarding MT software for the WWW and issues on evaluation technology of MT on the Internet.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1999,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"xiao-etal-2021-end","url":"https:\/\/aclanthology.org\/2021.emnlp-main.280","title":"End-to-End Conversational Search for Online Shopping with Utterance Transfer","abstract":"Successful conversational search systems can present a natural, adaptive and interactive shopping experience for online shopping customers. However, building such systems from scratch faces real-world challenges from both imperfect product schema\/knowledge and a lack of training dialog data. In this work we first propose ConvSearch, an end-to-end conversational search system that deeply combines the dialog system with search. It leverages the text profile to retrieve products, which is more robust against imperfect product schema\/knowledge compared with using product attributes alone.
We then address the lack-of-data challenge by proposing an utterance transfer approach that generates dialogue utterances by using existing dialog from other domains, and leveraging the search behavior data from an e-commerce retailer. With utterance transfer, we introduce a new conversational search dataset for online shopping. Experiments show that our utterance transfer method can significantly improve the availability of training dialogue data without crowd-sourcing, and the conversational search system significantly outperformed the best tested baseline.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"nakazawa-kurohashi-2009-statistical","url":"https:\/\/aclanthology.org\/W09-2302","title":"Statistical Phrase Alignment Model Using Dependency Relation Probability","abstract":"When aligning very different language pairs, the most important needs are the use of structural information and the capability of generating one-to-many or many-to-many correspondences. In this paper, we propose a novel phrase alignment method which models word or phrase dependency relations in dependency tree structures of source and target languages. The dependency relation model is a kind of tree-based reordering model, and can handle non-local reorderings which sequential word-based models often cannot handle properly. The model is also capable of estimating phrase correspondences automatically without any heuristic rules. Experimental results of alignment show that our model could achieve an F-measure 1.7 points higher than the conventional word alignment model with symmetrization algorithms.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"wang-etal-2019-vizseq","url":"https:\/\/aclanthology.org\/D19-3043","title":"VizSeq: a visual analysis toolkit for text generation tasks","abstract":"Automatic evaluation of text generation tasks (e.g. machine translation, text summarization, image captioning and video description) usually relies heavily on task-specific metrics, such as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004). They, however, are abstract numbers and are not perfectly aligned with human assessment. This suggests inspecting detailed examples as a complement to identify system error patterns. In this paper, we present VizSeq, a visual analysis toolkit for instance-level and corpus-level system evaluation on a wide variety of text generation tasks. It supports multimodal sources and multiple text references, providing visualization in Jupyter notebook or a web app interface. It can be used locally or deployed onto public servers for centralized data hosting and benchmarking. It covers most common n-gram based metrics accelerated with multiprocessing, and also provides the latest embedding-based metrics such as BERTScore (Zhang et al., 2019).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their comments.
We also thank Ann Lee and Pratik Ringshia for helpful discussions on this project.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"kraft-etal-2016-embedding","url":"https:\/\/aclanthology.org\/D16-1221","title":"An Embedding Model for Predicting Roll-Call Votes","abstract":"We develop a novel embedding-based model for predicting legislative roll-call votes from bill text. The model introduces multidimensional ideal vectors for legislators as an alternative to single dimensional ideal point models for quantitatively analyzing roll-call data. These vectors are learned to correspond with pre-trained word embeddings which allows us to analyze which features in a bill text are most predictive of political support. Our model is quite simple, while at the same time allowing us to successfully predict legislator votes on specific bills with higher accuracy than past methods.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} +{"ID":"charbonnier-wartena-2018-using","url":"https:\/\/aclanthology.org\/C18-1221","title":"Using Word Embeddings for Unsupervised Acronym Disambiguation","abstract":"Scientific papers from all disciplines contain many abbreviations and acronyms. In many cases these acronyms are ambiguous. We present a method to choose the contextually correct definition of an acronym that does not require training for each acronym and thus can be applied to a large number of different acronyms with only a few instances. We constructed a set of 19,954 examples of 4,365 ambiguous acronyms from image captions in scientific papers along with their contextually correct definition from different domains. We learn word embeddings for all words in the corpus and compare the averaged context vector of the words in the expansion of an acronym with the weighted average vector of the words in the context of the acronym. We show that this method clearly outperforms (classical) cosine similarity. Furthermore, we show that word embeddings learned from a 1 billion word corpus of scientific texts outperform word embeddings learned from much larger general corpora.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"ursini-akagi-2011-interpretation","url":"https:\/\/aclanthology.org\/U11-1018","title":"The Interpretation of Plural Pronouns in Discourse: The Case of They","abstract":"This paper presents an experimental study on the interpretation of the plural pronoun they in discourse, and offers an answer to two questions. The first question is whether the anaphoric interpretation of they corresponds to that of its antecedent NP (maximal interpretation) or to the \"whole\" previous sentence (reference interpretation). The second question is whether speakers may access only one interpretation or both, although at different \"moments\" in discourse. The answers to these questions suggest that an accurate logical and psychological model of anaphora resolution includes two principles.
A first principle finds a \"default\" interpretation; a second principle determines when the \"alternative\" interpretation can (and must) be accessed.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"brooke-etal-2017-unsupervised","url":"https:\/\/aclanthology.org\/Q17-1032","title":"Unsupervised Acquisition of Comprehensive Multiword Lexicons using Competition in an n-gram Lattice","abstract":"We present a new model for acquiring comprehensive multiword lexicons from large corpora based on competition among n-gram candidates. In contrast to the standard approach of simple ranking by association measure, in our model n-grams are arranged in a lattice structure based on subsumption and overlap relationships, with nodes inhibiting other nodes in their vicinity when they are selected as a lexical item. We show how the configuration of such a lattice can be optimized tractably, and demonstrate using annotations of sampled n-grams that our method consistently outperforms alternatives by at least 0.05 F-score across several corpora and languages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The second author was supported by an Endeavour Research Fellowship from the Australian Government, and in part by the Croatian Science Foundation under project UIP-2014-09-7312. We would also like to thank our English, Japanese, and Croatian annotators, and the TACL reviewers and editors for helping shape this paper into its current form.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"stathopoulos-teufel-2015-retrieval","url":"https:\/\/aclanthology.org\/P15-2055","title":"Retrieval of Research-level Mathematical Information Needs: A Test Collection and Technical Terminology Experiment","abstract":"In this paper, we present a test collection for mathematical information retrieval composed of real-life, research-level mathematical information needs. Topics and relevance judgements have been procured from the on-line collaboration website MathOverflow by delegating domain-specific decisions to experts on-line. With our test collection, we construct a baseline using Lucene's vector-space model implementation and conduct an experiment to investigate how prior extraction of technical terms from mathematical text can affect retrieval efficiency. We show that by boosting the importance of technical terms, statistically significant improvements in retrieval performance can be obtained over the baseline.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"collins-2002-ranking","url":"https:\/\/aclanthology.org\/P02-1062","title":"Ranking Algorithms for Named Entity Extraction: Boosting and the VotedPerceptron","abstract":"This paper describes algorithms which rerank the top N hypotheses from a maximum-entropy tagger, the application being the recovery of named-entity boundaries in a corpus of web data.
The first approach uses a boosting algorithm for ranking problems. The second approach uses the voted perceptron algorithm. Both algorithms give comparable, significant improvements over the maximum-entropy baseline. The voted perceptron algorithm can be considerably more efficient to train, at some cost in computation on test examples.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Many thanks to Jack Minisi for annotating the named-entity data used in the experiments. Thanks also to Nigel Duffy, Rob Schapire and Yoram Singer for several useful discussions.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"thadani-mckeown-2011-towards","url":"https:\/\/aclanthology.org\/W11-1606","title":"Towards Strict Sentence Intersection: Decoding and Evaluation Strategies","abstract":"We examine the task of strict sentence intersection: a variant of sentence fusion in which the output must only contain the information present in all input sentences and nothing more. Our proposed approach involves alignment and generalization over the input sentences to produce a generation lattice; we then compare a standard search-based approach for decoding an intersection from this lattice to an integer linear program that preserves aligned content while minimizing the disfluency in interleaving text segments. In addition, we introduce novel evaluation strategies for intersection problems that employ entailment-style judgments for determining the validity of system-generated intersections. Our experiments show that the proposed models produce valid intersections a majority of the time and that the segmented decoder yields advantages over the search-based approach.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors are grateful to the anonymous reviewers for their helpful feedback. This material is based on research supported in part by the U.S. National Science Foundation (NSF) under IIS-05-34871. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"zhao-etal-2005-bilingual","url":"https:\/\/aclanthology.org\/W05-0804","title":"Bilingual Word Spectral Clustering for Statistical Machine Translation","abstract":"In this paper, a variant of a spectral clustering algorithm is proposed for bilingual word clustering. The proposed algorithm generates the two sets of clusters for both languages efficiently with high semantic correlation within monolingual clusters, and high translation quality across the clusters between two languages. Each cluster level translation is considered as a bilingual concept, which generalizes words in bilingual clusters. This scheme improves the robustness for statistical machine translation models. Two HMM-based translation models are tested to use these bilingual clusters.
Improved perplexity, word alignment accuracy, and translation quality are observed in our experiments.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"lietard-etal-2021-language","url":"https:\/\/aclanthology.org\/2021.blackboxnlp-1.40","title":"Do Language Models Know the Way to Rome?","abstract":"The global geometry of language models is important for a range of applications, but language model probes tend to evaluate rather local relations, for which ground truths are easily obtained. In this paper we exploit the fact that in geography, ground truths are available beyond local relations. In a series of experiments, we evaluate the extent to which language model representations of city and country names are isomorphic to real-world geography, e.g., if you tell a language model where Paris and Berlin are, does it know the way to Rome? We find that language models generally encode limited geographic information, but with larger models performing the best, suggesting that geographic knowledge can be induced from higher-order cooccurrence statistics.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers. Mostafa Abdou was funded by a Google Focused Research Award. We used data created by MaxMind, available from http:\/\/www.maxmind.com\/.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"dcosta-etal-2020-multiple","url":"https:\/\/aclanthology.org\/2020.clinicalnlp-1.2","title":"Multiple Sclerosis Severity Classification From Clinical Text","abstract":"Multiple Sclerosis (MS) is a chronic, inflammatory and degenerative neurological disease, which is monitored by a specialist using the Expanded Disability Status Scale (EDSS) and recorded in unstructured text in the form of a neurology consult note. An EDSS measurement contains an overall 'EDSS' score and several functional subscores. Typically, expert knowledge is required to interpret consult notes and generate these scores. Previous approaches used limited context length Word2Vec embeddings and keyword searches to predict scores given a consult note, but often failed when scores were not explicitly stated. In this work, we present MS-BERT, the first publicly available transformer model trained on real clinical data other than MIMIC. Next, we present MSBC, a classifier that applies MS-BERT to generate embeddings and predict EDSS and functional subscores. Lastly, we explore combining MSBC with other models through the use of Snorkel to generate scores for unlabelled consult notes. MSBC achieves state-of-the-art performance on all metrics and prediction tasks and outperforms the models generated from the Snorkel ensemble. We improve Macro-F1 by 0.12 (to 0.88) for predicting EDSS and on average by 0.29 (to 0.63) for predicting functional subscores over previous Word2Vec CNN and rule-based approaches.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We would like to thank the researchers and staff at the Data Science and Advanced Analytics (DSAA) team at St. 
Michael's Hospital, for providing consistent support and guidance throughout this project. We would also like to thank Dr. Marzyeh Ghassemi and Taylor Killan for providing us the opportunity to work on this exciting project. Lastly, we would like to thank Dr. Tony Antoniou and Dr. Jiwon Oh from the MS clinic at St. Michael's Hospital for their support on the neurological examination notes.","year":2020,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"mishra-etal-2019-modular","url":"https:\/\/aclanthology.org\/D19-1636","title":"A Modular Architecture for Unsupervised Sarcasm Generation","abstract":"In this paper, we propose a novel framework for sarcasm generation; the system takes a literal negative opinion as input and translates it into a sarcastic version. Our framework does not require any paired data for training. Sarcasm emanates from context-incongruity which becomes apparent as the sentence unfolds. Our framework introduces incongruity into the literal input version through modules that: (a) filter factual content from the input opinion, (b) retrieve incongruous phrases related to the filtered facts and (c) synthesize sarcastic text from the filtered and incongruous phrases. The framework employs reinforced neural sequence to sequence learning and information retrieval and is trained only using unlabeled non-sarcastic and sarcastic opinions. Since no labeled dataset exists for such a task, for evaluation, we manually prepare a benchmark dataset containing literal opinions and their sarcastic paraphrases. Qualitative and quantitative performance analyses on the data reveal our system's superiority over baselines, built using known unsupervised statistical and neural machine translation and style transfer techniques.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"cao-etal-2020-unsupervised-dual","url":"https:\/\/aclanthology.org\/2020.acl-main.608","title":"Unsupervised Dual Paraphrasing for Two-stage Semantic Parsing","abstract":"One daunting problem for semantic parsing is the scarcity of annotation. Aiming to reduce nontrivial human labor, we propose a two-stage semantic parsing framework, where the first stage utilizes an unsupervised paraphrase model to convert an unlabeled natural language utterance into the canonical utterance. The downstream naive semantic parser accepts the intermediate output and returns the target logical form. Furthermore, the entire training process is split into two phases: pre-training and cycle learning. Three tailored self-supervised tasks are introduced throughout training to activate the unsupervised paraphrase model. Experimental results on benchmarks OVERNIGHT and GEOGRANNO demonstrate that our framework is effective and compatible with supervised training.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their thoughtful comments. This work has been supported by the National Key Research and Development Program of China (Grant No.
2017YFB1002102) and Shanghai Jiao Tong University Scientific and Technological Innovation Funds (YG2020YQ01).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"zhang-etal-2016-transition-based","url":"https:\/\/aclanthology.org\/P16-1040","title":"Transition-Based Neural Word Segmentation","abstract":"Character-based and word-based methods are two main types of statistical models for Chinese word segmentation, the former exploiting sequence labeling models over characters and the latter typically exploiting a transition-based model, with the advantages that word-level features can be easily utilized. Neural models have been exploited for character-based Chinese word segmentation, giving high accuracies by making use of external character embeddings, yet requiring less feature engineering. In this paper, we study a neural model for word-based Chinese word segmentation, by replacing the manually designed discrete features with neural features in a word-based segmentation framework. Experimental results demonstrate that word features lead to comparable performances to the best systems in the literature, and a further combination of discrete and neural features gives top accuracies.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers, Yijia Liu and Hai Zhao for their constructive comments, which helped improve the final paper. This work is supported by National Natural Science Foundation of China (NSFC) under grant 61170148, Natural Science Foundation of Heilongjiang Province (China) under grant No.F2016036, the Singapore Ministry of Education (MOE) AcRF Tier 2 grant T2MOE201301 and SRG ISTD 2012 038 from Singapore University of Technology and Design. Yue Zhang is the corresponding author.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"nagao-1995-future","url":"https:\/\/aclanthology.org\/1995.mtsummit-1.33","title":"What have we to do for the future of MT systems?","abstract":"translations because delicate translations are difficult by grammatical rules. 2. Choice of words and phrases in utterances is strongly influenced by such factors as the relation between the speaker and hearer, context, situation, cultural background and so on. All these factors must be listed and their functions clarified. 3. We have to go from syntax-directed MT to semantic\/context-dependent MT. Anaphora, ellipsis, topic\/focus, and old\/new information problems should be studied. 4. Completely new MT algorithms must be developed by utilizing the factors mentioned above. 5. MT software must be available on PCs and word processors.
Those people who use MT systems must be able to exchange their experiences and know-how through computer network conversations.\nAn open forum on MT must be established on a computer network where everybody can make contributions of any kind.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1995,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"chalapathy-etal-2016-investigation","url":"https:\/\/aclanthology.org\/W16-6101","title":"An Investigation of Recurrent Neural Architectures for Drug Name Recognition","abstract":"Drug name recognition (DNR) is an essential step in the Pharmacovigilance (PV) pipeline. DNR aims to find drug name mentions in unstructured biomedical texts and classify them into predefined categories. State-of-the-art DNR approaches heavily rely on hand-crafted features and domain-specific resources which are difficult to collect and tune. For this reason, this paper investigates the effectiveness of contemporary recurrent neural architectures (the Elman and Jordan networks and the bidirectional LSTM with CRF decoding) at performing DNR straight from the text. The experimental results achieved on the authoritative SemEval-2013 Task 9.1 benchmarks show that the bidirectional LSTM-CRF ranks closely to highly-dedicated, hand-crafted systems.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"garcia-diaz-etal-2022-umuteam","url":"https:\/\/aclanthology.org\/2022.dravidianlangtech-1.6","title":"UMUTeam@TamilNLP-ACL2022: Emotional Analysis in Tamil","abstract":"These working notes summarise the participation of the UMUTeam in the TamilNLP (ACL 2022) shared task concerning emotion analysis in Tamil. We participated in the two multi-classification challenges proposed with a neural network that combines linguistic features with different feature sets based on contextual and non-contextual sentence embeddings. Our proposal achieved the 1st result for the second subtask, with an f1-score of 15.1% discerning among 30 different emotions. However, our results for the first subtask were not recorded in the official leaderboard. Accordingly, we report our results for this subtask on the validation split, reaching a macro f1-score of 32.360%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is part of the research project LaTe4PSP (PID2019-107652RB-I00) funded by MCIN\/AEI\/10.13039\/501100011033. This work is also part of the research project PDC2021-121112-I00 funded by MCIN\/AEI\/10.13039\/501100011033 and by the European Union NextGenerationEU\/PRTR.
In addition, Jos\u00e9 Antonio Garc\u00eda-D\u00edaz is supported by Banco Santander and the University of Murcia through the Doctorado Industrial programme.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"chernyshevich-2014-ihs","url":"https:\/\/aclanthology.org\/S14-2051","title":"IHS R\\&D Belarus: Cross-domain extraction of product features using CRF","abstract":"This paper describes the aspect extraction system submitted by the IHS R&D Belarus team at the SemEval-2014 shared task related to Aspect-Based Sentiment Analysis. Our system is based on the IHS Goldfire linguistic processor and uses a rich set of lexical, syntactic and statistical features in a CRF model. We participated in two domain-specific tasks (restaurants and laptops) with the same system trained on a mixed corpus of reviews. Among submissions of constrained systems from 28 teams, our submission was ranked first in the laptop domain and fourth in the restaurant domain for subtask A, devoted to aspect extraction.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"liu-etal-2018-narrative","url":"https:\/\/aclanthology.org\/P18-2045","title":"Narrative Modeling with Memory Chains and Semantic Supervision","abstract":"Story comprehension requires a deep semantic understanding of the narrative, making it a challenging task. Inspired by previous studies on the ROC Story Cloze Test, we propose a novel method, tracking various semantic aspects with external neural memory chains while encouraging each to focus on a particular semantic aspect. Evaluated on the task of story ending prediction, our model demonstrates superior performance to a collection of competitive baselines, setting a new state of the art. Code available at http:\/\/github.com\/liufly\/narrative-modeling. Context: Sam loved his old belt. He matched it with everything. Unfortunately he gained too much weight. It became too small. Coherent Ending: Sam went on a diet. Incoherent Ending: Sam was happy.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their valuable feedback, and gratefully acknowledge the support of Australian Government Research Training Program Scholarship. This work was also supported in part by the Australian Research Council.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"moeller-etal-2021-pos","url":"https:\/\/aclanthology.org\/2021.acl-long.78","title":"To POS Tag or Not to POS Tag: The Impact of POS Tags on Morphological Learning in Low-Resource Settings","abstract":"Part-of-Speech (POS) tags routinely appear as features in morphological tasks. POS taggers are often one of the first NLP tools developed for low-resource languages. However, as NLP expands to new languages, it cannot assume that POS tags will be available to train a POS tagger. This paper empirically examines the impact of POS tags on two morphological tasks with the Transformer architecture.
Each task is run twice, once with and once without POS tags, on otherwise identical data from ten well-described languages and five underdocumented languages. We find that the presence or absence of POS tags does not have a significant bearing on the performance of either task. In joint segmentation and glossing, the largest average difference is a 0.09 improvement in F1-scores by removing POS tags. In reinflection, the greatest average difference is 1.2% in accuracy for published data and 5% for unpublished data. These results are indicators that NLP and documentary linguistics may benefit each other even when a POS tag set does not yet exist for a language.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"navigli-2006-meaningful","url":"https:\/\/aclanthology.org\/P06-1014","title":"Meaningful Clustering of Senses Helps Boost Word Sense Disambiguation Performance","abstract":"Fine-grained sense distinctions are one of the major obstacles to successful Word Sense Disambiguation. In this paper, we present a method for reducing the granularity of the WordNet sense inventory based on the mapping to a manually crafted dictionary encoding sense hierarchies, namely the Oxford Dictionary of English. We assess the quality of the mapping and the induced clustering, and evaluate the performance of coarse WSD systems in the Senseval-3 English all-words task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is partially funded by the Interop NoE (508011), 6th European Union FP. We wish to thank Paola Velardi, Mirella Lapata and Samuel Brody for their useful comments.","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"alt-etal-2019-fine","url":"https:\/\/aclanthology.org\/P19-1134","title":"Fine-tuning Pre-Trained Transformer Language Models to Distantly Supervised Relation Extraction","abstract":"Distantly supervised relation extraction is widely used to extract relational facts from text, but suffers from noisy labels. Current relation extraction methods try to alleviate the noise by multi-instance learning and by providing supporting linguistic and contextual information to more efficiently guide the relation classification. While achieving state-of-the-art results, we observed these models to be biased towards recognizing a limited set of relations with high precision, while ignoring those in the long tail. To address this gap, we utilize a pre-trained language model, the OpenAI Generative Pre-trained Transformer (GPT) (Radford et al., 2018). The GPT and similar models have been shown to capture semantic and syntactic features, and also a notable amount of \"common-sense\" knowledge, which we hypothesize are important features for recognizing a more diverse set of relations. By extending the GPT to the distantly supervised setting, and fine-tuning it on the NYT10 dataset, we show that it predicts a larger set of distinct relation types with high confidence.
Manual and automated evaluation of our model shows that it achieves a state-of-the-art AUC score of 0.422 on the NYT10 dataset, and performs especially well at higher recall levels.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their comments. This research was partially supported by the German Federal Ministry of Education and Research through the projects DEEPLEE (01IW17001) and BBDC2 (01IS18025E), and by the German Federal Ministry of Transport and Digital Infrastructure through the project DAYSTREAM (19F2031A).","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"fung-etal-2003-combining","url":"https:\/\/aclanthology.org\/W03-1203","title":"Combining Optimal Clustering and Hidden Markov Models for Extractive Summarization","abstract":"We propose Hidden Markov models with unsupervised training for extractive summarization. Extractive summarization selects salient sentences from documents to be included in a summary. Unsupervised clustering combined with heuristics is a popular approach because no annotated data is required. However, conventional clustering methods such as K-means do not take text cohesion into consideration. Probabilistic methods are more rigorous and robust, but they usually require supervised training with annotated data. Our method incorporates unsupervised training with clustering into a probabilistic framework. Clustering is done by modified K-means (MKM), a method that yields more optimal clusters than the conventional K-means method. Text cohesion is modeled by the transition probabilities of an HMM, and term distribution is modeled by the emission probabilities. The final decoding process tags sentences in a text with theme class labels. Parameter training is carried out by the segmental K-means (SKM) algorithm. The output of our system can be used to extract salient sentences for summaries, or used for topic detection. Content-based evaluation shows that our method outperforms an existing extractive summarizer by 22.8% in terms of relative similarity, and outperforms a baseline summarizer that selects the top N sentences as salient sentences by 46.3%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"kann-schutze-2018-neural","url":"https:\/\/aclanthology.org\/D18-1363","title":"Neural Transductive Learning and Beyond: Morphological Generation in the Minimal-Resource Setting","abstract":"Neural state-of-the-art sequence-to-sequence (seq2seq) models often do not perform well for small training sets. We address paradigm completion, the morphological task of, given a partial paradigm, generating all missing forms. We propose two new methods for the minimal-resource setting: (i) Paradigm transduction: Since we assume only a few paradigms available for training, neural seq2seq models are able to capture relationships between paradigm cells, but are tied to the idiosyncrasies of the training set. Paradigm transduction mitigates this problem by exploiting the input subset of inflected forms at test time.
(ii) Source selection with high precision (SHIP): Multi-source models which learn to automatically select one or multiple sources to predict a target inflection do not perform well in the minimal-resource setting. SHIP is an alternative to identify a reliable source if training data is limited. On a 52-language benchmark dataset, we outperform the previous state of the art by up to 9.71% absolute accuracy.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Samuel Bowman, Ryan Cotterell, Nikita Nangia, and Alex Warstadt for their feedback on this work.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"johnson-1997-personal","url":"https:\/\/aclanthology.org\/1997.tc-1.4","title":"Personal Translation Applications","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"kanakarajan-etal-2019-saama","url":"https:\/\/aclanthology.org\/W19-5055","title":"Saama Research at MEDIQA 2019: Pre-trained BioBERT with Attention Visualisation for Medical Natural Language Inference","abstract":"Natural Language inference is the task of identifying relation between two sentences as entailment, contradiction or neutrality. MedNLI is a biomedical flavour of NLI for clinical domain. This paper explores the use of Bidirectional Encoder Representation from Transformer (BERT) for solving MedNLI. The proposed model, BERT pre-trained on PMC, PubMed and fine-tuned on MIMIC-III v1.4, achieves state of the art results on MedNLI (83.45%) and an accuracy of 78.5% in MEDIQA challenge. The authors present an analysis of the attention patterns that emerged as a result of training BERT on MedNLI using a visualization tool, bertviz.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Bhuvana Kundumani for reviewing the manuscript and for providing her technical inputs. The authors would also like to extend their gratitude to Saama Technologies Inc. for providing the perfect research and innovation environment.","year":2019,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"lison-etal-2018-opensubtitles2018","url":"https:\/\/aclanthology.org\/L18-1275","title":"OpenSubtitles2018: Statistical Rescoring of Sentence Alignments in Large, Noisy Parallel Corpora","abstract":"Movie and TV subtitles are a highly valuable resource for the compilation of parallel corpora thanks to their availability in large numbers and across many languages. However, the quality of the resulting sentence alignments is often lower than for other parallel corpora. This paper presents a new major release of the OpenSubtitles collection of parallel corpora, which is extracted from a total of 3.7 million subtitles spread over 60 languages.
In addition to a substantial increase in the corpus size (about 30% compared to the previous version), this new release associates explicit quality scores with each sentence alignment. These scores are determined by a feedforward neural network based on simple language-independent features and estimated on a sample of aligned sentence pairs. Evaluation results show that the model is able to predict lexical translation probabilities with a root mean square error of 0.07 (coefficient of determination R\u00b2 = 0.47). Based on the scores produced by this regression model, the parallel corpora can be filtered to prune out low-quality alignments.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"paetzel-etal-2014-multimodal","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/697_Paper.pdf","title":"A Multimodal Corpus of Rapid Dialogue Games","abstract":"This paper presents a multimodal corpus of spoken human-human dialogues collected as participants played a series of Rapid Dialogue Games (RDGs). The corpus consists of a collection of about 11 hours of spoken audio, video, and Microsoft Kinect data taken from 384 game interactions (dialogues). The games used for collecting the corpus required participants to give verbal descriptions of linguistic expressions or visual images and were specifically designed to engage players in a fast-paced conversation under time pressure. As a result, the corpus contains many examples of participants attempting to communicate quickly in specific game situations, and it also includes a variety of spontaneous conversational phenomena such as hesitations, filled pauses, overlapping speech, and low-latency responses. The corpus has been created to facilitate research in incremental speech processing for spoken dialogue systems. Potentially, the corpus could be used in several areas of speech and language research, including speech recognition, natural language understanding, natural language generation, and dialogue management.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"htait-etal-2017-lsis","url":"https:\/\/aclanthology.org\/S17-2120","title":"LSIS at SemEval-2017 Task 4: Using Adapted Sentiment Similarity Seed Words For English and Arabic Tweet Polarity Classification","abstract":"We present, in this paper, our contribution in SemEval2017 task 4: \"Sentiment Analysis in Twitter\", subtask A: \"Message Polarity Classification\", for English and Arabic languages. Our system is based on a list of sentiment seed words adapted for tweets. The sentiment relations between seed words and other terms are captured by cosine similarity between the word embedding representations (word2vec). These seed words are extracted from datasets of annotated tweets available online.
Our tests, using these seed words, show significant improvement in results compared to the use of Turney and Littman's (2003) seed words, on polarity classification of tweet messages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the French program Investissements d'Avenir Equipex \"A digital library for open humanities\" of OpenEdition.org.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"ahmadi-2020-building","url":"https:\/\/aclanthology.org\/2020.vardial-1.7","title":"Building a Corpus for the Zaza--Gorani Language Family","abstract":"Thanks to the growth of local communities and various news websites along with the increasing accessibility of the Web, some of the endangered and less-resourced languages have a chance to revive in the information era. Therefore, the Web is considered a huge resource that can be used to extract language corpora which enable researchers to carry out various studies in linguistics and language technology. The Zaza-Gorani language family is a linguistic subgroup of the Northwestern Iranian languages for which there is no significant corpus available. Motivated to create one, in this paper we present our endeavour to collect a corpus in Zazaki and Gorani languages containing over 1.6M and 194k word tokens, respectively. This corpus is publicly available.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The author would like to thank the constructive comments of Dr. Ilyas Arslan and Mesut Keskin regarding Zazaki and the invaluable insights of Dr. Parvin Mahmoudveysi regarding Gorani. Likewise, the comments of the anonymous reviewers are very much appreciated.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"wang-etal-2007-kernel","url":"https:\/\/aclanthology.org\/N07-2047","title":"Kernel Regression Based Machine Translation","abstract":"We present a novel machine translation framework based on kernel regression techniques. In our model, the translation task is viewed as a string-to-string mapping, for which a regression type learning is employed with both the source and the target sentences embedded into their kernel induced feature spaces. We report the experiments on a French-English translation task showing encouraging results.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors acknowledge the support of the EU under the IST project No. FP6-033917.","year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"karamanolakis-etal-2021-self","url":"https:\/\/aclanthology.org\/2021.naacl-main.66","title":"Self-Training with Weak Supervision","abstract":"State-of-the-art deep neural networks require large-scale labeled training data that is often expensive to obtain or not available for many tasks. Weak supervision in the form of domain-specific rules has been shown to be useful in such settings to automatically generate weakly labeled training data. However, learning with weak rules is challenging due to their inherent heuristic and noisy nature.
An additional challenge is rule coverage and overlap, where prior work on weak supervision only considers instances that are covered by weak rules, thus leaving valuable unlabeled data behind. In this work, we develop a weak supervision framework (ASTRA) that leverages all the available data for a given task. To this end, we leverage task-specific unlabeled data through self-training with a model (student) that considers contextualized representations and predicts pseudo-labels for instances that may not be covered by weak rules. We further develop a rule attention network (teacher) that learns how to aggregate student pseudo-labels with weak rule labels, conditioned on their fidelity and the underlying context of an instance. Finally, we construct a semi-supervised learning objective for end-to-end training with unlabeled data, domain-specific rules, and a small amount of labeled data. Extensive experiments on six benchmark datasets for text classification demonstrate the effectiveness of our approach with significant improvements over state-of-the-art baselines.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their constructive feedback, and Wei Wang and Benjamin Van Durme for insightful discussions.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"hovy-2002-building","url":"https:\/\/aclanthology.org\/W02-1105","title":"Building Semantic\/Ontological Knowledge by Text Mining","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"hongxu-etal-2004-ebmt","url":"https:\/\/aclanthology.org\/2004.iwslt-evaluation.7","title":"An EBMT system based on word alignment","abstract":"This system is an experiment with an example-based approach. It is based on a corpus containing 220 thousand sentence pairs with word alignment. The system contains four parts: matching and search, fragment matching, fragment assembling, evaluation and post processing. We use word alignment information to find and combine fragments.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"beloucif-etal-2016-improving","url":"https:\/\/aclanthology.org\/W16-4507","title":"Improving word alignment for low resource languages using English monolingual SRL","abstract":"We introduce a new statistical machine translation approach specifically geared to learning translation from low resource languages, that exploits monolingual English semantic parsing to bias inversion transduction grammar (ITG) induction. We show that in contrast to conventional statistical machine translation (SMT) training methods, which rely heavily on phrase memorization, our approach focuses on learning bilingual correlations that help translating low resource languages, by using the output language semantic structure to further narrow down ITG constraints.
This approach is motivated by previous research which has shown that injecting a semantic frame based objective function while training SMT models improves the translation quality. We show that including a monolingual semantic objective function during the learning of the translation model leads towards a semantically driven alignment which is more efficient than simply tuning loglinear mixture weights against a semantic frame based evaluation metric in the final stage of statistical machine translation training. We test our approach with three different language pairs and demonstrate that our model biases the learning towards more semantically correct alignments. Both GIZA++ and ITG based techniques fail to capture meaningful bilingual constituents, which are required when trying to learn translation models for low resource languages. In contrast, our proposed model not only improves translation by injecting a monolingual objective function to learn bilingual correlations during early training of the translation model, but also helps to learn more meaningful correlations with a relatively small data set, leading to a better alignment compared to either conventional ITG or traditional GIZA++ based approaches.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"byrd-chodorow-1985-using","url":"https:\/\/aclanthology.org\/P85-1034","title":"Using an On-Line Dictionary to Find Rhyming Words and Pronunciations for Unknown Words","abstract":"Humans know a great deal about relationships among words. This paper discusses relationships among word pronunciations. We describe a computer system which models human judgement of rhyme by assigning specific roles to the location of primary stress, the similarity of phonetic segments, and other factors. By using the model as an experimental tool, we expect to improve our understanding of rhyme. A related computer model will attempt to generate pronunciations for unknown words by analogy with those for known words. The analogical processes involve techniques for segmenting and matching word spellings, and for mapping spelling to sound in known words. As in the case of rhyme, the computer model will be an important tool for improving our understanding of these processes. Both models serve as the basis for functions in the WordSmith automated dictionary system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to Barbara Kipfer for her preliminary work on the syllabification of unknown words, and to Yael Ravin and Mary Neff for comments on earlier versions of this report.","year":1985,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"macwhinney-fromm-2014-two","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/419_Paper.pdf","title":"Two Approaches to Metaphor Detection","abstract":"Methods for automatic detection and interpretation of metaphors have focused on analysis and utilization of the ways in which metaphors violate selectional preferences (Martin, 2006). Detection and interpretation processes that rely on this method can achieve wide coverage and may be able to detect some novel metaphors.
However, they are prone to high false alarm rates, often arising from imprecision in parsing and supporting ontological and lexical resources. An alternative approach to metaphor detection emphasizes the fact that many metaphors become conventionalized collocations, while still preserving their active metaphorical status. Given a large enough corpus for a given language, it is possible to use tools like SketchEngine (Kilgariff, Rychly, Smrz, & Tugwell, 2004) to locate these high frequency metaphors for a given target domain. In this paper, we examine the application of these two approaches and discuss their relative strengths and weaknesses for metaphors in the target domain of economic inequality in English, Spanish, Farsi, and Russian.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"chen-guo-2015-representation","url":"https:\/\/aclanthology.org\/P15-2025","title":"Representation Based Translation Evaluation Metrics","abstract":"Precisely evaluating the quality of a translation against human references is a challenging task due to the flexible word ordering of a sentence and the existence of a large number of synonyms for words. This paper proposes to evaluate translations with distributed representations of words and sentences. We study several metrics based on word and sentence representations and their combination. Experiments on the WMT metric task show that the metric based on the combined representations achieves the best performance, outperforming the state-of-the-art translation metrics by a large margin. In particular, training the distributed representations only needs a reasonable amount of monolingual, unlabeled data that is not necessarily drawn from the test domain.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Colin Cherry and Roland Kuhn for useful discussions.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"goutte-etal-2012-impact","url":"https:\/\/aclanthology.org\/2012.amta-papers.7","title":"The Impact of Sentence Alignment Errors on Phrase-Based Machine Translation Performance","abstract":"When parallel or comparable corpora are harvested from the web, there is typically a tradeoff between the size and quality of the data. In order to improve quality, corpus collection efforts often attempt to fix or remove misaligned sentence pairs. But, at the same time, Statistical Machine Translation (SMT) systems are widely assumed to be relatively robust to sentence alignment errors. However, there is little empirical evidence to support and characterize this robustness. This contribution investigates the impact of sentence alignment errors on a typical phrase-based SMT system. We confirm that SMT systems are highly tolerant to noise, and that performance only degrades seriously at very high noise levels. Our findings suggest that when collecting larger, noisy parallel data for training phrase-based SMT, cleaning up by trying to detect and remove incorrect alignments can actually degrade performance.
Although fixing errors, when applicable, is a preferable strategy to removal, its benefits only become apparent for fairly high misalignment rates. We provide several explanations to support these findings.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"jing-mckeown-2000-cut","url":"https:\/\/aclanthology.org\/A00-2024","title":"Cut and Paste Based Text Summarization","abstract":"We present a cut and paste based text summarizer, which uses operations derived from an analysis of human written abstracts. The summarizer edits extracted sentences, using reduction to remove inessential phrases and combination to merge resulting phrases together as coherent sentences. Our work includes a statistically based sentence decomposition program that identifies where the phrases of a summary originate in the original document, producing an aligned corpus of summaries and articles which we used to develop the summarizer.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank IBM for licensing us the ESG parser and the MITRE corporation for licensing us the coreference resolution system. This material is based upon work supported by the National Science Foundation under Grant No. IRI 96-19124 and IRI 96-18797. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.","year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"jo-choi-2018-extrofitting","url":"https:\/\/aclanthology.org\/W18-3003","title":"Extrofitting: Enriching Word Representation and its Vector Space with Semantic Lexicons","abstract":"We propose a post-processing method for enriching not only word representation but also its vector space using semantic lexicons, which we call extrofitting. The method consists of 3 steps as follows: (i) Expanding 1 or more dimension(s) on all the word vectors, filling with their representative value. (ii) Transferring semantic knowledge by averaging each representative value of synonyms and filling them in the expanded dimension(s). These two steps make representations of the synonyms close together. (iii) Projecting the vector space using Linear Discriminant Analysis, which eliminates the expanded dimension(s) with semantic knowledge. When experimenting with GloVe, we find that our method outperforms Faruqui's retrofitting on some of the word similarity tasks. We also report further analysis on our method with respect to word vector dimensions, vocabulary size as well as other well-known pretrained word vectors (e.g., Word2Vec, Fasttext).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Thanks to Jaeyoung Kim for discussing this idea.
Also, we greatly appreciate the reviewers for their critical comments.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"velldal-etal-2017-joint","url":"https:\/\/aclanthology.org\/W17-0201","title":"Joint UD Parsing of Norwegian Bokm\\aal and Nynorsk","abstract":"This paper investigates interactions in parser performance for the two official standards for written Norwegian: Bokm\u00e5l and Nynorsk. We demonstrate that while applying models across standards yields poor performance, combining the training data for both standards yields better results than previously achieved for each of them in isolation. This has immediate practical value for processing Norwegian, as it means that a single parsing pipeline is sufficient to cover both varieties, with no loss in accuracy. Based on the Norwegian Universal Dependencies treebank we present results for multiple taggers and parsers, experimenting with different ways of varying the training data given to the learners, including the use of machine translation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"guo-etal-2020-cyclegt","url":"https:\/\/aclanthology.org\/2020.webnlg-1.8","title":"CycleGT: Unsupervised Graph-to-Text and Text-to-Graph Generation via Cycle Training","abstract":"Two important tasks at the intersection of knowledge graphs and natural language processing are graph-to-text (G2T) and text-to-graph (T2G) conversion. Due to the difficulty and high cost of data collection, the supervised data available in the two fields are usually on the magnitude of tens of thousands, for example, 18K in the WebNLG 2017 dataset after preprocessing, which is far fewer than the millions of data for other tasks such as machine translation. Consequently, deep learning models for G2T and T2G suffer largely from scarce training data. We present CycleGT, an unsupervised training method that can bootstrap from fully non-parallel graph and text data, and iteratively back translate between the two forms. Experiments on WebNLG datasets show that our unsupervised model trained on the same amount of data achieves performance on par with several fully supervised models. Further experiments on the non-parallel Gen-Wiki dataset verify that our method performs the best among unsupervised baselines. This validates our framework as an effective approach to overcome the data scarcity problem in the fields of G2T and T2G.
","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank colleagues at the Amazon Shanghai AI lab, including Xiangkun Hu, Hang Yan, and many others for insightful discussions that constructively helped this work.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"schiehlen-2004-annotation","url":"https:\/\/aclanthology.org\/C04-1056","title":"Annotation Strategies for Probabilistic Parsing in German","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"belz-etal-2022-quantified","url":"https:\/\/aclanthology.org\/2022.acl-long.2","title":"Quantified Reproducibility Assessment of NLP Results","abstract":"This paper describes and tests a method for carrying out quantified reproducibility assessment (QRA) that is based on concepts and definitions from metrology. QRA produces a single score estimating the degree of reproducibility of a given system and evaluation measure, on the basis of the scores from, and differences between, different reproductions. We test QRA on 18 system and evaluation measure combinations (involving diverse NLP tasks and types of evaluation), for each of which we have the original results and one to seven reproduction results. The proposed QRA method produces degree-of-reproducibility scores that are comparable across multiple reproductions not only of the same, but of different original studies. We find that the proposed method facilitates insights into causes of variation between reproductions, and allows conclusions to be drawn about what changes to system and\/or evaluation design might lead to improved reproducibility.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to the anonymous reviewers and area chairs for their exceptionally detailed and helpful feedback. Popovi\u0107's work on this study was funded by the ADAPT SFI Centre for Digital Media Technology which is funded by Science Foundation Ireland through the SFI Research Centres Programme, and co-funded under the European Regional Development Fund (ERDF) through Grant 13\/RC\/2106.
Mille's work was supported by the European Commission under the H2020 program contract numbers 786731, 825079, 870930 and 952133.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"zelenko-etal-2002-kernel","url":"https:\/\/aclanthology.org\/W02-1010","title":"Kernel Methods for Relation Extraction","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"palomar-etal-2001-algorithm","url":"https:\/\/aclanthology.org\/J01-4005","title":"An Algorithm for Anaphora Resolution in Spanish Texts","abstract":"This paper presents an algorithm for identifying noun phrase antecedents of third person personal pronouns, demonstrative pronouns, reflexive pronouns, and omitted pronouns (zero pronouns) in unrestricted Spanish texts. We define a list of constraints and preferences for different types of pronominal expressions, and we document in detail the importance of each kind of knowledge (lexical, morphological, syntactic, and statistical) in anaphora resolution for Spanish. The paper also provides a definition for syntactic conditions on Spanish NP-pronoun noncoreference using partial parsing. The algorithm has been evaluated on a corpus of 1,677 pronouns and achieved a success rate of 76.8%. We have also implemented four competitive algorithms and tested their performance in a blind evaluation on the same test corpus. This new approach could easily be extended to other languages such as English, Portuguese, Italian, or Japanese.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors wish to thank Ferran Pla, Natividad Prieto, and Antonio Molina for contributing their tagger (Pla 2000); and Richard Evans, Mikel Forcada, and Rafael Carrasco for their helpful revisions of the ideas presented in this paper. We are also grateful to several anonymous reviewers of Computational Linguistics for helpful comments on earlier drafts of this paper. Our work has been supported by the Spanish government (CICYT) with Grant TIC97-0671-C02-01\/02.","year":2001,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"wilks-1993-developments","url":"https:\/\/aclanthology.org\/1993.tc-1.1","title":"Developments in machine translation research in the US","abstract":"The paper argues that the IBM statistical approach to machine translation has done rather better after a few years than many sceptics believed it could. However, it is neither as novel as its proponents suggest nor is it making claims as clear and simple as they would have us believe. The performance of the purely statistical system (and we discuss what that phrase could mean) has not equalled the performance of SYSTRAN. More importantly, the system is now being shifted to a hybrid that incorporates much of the linguistic information that it was initially claimed by IBM would not be needed for MT. Hence, one might infer that its own proponents do not believe \"pure\" statistics sufficient for MT of a usable quality.
In addition to real limits on the statistical method, there are also strong economic limits imposed by their methodology of data gathering. However, the paper concludes that the IBM group have done the field a great service in pushing these methods far further than before, and by reminding everyone of the virtues of empiricism in the field and the need for large scale gathering of data.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":"James Pustejovsky, Bob Ingria, Bran Boguraev, Sergei Nirenburg, Ted Dunning and others in the CRL natural language processing group.","year":1993,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"ohashi-etal-2020-tiny","url":"https:\/\/aclanthology.org\/2020.coling-main.103","title":"Tiny Word Embeddings Using Globally Informed Reconstruction","abstract":"We reduce the model size of pre-trained word embeddings by a factor of 200 while preserving their quality. Previous studies in this direction created a smaller word embedding model by reconstructing pre-trained word representations from those of subwords, which allows storing only a smaller number of subword embeddings in the memory. However, previous studies that train the reconstruction models using only target words cannot reduce the model size extremely while preserving its quality. Inspired by the observation of words with similar meanings having similar embeddings, our reconstruction training learns the global relationships among words, which can be employed in various models for word embedding reconstruction. Experimental results on word similarity benchmarks show that the proposed method improves the performance of all the subword-based reconstruction models.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"li-etal-2015-improving-event","url":"https:\/\/aclanthology.org\/W15-4502","title":"Improving Event Detection with Abstract Meaning Representation","abstract":"Event Detection (ED) aims to identify instances of specified types of events in text, which is a crucial component in the overall task of event extraction. The commonly used features consist of lexical, syntactic, and entity information, but the knowledge encoded in the Abstract Meaning Representation (AMR) has not been utilized in this task. AMR is a semantic formalism in which the meaning of a sentence is encoded as a rooted, directed, acyclic graph. In this paper, we demonstrate the effectiveness of AMR to capture and represent the deeper semantic contexts of the trigger words in this task.
Experimental results further show that adding AMR features on top of the traditional features can achieve 67.8% (with 2.1% absolute improvement) F-measure (F1), which is comparable to the state-of-the-art approaches.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"jha-etal-2018-bag","url":"https:\/\/aclanthology.org\/N18-3019","title":"Bag of Experts Architectures for Model Reuse in Conversational Language Understanding","abstract":"Slot tagging, the task of detecting entities in input user utterances, is a key component of natural language understanding systems for personal digital assistants. Since each new domain requires a different set of slots, the annotation costs for labeling data for training slot tagging models increases rapidly as the number of domains grow. To tackle this, we describe Bag of Experts (BoE) architectures for model reuse for both LSTM and CRF based models. Extensive experimentation over a dataset of 10 domains drawn from data relevant to our commercial personal digital assistant shows that our BoE models outperform the baseline models with a statistically significant average margin of 5.06% in absolute F1-score when training with 2000 instances per domain, and achieve an even higher improvement of 12.16% when only 25% of the training data is used.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Ahmed El Kholy for his comments and feedback on an earlier version of this paper. Also, thanks to Kyle Williams and Zhaleh Feizollahi for their help with code and data collection.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"shaprin-etal-2019-team","url":"https:\/\/aclanthology.org\/S19-2176","title":"Team Jack Ryder at SemEval-2019 Task 4: Using BERT Representations for Detecting Hyperpartisan News","abstract":"We describe the system submitted by the Jack Ryder team to SemEval-2019 Task 4 on Hyperpartisan News Detection. The task asked participants to predict whether a given article is hyperpartisan, i.e., extreme-left or extreme-right. We propose an approach based on BERT with fine-tuning, which was ranked 7th out of 28 teams on the distantly supervised dataset, where all articles from a hyperpartisan\/non-hyperpartisan news outlet are considered to be hyperpartisan\/non-hyperpartisan. On a manually annotated test dataset, where human annotators double-checked the labels, we were ranked 29th out of 42 teams.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} +{"ID":"mugelli-etal-2017-designing","url":"https:\/\/aclanthology.org\/W17-7011","title":"Designing an Ontology for the Study of Ritual in Ancient Greek Tragedy","abstract":"We examine the use of an ontology within the context of a system for the annotation and querying of ancient Greek tragic texts.
The ontology in question results from the reorganisation of a tagset that was originally used in the annotation of a corpus of tragic texts for salient information regarding ritual and religion and its representation in Greek tragedy. In the article we discuss the original tagset as well as providing examples of the annotation. We also describe the structure of the ontology itself as well as its use within a system for querying the annotated corpus.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"malmasi-dras-2015-language","url":"https:\/\/aclanthology.org\/W15-5407","title":"Language Identification using Classifier Ensembles","abstract":"In this paper we describe the language identification system we developed for the Discriminating Similar Languages (DSL) 2015 shared task. We constructed a classifier ensemble composed of several Support Vector Machine (SVM) base classifiers, each trained on a single feature type. Our feature types include character 1-6 grams and word unigrams and bigrams. Using this system we were able to outperform the other entries in the closed training track of the DSL 2015 shared task, achieving the best accuracy of 95.54%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"rafferty-etal-2011-exploring","url":"https:\/\/aclanthology.org\/W11-0606","title":"Exploring the Relationship Between Learnability and Linguistic Universals","abstract":"Greater learnability has been offered as an explanation as to why certain properties appear in human languages more frequently than others. Languages with greater learnability are more likely to be accurately transmitted from one generation of learners to the next. We explore whether such a learnability bias is sufficient to result in a property becoming prevalent across languages by formalizing language transmission using a linear model. We then examine the outcome of repeated transmission of languages using a mathematical analysis, a computer simulation, and an experiment with human participants, and show several ways in which greater learnability may not result in a property becoming prevalent. Both the ways in which transmission failures occur and the relative number of languages with and without a property can affect whether the relationship between learnability and prevalence holds. Our results show that simply finding a learnability bias is not sufficient to explain why a particular property is a linguistic universal, or even frequent among human languages.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"
This work was supported by an NSF Graduate Research Fellowship to ANR, grant number BCS-0704034 from the NSF to TLG, and grant number T32 NS047987 from the NIH to ME.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"susanto-etal-2016-learning","url":"https:\/\/aclanthology.org\/D16-1225","title":"Learning to Capitalize with Character-Level Recurrent Neural Networks: An Empirical Study","abstract":"In this paper, we investigate case restoration for text without case information. Previous such work operates at the word level. We propose an approach using character-level recurrent neural networks (RNN), which performs competitively compared to language modeling and conditional random fields (CRF) approaches. We further provide quantitative and qualitative analysis on how RNN helps improve truecasing.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would also like to thank the anonymous reviewers for their helpful comments. This work is supported by MOE Tier 1 grant SUTDT12015008.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"bayerl-paul-2011-determines","url":"https:\/\/aclanthology.org\/J11-4004","title":"What Determines Inter-Coder Agreement in Manual Annotations? A Meta-Analytic Investigation","abstract":"Recent discussions of annotator agreement have mostly centered around its calculation and interpretation, and the correct choice of indices. Although these discussions are important, they only consider the \"back-end\" of the story, namely, what to do once the data are collected. Just as important in our opinion is to know how agreement is reached in the first place and what factors influence coder agreement as part of the annotation process or setting, as this knowledge can provide concrete guidelines for the planning and setup of annotation projects. To investigate whether there are factors that consistently impact annotator agreement we conducted a meta-analytic investigation of annotation studies reporting agreement percentages. Our meta-analysis synthesized factors reported in 96 annotation studies from three domains (word-sense disambiguation, prosodic transcriptions, and phonetic transcriptions) and was based on a total of 346 agreement indices. Our analysis identified seven factors that influence reported agreement values: annotation domain, number of categories in a coding scheme, number of annotators in a project, whether annotators received training, the intensity of annotator training, the annotation purpose, and the method used for the calculation of percentage agreements. Based on our results we develop practical recommendations for the assessment, interpretation, calculation, and reporting of coder agreement. 
We also briefly discuss theoretical implications for the concept of annotation quality.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"li-etal-2019-oppo","url":"https:\/\/aclanthology.org\/2019.iwslt-1.2","title":"OPPO NMT System for IWSLT 2019","abstract":"This paper illustrates OPPO's submission for the IWSLT 2019 text translation task. Our system is based on the Transformer architecture. Besides, we also study the effect of model ensembling. On the devsets of IWSLT 2019, the BLEU of our system reaches 19.94.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"sinha-2007-using","url":"https:\/\/aclanthology.org\/2007.mtsummit-papers.57","title":"Using rich morphology in resolving certain Hindi-English machine translation divergence","abstract":"Identification and resolution of translation divergence (TD) is very crucial for any automated machine translation (MT) system. Although this problem has received the attention of a number of MT developers, devising general strategies is hard to achieve. Solution to the language specific pairs appears to be comparatively tractable. In this paper, we present a technique that exploits the rich morphology of Hindi to identify the nature of certain divergence patterns and then invoke methods to handle the related translation divergence in Hindi to English machine translation. We have considered TDs encountered in Hindi copula sentences and those arising out of certain gaps in verb morphology.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"gorman-curran-2005-approximate","url":"https:\/\/aclanthology.org\/W05-1011","title":"Approximate Searching for Distributional Similarity","abstract":"Distributional similarity requires large volumes of data to accurately represent infrequent words. However, the nearest-neighbour approach to finding synonyms suffers from poor scalability. The Spatial Approximation Sample Hierarchy (SASH), proposed by Houle (2003b), is a data structure for approximate nearest-neighbour queries that balances the efficiency\/approximation trade-off. We have integrated this into an existing distributional similarity system, tripling efficiency with a minor accuracy penalty.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their helpful feedback and corrections.
This work has been supported by the Australian Research Council under Discovery Project DP0453131.","year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"grishman-2011-invited","url":"https:\/\/aclanthology.org\/W11-4001","title":"INVITED TALK 1: The Knowledge Base Population Task: Challenges for Information Extraction","abstract":"The Knowledge Base Population (KBP) task, being run for the past 3 years by the U.S. National Institute of Standards and Technology, is the latest in a series of multi-site evaluations of information extraction, following in the tradition of MUC and ACE. We examine the structure of KBP, emphasizing the basic shift from sentence-by-sentence and document-by-document evaluation to corpus-based extraction and the challenges it raises for cross-sentence and cross-document processing. We consider the problems raised by the limited amount and incompleteness of the training data, and how this has been (partly) addressed through such methods as semi-supervised learning and distant supervision. We describe some of the optional tasks which have been included: rapid task adaptation (last year), temporal analysis (this year), cross-lingual extraction (planned for next year), and others which have been suggested.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"sun-etal-2019-hierarchical","url":"https:\/\/aclanthology.org\/D19-1045","title":"Hierarchical Attention Prototypical Networks for Few-Shot Text Classification","abstract":"Most of the current effective methods for text classification task are based on large-scale labeled data and a great number of parameters, but when the supervised training data are few and difficult to collect, these models are not available. In this paper, we propose hierarchical attention prototypical networks (HAPN) for few-shot text classification. We design the feature level, word level, and instance level multi cross attention for our model to enhance the expressive ability of semantic space. We verify the effectiveness of our model on two standard benchmark few-shot text classification datasets, FewRel and CSID, and achieve the state-of-the-art performance. The visualization of hierarchical attention layers illustrates that our model can capture more important features, words, and instances separately. In addition, our attention mechanism increases support set augmentability and accelerates convergence speed in the training stage.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Sawyer Zeng and Yue Liu for providing valuable hardware support and useful advice, and thank Xuexiang Xu and Yang Bai for helping us test the online FewRel dataset. This work is also supported by the National Key Research and Development Program of China (No. 2018YFB1402902 and No. 2018YFB1403002) and the Natural Science Foundation of Jiangsu Province (No.
BK20151132).","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"gui-etal-2016-event","url":"https:\/\/aclanthology.org\/D16-1170","title":"Event-Driven Emotion Cause Extraction with Corpus Construction","abstract":"In this paper, we present our work in emotion cause extraction. Since there is no open dataset available, the lack of annotated resources has limited the research in this area. Thus, we first present a dataset we built using SINA city news. The annotation is based on the scheme of the W3C Emotion Markup Language. Second, we propose a 7-tuple definition to describe emotion cause events. Based on this general definition, we propose a new event-driven emotion cause extraction method using multi-kernel SVMs where a syntactical tree based approach is used to represent events in text. A convolution kernel based multi-kernel SVM is used to extract emotion causes. Because traditional convolution kernels do not use lexical information at the terminal nodes of syntactic trees, we modify the kernel function with a synonym based improvement. Even with very limited training data, we can still extract sufficient features for the task. Evaluations show that our approach achieves 11.6% higher F-measure compared to referenced methods. The contributions of our work include resource construction, concept definition and algorithm development.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"karimi-etal-2018-extracting","url":"https:\/\/aclanthology.org\/L18-1549","title":"Extracting an English-Persian Parallel Corpus from Comparable Corpora","abstract":"Parallel data are an important part of a reliable Statistical Machine Translation (SMT) system. The more of these data are available, the better the quality of the SMT system. However, for some language pairs such as Persian-English, parallel sources of this kind are scarce. In this paper, a bidirectional method is proposed to extract parallel sentences from English and Persian document-aligned Wikipedia. Two machine translation systems are employed to translate from Persian to English and the reverse, after which an IR system is used to measure the similarity of the translated sentences. Adding the extracted sentences to the training data of the existing SMT systems is shown to improve the quality of the translation. Furthermore, the proposed method slightly outperforms the one-directional approach. The extracted corpus consists of about 200,000 sentences which have been sorted by their degree of similarity calculated by the IR system and is freely available for public access on the Web.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank our colleagues, Zahra Sepehri and Ailar Qaraie, at Iranzamin Language School for providing us with 500 sentences used in our test set.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"milward-1994-non","url":"https:\/\/aclanthology.org\/C94-2151","title":"Non-Constituent Coordination: Theory and Practice","abstract":"
Despite the large amount of theoretical work done on non-constituent coordination during the last two decades, many computational systems still treat coordination using adapted parsing strategies, in a similar fashion to the SYSCONJ system developed for ATNs. This paper reviews the theoretical literature, and shows why many of the theoretical accounts actually have worse coverage than accounts based on processing. Finally, it shows how processing accounts can be described formally and declaratively in terms of Dynamic Grammars.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"rehbein-van-genabith-2007-treebank","url":"https:\/\/aclanthology.org\/D07-1066","title":"Treebank Annotation Schemes and Parser Evaluation for German","abstract":"Recent studies focussed on the question whether less-configurational languages like German are harder to parse than English, or whether the lower parsing scores are an artefact of treebank encoding schemes and data structures, as claimed by K\u00fcbler et al. (2006). This claim is based on the assumption that PARSEVAL metrics fully reflect parse quality across treebank encoding schemes. In this paper we present new experiments to test this claim. We use the PARSEVAL metric, the Leaf-Ancestor metric as well as a dependency-based evaluation, and present novel approaches measuring the effect of controlled error insertion on treebank trees and parser output. We also provide extensive past-parsing cross-treebank conversion. The results of the experiments show that, contrary to K\u00fcbler et al. (2006), the question whether or not German is harder to parse than English remains undecided.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for many helpful comments. This research has been supported by a Science Foundation Ireland grant 04|IN|I527.","year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"huerta-2008-relative","url":"https:\/\/aclanthology.org\/D08-1101","title":"Relative Rank Statistics for Dialog Analysis","abstract":"We introduce the relative rank differential statistic which is a non-parametric approach to document and dialog analysis based on word frequency rank-statistics. We also present a simple method to establish semantic saliency in dialog, documents, and dialog segments using these word frequency rank statistics. Applications of our technique include the dynamic tracking of topic and semantic evolution in a dialog, topic detection, automatic generation of document tags, and new story or event detection in conversational speech and text.
Our approach benefits from the robustness, simplicity and efficiency of non-parametric and rank based approaches and consistently outperformed term-frequency and TF-IDF cosine distance approaches in several experiments conducted.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"p-r-etal-2016-hitachi","url":"https:\/\/aclanthology.org\/S16-1191","title":"Hitachi at SemEval-2016 Task 12: A Hybrid Approach for Temporal Information Extraction from Clinical Notes","abstract":"This paper describes the system developed for the task of temporal information extraction from clinical narratives in the context of 2016 Clinical TempEval challenge. Clinical TempEval 2016 addressed the problem of temporal reasoning in clinical domain by providing annotated clinical notes and pathology reports similar to Clinical TempEval challenge 2015. The Clinical TempEval challenge consisted of six subtasks. Hitachi team participated in two time expression based subtasks: time expression span detection (TS) and time expression attribute identification (TA) for which we developed hybrid of rule-based and machine learning based methods using Stanford TokensRegex framework and Stanford Named Entity Recognizer and evaluated it on the THYME corpus. Our hybrid system achieved a maximum F-score of 0.73 for identification of time spans (TS) and 0.71 for identification of time attributes (TA).","label_nlp4sg":1,"task":[],"method":[],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We thank Mayo clinic and clinical TempEval organizers for providing access to THYME corpus and other helps provided for our participation in the competition.","year":2016,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"sosea-caragea-2021-emlm","url":"https:\/\/aclanthology.org\/2021.acl-short.38","title":"eMLM: A New Pre-training Objective for Emotion Related Tasks","abstract":"Bidirectional Encoder Representations from Transformers (BERT) have been shown to be extremely effective on a wide variety of natural language processing tasks, including sentiment analysis and emotion detection. However, the proposed pre-training objectives of BERT do not induce any sentiment or emotion-specific biases into the model. In this paper, we present Emotion Masked Language Modeling, a variation of Masked Language Modeling, aimed at improving the BERT language representation model for emotion detection and sentiment analysis tasks. Using the same pre-training corpora as the original BERT model, Wikipedia and BookCorpus, our BERT variation manages to improve the downstream performance on 4 tasks for emotion detection and sentiment analysis by an average of 1.2% F1. Moreover, our approach shows an increased performance in our task-specific robustness tests. We make our code and pre-trained model available at https:\/\/github.com\/tsosea2\/eMLM.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank our anonymous reviewers for their constructive comments and feedback. This work is partially supported by the NSF Grants IIS-1912887 and IIS-1903963. 
Any opinions, findings, and conclusions expressed here are those of the authors and do not necessarily reflect the views of NSF. The computation for this project was performed on Amazon Web Services through a research grant.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"duong-etal-2014-get","url":"https:\/\/aclanthology.org\/D14-1096","title":"What Can We Get From 1000 Tokens? A Case Study of Multilingual POS Tagging For Resource-Poor Languages","abstract":"We unintentionally misrepresented Garrette et al. (2013) in the published version of this paper by stating that they required an external tag dictionary. We have corrected these inaccuracies to reflect their modest data requirements.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Dan Garreette, Jason Baldridge and Noah Smith for Malagasy and Kinyarwanda datasets. This work was supported by the University of Melbourne and National ICT Australia (NICTA). NICTA is funded by the Australian Federal and Victoria State Governments, and the Australian Research Council through the ICT Centre of Excellence program. Dr Cohn is the recipient of an Australian Research Council Future Fellowship (project number FT130101105).","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"ortega-etal-2019-adviser","url":"https:\/\/aclanthology.org\/P19-3016","title":"ADVISER: A Dialog System Framework for Education \\& Research","abstract":"In this paper, we present ADVISER 1-an open source dialog system framework for education and research purposes. This system supports multi-domain task-oriented conversations in two languages. It additionally provides a flexible architecture in which modules can be arbitrarily combined or exchanged-allowing for easy switching between rules-based and neural network based implementations. Furthermore, ADVISER offers a transparent, user-friendly framework designed for interdisciplinary collaboration: from a flexible back end, allowing easy integration of new features, to an intuitive graphical user interface supporting nontechnical users.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Quality Education","goal2":"Industry, Innovation and Infrastructure","goal3":null,"acknowledgments":"We would like to thank all the voluntary students at the University of Stuttgart for their participation in the evaluation. This work was funded by the Carl Zeiss Foundation.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"kuo-etal-2010-using","url":"https:\/\/aclanthology.org\/O10-5003","title":"Using Linguistic Features to Predict Readability of Short Essays for Senior High School Students in Taiwan","abstract":"We investigated the problem of classifying short essays used in comprehension tests for senior high school students in Taiwan. The tests were for first and second year students, so the answers included only four categories, each for one semester of the first two years. A random-guess approach would achieve only 25% in accuracy for our problem. We analyzed three publicly available scores for readability, but did not find them directly applicable. 
By considering a wide array of features at the levels of word, sentence, and essay, we gradually improved the F measure achieved by our classifiers from 0.381 to 0.536.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Quality Education","goal2":null,"goal3":null,"acknowledgments":"The work was supported in part by the funding from the National Science Council in Taiwan under the contracts NSC-97-2221-004-007, NSC-98-2815-C-004-003-E, and NSC-99-2221-004-007. The authors would like to thank Miss Min-Hua Lai for her technical support in this study and Professor Zhao-Ming Gao for his comments on an earlier report (Kuo et al., 2009) ","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":1,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"hall-nemec-2007-generation","url":"https:\/\/aclanthology.org\/W07-0408","title":"Generation in Machine Translation from Deep Syntactic Trees","abstract":"In this paper we explore a generative model for recovering surface syntax and strings from deep-syntactic tree structures. Deep analysis has been proposed for a number of language and speech processing tasks, such as machine translation and paraphrasing of speech transcripts. In an effort to validate one such formalism of deep syntax, the Praguian Tectogrammatical Representation (TR), we present a model of synthesis for English which generates surface-syntactic trees as well as strings. We propose a generative model for function word insertion (prepositions, definite\/indefinite articles, etc.) and subphrase reordering. We show by way of empirical results that this model is effective in constructing acceptable English sentences given impoverished trees.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"voutilainen-1995-syntax","url":"https:\/\/aclanthology.org\/E95-1022","title":"A syntax-based part-of-speech analyser","abstract":"There are two main methodologies for constructing the knowledge base of a natural language analyser: the linguistic and the data-driven. Recent state-of-the-art part-of-speech taggers are based on the data-driven approach. Because of the known feasibility of the linguistic rule-based approach at related levels of description, the success of the data-driven approach in part-of-speech analysis may appear surprising. In this paper, a case is made for the syntactic nature of part-of-speech tagging. A new tagger of English that uses only linguistic distributional rules is outlined and empirically evaluated. Tested against a benchmark corpus of 38,000 words of previously unseen text, this syntax-based system reaches an accuracy of above 99%. Compared to the 95-97% accuracy of its best competitors, this result suggests the feasibility of the linguistic approach also in part-of-speech analysis.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I would like to thank Timo J\u00e4rvinen, Jussi Piitulainen, Pasi Tapanainen and two EACL referees for useful comments on an earlier version of this paper. 
The usual disclaimers hold.","year":1995,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"mirovsky-etal-2012-tectogrammatics","url":"https:\/\/aclanthology.org\/C12-2083","title":"Does Tectogrammatics Help the Annotation of Discourse?","abstract":"In the following paper, we discuss and evaluate the benefits that deep syntactic trees (tectogrammatics) and all the rich annotation of the Prague Dependency Treebank bring to the process of annotating the discourse structure, i.e. discourse relations, connectives and their arguments. The decision to annotate discourse structure directly on the trees contrasts with the majority of similarly aimed projects, usually based on the annotation of linear texts. Our basic assumption is that some syntactic features of a sentence analysis correspond to certain discourselevel features. Hence, we use some properties of the dependency-based large-scale treebank of Czech to help establish an independent annotation layer of discourse. The question that we answer in the paper is how much did we gain by employing this approach.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We gratefully acknowledge support from the Grant Agency of the Czech Republic (grants P406\/12\/0658 and P406\/2010\/0875) and from the Ministry of Education, Youth and Sports in the Czech Republic, program KONTAKT (ME10018) and the LINDAT-Clarin project (LM2010013).","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"mihalcea-etal-2004-senseval","url":"https:\/\/aclanthology.org\/W04-0807","title":"The Senseval-3 English lexical sample task","abstract":"This paper presents the task definition, resources, participating systems, and comparative results for the English lexical sample task, which was organized as part of the SENSEVAL-3 evaluation exercise. The task drew the participation of 27 teams from around the world, with a total of 47 systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Many thanks to all those who contributed to the Open Mind Word Expert project, making this task possible. In particular, we are grateful to Gwen Lenker -our most productive contributor. We are also grateful to all the participants in this task, for their hard work and involvement in this evaluation exercise. 
Without them, all these comparative analyses would not be possible. We are indebted to the Princeton WordNet team, for making WordNet available free of charge, and to Robert Parks from Wordsmyth, for making available the verb entries used in this evaluation. We are particularly grateful to the National Science Foundation for their support under research grant IIS-0336793, and to the University of North Texas for a research grant that provided funding for contributor prizes.","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"calixto-etal-2017-human","url":"https:\/\/aclanthology.org\/W17-2004","title":"Human Evaluation of Multi-modal Neural Machine Translation: A Case-Study on E-Commerce Listing Titles","abstract":"In this paper, we study how humans perceive the use of images as an additional knowledge source to machine-translate user-generated product listings in an e-commerce company. We conduct a human evaluation where we assess how a multi-modal neural machine translation (NMT) model compares to two text-only approaches: a conventional state-of-the-art attention-based NMT and a phrase-based statistical machine translation (PBSMT) model. We evaluate translations obtained with different systems and also discuss the data set of user-generated product listings, which in our case comprises both product listings and associated images. We found that humans preferred translations obtained with a PBSMT system to both text-only and multi-modal NMT over 56% of the time. Nonetheless, human evaluators ranked translations from a multi-modal NMT model as better than those of a text-only NMT over 88% of the time, which suggests that images do help NMT in this use-case.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The ADAPT Centre for Digital Content Technology (www.adaptcentre.ie) at Dublin City University is funded under the Science Foundation Ireland Research Centres Programme (Grant 13\/RC\/2106) and is co-funded under the European Regional Development Fund.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"wang-huang-2011-compound","url":"https:\/\/aclanthology.org\/Y11-1054","title":"Compound Event Nouns of the `Modifier-head' Type in Mandarin Chinese","abstract":"Event nouns can lexically encode eventive information. Recently these nouns have generated considerable scholarly interest. However, little research has been conducted in their morphological and syntactic structure, qualia modification, event representing feature, and information inheritance characteristics. This study has these main findings. 1) Morphologically, the modifier and the head is either free or bound morpheme. Syntactically the modifier is a nominal, adjectival, verbal or numeral morpheme, while the head is a nominal morpheme. 2) The modifier acts as a qualia role of the head. 3) All heads represent events, while the modifier is or is not an event. 
4) The semantic information of a compound event noun can be inherited from the modifier or the head.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"georgiev-etal-2009-joint","url":"https:\/\/aclanthology.org\/W09-4503","title":"A Joint Model for Normalizing Gene and Organism Mentions in Text","abstract":"The aim of gene mention normalization is to propose an appropriate canonical name, or an identifier from a popular database, for a gene or a gene product mentioned in a given piece of text. The task has attracted a lot of research attention for several organisms under the assumption that both the mention boundaries and the target organism are known. Here we extend the task to also recognizing whether the gene mention is valid and to finding the organism it is from. We solve this extended task using a joint model for gene and organism name normalization which allows for instances from different organisms to share features, thus achieving sizable performance gains with different learning methods: Na\u00efve Bayes, Maximum Entropy, Perceptron and mira, as well as averaged versions of the last two. The evaluation results for our joint classifier show F1 score of over 97%, which proves the potential of the approach.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"The work reported in this paper was partially supported by the EU FP7 project 215535 LarKC.","year":2009,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"mahesh-etal-1997-flaunt","url":"https:\/\/aclanthology.org\/1997.tmi-1.1","title":"If you have it, flaunt it: using full ontological knowledge for word sense disambiguation","abstract":"Word sense disambiguation continues to be a difficult problem in natural language processing. Current methods, such as marker passing and spreading activation, for applying world knowledge in the form of selectional preferences to solve this problem do not make effective use of available knowledge. Moreover, their effectiveness decreases as the knowledge is made richer by acquiring more and more conceptual relationships. Effective resolution of word sense ambiguities requires inferring the dynamic context in processing a sentence in order to find the right selectional preferences to be applied. In this article, we propose such an inference operator and show how it finds the most specific context to resolve word sense ambiguities in the Mikrokosmos semantic analyzer. Our method retains its effectiveness even in a rich, large-scale knowledge base with a high degree of connectivity among its concepts.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"kwon-etal-2020-hierarchical","url":"https:\/\/aclanthology.org\/2020.coling-main.424","title":"Hierarchical Trivia Fact Extraction from Wikipedia Articles","abstract":"Recently, automatic trivia fact extraction has attracted much research interest. 
Modern search engines have begun to provide trivia facts as the information for entities because they can motivate more user engagement. In this paper, we propose a new unsupervised algorithm that automatically mines trivia facts for a given entity. Unlike previous studies, the proposed algorithm targets at a single Wikipedia article and leverages its hierarchical structure via top-down processing. Thus, the proposed algorithm offers two distinctive advantages: it does not incur high computation time, and it provides a domain-independent approach for extracting trivia facts. Experimental results demonstrate that the proposed algorithm is over 100 times faster than the existing method which considers Wikipedia categories. Human evaluation demonstrates that the proposed algorithm can mine better trivia facts regardless of the target entity domain and outperforms the existing methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"rykova-werner-2019-perceptual","url":"https:\/\/aclanthology.org\/W19-6127","title":"Perceptual and acoustic analysis of voice similarities between parents and young children","abstract":"Human voice provides the means for verbal communication and forms a part of personal identity. Due to genetic and environmental factors, a voice of a child should resemble the voice of her parent(s), but voice similarities between parents and young children are underresearched. Read-aloud speech of Finnish-speaking and Russian-speaking parent-child pairs was subject to perceptual and multi-step instrumental and statistical analysis. Finnish-speaking listeners could not discriminate family pairs auditorily in an XAB paradigm, but the Russian-speaking listeners' mean accuracy of answers reached 72.5%. On average, in both language groups family-internal f0 similarities were stronger than family-external, with parents showing greater family-internal similarities than children. Auditory similarities did not reflect acoustic similarities in a straightforward way.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"silverman-1989-microphone","url":"https:\/\/aclanthology.org\/H89-2063","title":"A Microphone Array System for Speech Recognition","abstract":"The ultimate speech recognizer cannot use an attached or desk-mounted microphone. Array techniques offer the opportunity to free a talker from microphone incumberance. My goal is to develop algorithms and systems for this purpose.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1989,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"furuse-iida-1992-cooperation","url":"https:\/\/aclanthology.org\/C92-2097","title":"Cooperation between Transfer and Analysis in Example-Based Framework","abstract":"Transfer-Driven Machine Translation (TDMT) is presented as a method which drives the translation processes according to the nature of the input. 
In TDMT, transfer knowledge is the central knowledge of translation, and various kinds and levels of knowledge are cooperatively applied to input sentences. TDMT effectively utilizes an example-based framework for transfer and analysis knowledge. A consistent framework of examples makes the cooperation between transfer and analysis effective, and efficient translation is achieved. The TDMT prototype system, which translates Japanese spoken dialogs into English, has shown great promise.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I would like to thank the members of the ATR Interpreting Telephony Research Laboratories for their comments on various parts of this research. Special thanks are due to Dr. Kohei Habara, the chairman of the board of ATR Interpreting Telephony Research Laboratories, and Dr. Akira Kurematsu, the president of ATR Interpreting Telephony Research Laboratories, for their support of this research.","year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"henrich-etal-2012-webcage","url":"https:\/\/aclanthology.org\/E12-1039","title":"WebCAGe -- A Web-Harvested Corpus Annotated with GermaNet Senses","abstract":"This paper describes an automatic method for creating a domain-independent sense-annotated corpus harvested from the web. As a proof of concept, this method has been applied to German, a language for which sense-annotated corpora are still in short supply. The sense inventory is taken from the German wordnet GermaNet. The web-harvesting relies on an existing mapping of GermaNet to the German version of the web-based dictionary Wiktionary. The data obtained by this method constitute WebCAGe (short for: Web-Harvested Corpus Annotated with GermaNet Senses), a resource which currently represents the largest sense-annotated corpus available for German. While the present paper focuses on one particular language, the method as such is language-independent.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research reported in this paper was jointly funded by the SFB 833 grant of the DFG and by the CLARIN-D grant of the BMBF. We would like to thank Christina Hoppermann, Marie Hinrichs as well as three anonymous EACL 2012 reviewers for their helpful comments on earlier versions of this paper. We are very grateful to Reinhild Barkey, Sarah Schulz, and Johannes Wahle for their help with the evaluation reported in Section 5. Special thanks go to Yana Panchenko and Yannick Versley for their support with the webcrawler and to Emanuel Dima and Klaus Suttner for helping us to obtain the Gutenberg and Wikipedia texts.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"muller-etal-2000-inducing","url":"https:\/\/aclanthology.org\/P00-1029","title":"Inducing Probabilistic Syllable Classes Using Multivariate Clustering","abstract":"An approach to automatic detection of syllable structure is presented. We demonstrate a novel application of EM-based clustering to multivariate data, exemplified by the induction of 3- and 5-dimensional probabilistic syllable classes. The qualitative evaluation shows that the method yields phonologically meaningful syllable classes. 
We then propose a novel approach to grapheme-to-phoneme conversion and show that syllable structure represents valuable information for pronunciation systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"xiao-etal-2021-ernie","url":"https:\/\/aclanthology.org\/2021.naacl-main.136","title":"ERNIE-Gram: Pre-Training with Explicitly N-Gram Masked Language Modeling for Natural Language Understanding","abstract":"Coarse-grained linguistic information, such as named entities or phrases, facilitates adequately representation learning in pre-training. Previous works mainly focus on extending the objective of BERT's Masked Language Modeling (MLM) from masking individual tokens to contiguous sequences of n tokens. We argue that such contiguously masking method neglects to model the intra-dependencies and interrelation of coarse-grained linguistic information. As an alternative, we propose ERNIE-Gram, an explicitly n-gram masking method to enhance the integration of coarse-grained information into pre-training. In ERNIE-Gram, n-grams are masked and predicted directly using explicit n-gram identities rather than contiguous sequences of n tokens. Furthermore, ERNIE-Gram employs a generator model to sample plausible n-gram identities as optional n-gram masks and predict them in both coarse-grained and fine-grained manners to enable comprehensive n-gram prediction and relation modeling. We pre-train ERNIE-Gram on English and Chinese text corpora and finetune on 19 downstream tasks. Experimental results show that ERNIE-Gram outperforms previous pre-training models like XLNet and RoBERTa by a large margin, and achieves comparable results with state-of-the-art methods. The source codes and pre-trained models have been released at https:\/\/github.com\/PaddlePaddle\/ERNIE.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Zhen Li for his constructive suggestions, and hope everything goes well with his work. We are also indebted to the NAACL-HLT reviewers for their detailed and insightful comments on our work.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"weller-di-marco-2017-simple","url":"https:\/\/aclanthology.org\/W17-1722","title":"Simple Compound Splitting for German","abstract":"This paper presents a simple method for German compound splitting that combines a basic frequency-based approach with a form-to-lemma mapping to approximate morphological operations. With the exception of a small set of hand-crafted rules for modeling transitional elements, our approach is resource-poor. 
In our evaluation, the simple splitter outperforms a splitter relying on rich morphological resources.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This project has received funding from the Euro- ","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"chen-etal-2013-human","url":"https:\/\/aclanthology.org\/I13-1182","title":"Human-Computer Interactive Chinese Word Segmentation: An Adaptive Dirichlet Process Mixture Model Approach","abstract":"Previous research shows that Kalman filter based human-computer interactive Chinese word segmentation achieves an encouraging effect in reducing user interventions, but suffers from the drawback of incompetence in distinguishing segmentation ambiguities. This paper proposes a novel approach to handle this problem by using an adaptive Dirichlet process mixture model. By adjusting the hyperparameters of the model, ideal classifiers can be generated to conform to the interventions provided by the users. Experiments reveal that our approach achieves a notable improvement in handling segmentation ambiguities. With knowledge learnt from users, our model outperforms the baseline Kalman filter model by about 0.5% in segmenting homogeneous texts.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Professor Sujian Li for her valuable advice on writing this paper. This work is partially supported by Open Project Program of the National Laboratory of Pattern Recognition (NLPR) and the Opening Project of Beijing Key Laboratory of Internet Culture and Digital Dissemination Research (ICDD201102).","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"bondale-sreenivas-2012-emotiphons","url":"https:\/\/aclanthology.org\/W12-5308","title":"Emotiphons: Emotion Markers in Conversational Speech - Comparison across Indian Languages","abstract":"In spontaneous speech, emotion information is embedded at several levels: acoustic, linguistic, gestural (non-verbal), etc. For emotion recognition in speech, there is much attention to acoustic level and some attention at the linguistic level. In this study, we identify paralinguistic markers for emotion in the language. We study two Indian languages belonging to two distinct language families. We consider Marathi from Indo-Aryan and Kannada from Dravidian family. We show that there exist large numbers of specific paralinguistic emotion markers in these languages, referred to as emotiphons. They are intertwined with prosody and semantics. Preprocessing of speech signal with respect to emotiphons would facilitate emotion recognition in speech for Indian languages. 
Some of them are common between the two languages, indicating cultural influence in language usage.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"simard-1999-text","url":"https:\/\/aclanthology.org\/W99-0602","title":"Text-Translation Alignment: Three Languages Are Better Than Two","abstract":"In this article, we show how a bilingual text-translation alignment method can be adapted to deal with more than two versions of a text. Experiments on a trilingual corpus demonstrate that this method yields better bilingual alignments than can be obtained with bilingual text-alignment methods. Moreover, for a given number of texts, the computational complexity of the multilingual method is the same as for bilingual alignment.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Many of the ideas expressed here emerged from informal exchanges with Fathi Debili and Pierre Isabelle; I am greatly indebted to both for their constant support throughout this project. I also wish to thank the anonymous reviewers for their constructive comments on the paper. ","year":1999,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"huang-etal-2020-texthide","url":"https:\/\/aclanthology.org\/2020.findings-emnlp.123","title":"TextHide: Tackling Data Privacy in Language Understanding Tasks","abstract":"An unsolved challenge in distributed or federated learning is to effectively mitigate privacy risks without slowing down training or reducing accuracy. In this paper, we propose TextHide aiming at addressing this challenge for natural language understanding tasks. It requires all participants to add a simple encryption step to prevent an eavesdropping attacker from recovering private text data. Such an encryption step is efficient and only affects the task performance slightly. In addition, TextHide fits well with the popular framework of fine-tuning pre-trained language models (e.g., BERT) for any sentence or sentence-pair task. We evaluate TextHide on the GLUE benchmark, and our experiments show that TextHide can effectively defend attacks on shared gradients or representations and the averaged accuracy reduction is only 1.9%. We also present an analysis of the security of TextHide using a conjecture about the computational intractability of a mathematical problem. 1","label_nlp4sg":1,"task":[],"method":[],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"This project is supported in part by the Graduate Fellowship at Princeton University, Ma Huateng Foundation, Schmidt Foundation, Simons Foundation, NSF, DARPA\/SRC, Google and Amazon AWS. 
Arora and Song were at the Institute for Advanced Study during this research.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} +{"ID":"nn-1960-questions-discussion-10","url":"https:\/\/aclanthology.org\/1960.earlymt-nsmt.58","title":"Questions and Discussion 10","abstract":"The statement was implied that, with the aid of compilers, a linguist who did not know the machine would be able to sit down and write his program in such a way that he would have a successful running program.\nOur experience with automatic programming in the area of scientific programming seems to indicate that the man has to know the machine, otherwise he is going to get himself into a lot of trouble.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1960,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"li-etal-2020-mapping","url":"https:\/\/aclanthology.org\/2020.acl-main.729","title":"Mapping Natural Language Instructions to Mobile UI Action Sequences","abstract":"We present a new problem: grounding natural language instructions to mobile user interface actions, and create three new datasets for it. For full task evaluation, we create PIXELHELP, a corpus that pairs English instructions with actions performed by people on a mobile UI emulator. To scale training, we decouple the language and action data by (a) annotating action phrase spans in HowTo instructions and (b) synthesizing grounded descriptions of actions for mobile user interfaces. We use a Transformer to extract action phrase tuples from long-range natural language instructions. A grounding Transformer then contextually represents UI objects using both their content and screen position and connects them to object descriptions. Given a starting screen and instruction, our model achieves 70.59% accuracy on predicting complete ground-truth action sequences in PIXELHELP.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank our anonymous reviewers for their insightful comments that improved the paper. Many thanks to the Google Data Compute team, especially Ashwin Kakarla and Muqthar Mohammad for their help with the annotations, and Song Wang, Justin Cui and Christina Ou for their help on early data preprocessing.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"meng-etal-2021-empirical","url":"https:\/\/aclanthology.org\/2021.naacl-main.396","title":"An Empirical Study on Neural Keyphrase Generation","abstract":"Recent years have seen a flourishing of neural keyphrase generation (KPG) works, including the release of several large-scale datasets and a host of new models to tackle them. Model performance on KPG tasks has increased significantly with evolving deep learning research. However, there lacks a comprehensive comparison among different model designs, and a thorough investigation on related factors that may affect a KPG system's generalization performance. In this empirical study, we aim to fill this gap by providing extensive experimental results and analyzing the most crucial factors impacting the generalizability of KPG models. 
We hope this study can help clarify some of the uncertainties surrounding the KPG task and facilitate future research on this topic.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"RM was supported by the Amazon Research Awards for the project \"Transferable, Controllable, Applicable Keyphrase Generation\". This research was partially supported by the University of Pittsburgh Center for Research Computing through the resources provided. The authors thank the anonymous NAACL reviewers for their helpful feedback and suggestions.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"chae-2013-myths","url":"https:\/\/aclanthology.org\/Y13-1054","title":"Myths in Korean Morphology and Their Computational Implications","abstract":"This paper examines some popular misanalyses in Korean morphology. For example, contrary to popular myth, the verbal ha-and the element-(nu)n-cannot be analyzed as a derivational affix and as a present tense marker, respectively. We will see that ha-is an independent word and that-(nu)n-is part of a portmanteau morph. In providing reasonable analyses of them, we will consider some computational implications of the misanalyses. It is really mysterious that such wrong analyses can become so popular in a scientific field of linguistics.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are thankful to the anonymous reviewers, whose valuable comments have been very helpful in improving the quality of this paper. This work was supported by a 2013 research grant from Hankuk University of Foreign Studies.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"volk-2006-bad","url":"https:\/\/aclanthology.org\/W06-2112","title":"How Bad is the Problem of PP-Attachment? A Comparison of English, German and Swedish","abstract":"The correct attachment of prepositional phrases (PPs) is a central disambiguation problem in parsing natural languages. This paper compares the baseline situation in English, German and Swedish based on manual PP attachments in various treebanks for these languages. We argue that cross-language comparisons of the disambiguation results in previous research is impossible because of the different selection procedures when building the training and test sets. We perform uniform treebank queries and show that English has the highest noun attachment rate followed by Swedish and German. We also show that the high rate in English is dominated by the preposition of. From our study we derive a list of criteria for profiling data sets for PP attachment experiments.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"zosa-granroth-wilding-2019-multilingual","url":"https:\/\/aclanthology.org\/R19-1159","title":"Multilingual Dynamic Topic Model","abstract":"Dynamic topic models (DTMs) capture the evolution of topics and trends in time series data. Current DTMs are applicable only to monolingual datasets. 
In this paper we present the multilingual dynamic topic model (ML-DTM), a novel topic model that combines DTM with an existing multilingual topic modeling method to capture crosslingual topics that evolve across time. We present results of this model on a parallel German-English corpus of news articles and a comparable corpus of Finnish and Swedish news articles. We demonstrate the capability of ML-DTM to track significant events related to a topic and show that it finds distinct topics and performs as well as existing multilingual topic models in aligning cross-lingual topics.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been supported by the European Union's Horizon 2020 research and innovation programme under grants 770299 (NewsEye) and 825153 (EMBEDDIA).","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"milewski-etal-2020-scene","url":"https:\/\/aclanthology.org\/2020.aacl-main.50","title":"Are Scene Graphs Good Enough to Improve Image Captioning?","abstract":"Many top-performing image captioning models rely solely on object features computed with an object detection model to generate image descriptions. However, recent studies propose to directly use scene graphs to introduce information about object relations into captioning, hoping to better describe interactions between objects. In this work, we thoroughly investigate the use of scene graphs in image captioning. We empirically study whether using additional scene graph encoders can lead to better image descriptions and propose a conditional graph attention network (C-GAT), where the image captioning decoder state is used to condition the graph updates. Finally, we determine to what extent noise in the predicted scene graphs influence caption quality. Overall, we find no significant difference between models that use scene graph features and models that only use object detection features across different captioning metrics, which suggests that existing scene graph generation models are still too noisy to be useful in image captioning. Moreover, although the quality of predicted scene graphs is very low in general, when using high quality scene graphs we obtain gains of up to 3.3 CIDEr compared to a strong Bottom-Up Top-Down baseline. 1","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the COST Action CA18231 for funding a research visit to collaborate on this project. This work is funded by the European Research Council (ERC) under the ERC Advanced Grant 788506. IC has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sk\u0142odowska-Curie grant agreement No 838188.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"declerck-etal-2018-integrated","url":"https:\/\/aclanthology.org\/L18-1094","title":"An Integrated Formal Representation for Terminological and Lexical Data included in Classification Schemes","abstract":"This paper presents our work dealing with a potential application in e-lexicography: the automatized creation of specialized multilingual dictionaries from structured data, which are available in the form of comparable multilingual classification schemes or taxonomies. 
As starting examples, we use comparable industry classification schemes, which frequently occur in the context of stock exchanges and business reports. Initially, we planned to follow an approach based on cross-taxonomies and cross-languages string mapping to automatically detect candidate multilingual dictionary entries for this specific domain. However, the need to first transform the comparable classification schemes into a shared formal representation language in order to be able to properly align their components before implementing the algorithms for the multilingual lexicon extraction soon became apparent. We opted for the SKOS-XL vocabulary for modelling the multilingual terminological part of the comparable taxonomies and for OntoLex-Lemon for modelling the multilingual lexical entries which can be extracted from the original data. In this paper, we present the suggested modelling architecture, which demonstrates how terminological elements and lexical items can be formally integrated and explicitly cross-linked in the context of the Linguistic Linked Open Data (LLOD).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"vincze-2013-weasels","url":"https:\/\/aclanthology.org\/I13-1044","title":"Weasels, Hedges and Peacocks: Discourse-level Uncertainty in Wikipedia Articles","abstract":"Uncertainty is an important linguistic phenomenon that is relevant in many areas of language processing. While earlier research mostly concentrated on the semantic aspects of uncertainty, here we focus on discourse-and pragmaticsrelated aspects of uncertainty. We present a classification of such linguistic phenomena and introduce a corpus of Wikipedia articles in which the presented types of discourse-level uncertainty-weasel, hedge and peacock-have been manually annotated. We also discuss some experimental results on discourse-level uncertainty detection.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by the European Union and the European Social Fund through the project FuturICT.hu (grant no.: T\u00c1MOP-4.2.2.C-11\/1\/KONV-2012-0013).","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"church-1988-stochastic","url":"https:\/\/aclanthology.org\/A88-1019","title":"A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text","abstract":"It is well-known that part of speech depends on context. The word \"table,\" for example, can be a verb in some contexts (e.g., \"He will table the motion\") and a noun in others (e.g., \"The table is ready\"). A program has been written which tags each word in an input sentence with the most likely part of speech. 
The program produces the following output for the two \"table\" sentences just mentioned:\n\u2022 He\/PPS will\/MD table\/VB the\/AT motion\/NN .\/.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1988,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"marimon-etal-2017-annotation","url":"https:\/\/aclanthology.org\/W17-1807","title":"Annotation of negation in the IULA Spanish Clinical Record Corpus","abstract":"This paper presents the IULA Spanish Clinical Record Corpus, a corpus of 3,194 sentences extracted from anonymized clinical records and manually annotated with negation markers and their scope. The corpus was conceived as a resource to support clinical text-mining systems, but it is also a useful resource for other Natural Language Processing systems handling clinical texts: automatic encoding of clinical records, diagnosis support, term extraction, among others, as well as for the study of clinical texts. The corpus is publicly available with a CC-BY-SA 3.0 license.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We want to acknowledge the support of Dra. Pilar Bel-Rafecas, clinician, and the comments and suggestions of the two anonymous reviewers that have contributed to improve the final version of this paper. This work was partially supported by the project TUNER (TIN2015-65308-C5-1-R, MINECO\/FEDER)","year":2017,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"giovanni-moller-etal-2020-nlp","url":"https:\/\/aclanthology.org\/2020.wnut-1.44","title":"NLP North at WNUT-2020 Task 2: Pre-training versus Ensembling for Detection of Informative COVID-19 English Tweets","abstract":"With the COVID-19 pandemic raging worldwide since the beginning of the 2020 decade, the need for monitoring systems to track relevant information on social media is vitally important. This paper describes our submission to the WNUT-2020 Task 2: Identification of informative COVID-19 English Tweets. We investigate the effectiveness for a variety of classification models, and found that domain-specific pre-trained BERT models lead to the best performance. On top of this, we attempt a variety of ensembling strategies, but these attempts did not lead to further improvements. Our final best model, the standalone CT-BERT model, proved to be highly competitive, leading to a shared first place in the shared task. Our results emphasize the importance of domain and task-related pre-training. 1","label_nlp4sg":1,"task":[],"method":[],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We would like to thank the organizers for this shared task. Part of this research is supported by a grant from Danmarks Frie Forskningsfond (9063-00077B).","year":2020,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"lui-cook-2013-classifying","url":"https:\/\/aclanthology.org\/U13-1003","title":"Classifying English Documents by National Dialect","abstract":"We investigate national dialect identification, the task of classifying English documents according to their country of origin. 
We use corpora of known national origin as a proxy for national dialect. In order to identify general (as opposed to corpus-specific) characteristics of national dialects of English, we make use of a variety of corpora of different sources, with inter-corpus variation in length, topic and register. The central intuition is that features that are predictive of national origin across different data sources are features that characterize a national dialect. We examine a number of classification approaches motivated by different areas of research, and evaluate the performance of each method across 3 national dialects: Australian, British, and Canadian English. Our results demonstrate that there are lexical and syntactic characteristics of each national dialect that are consistent across data sources.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"hassan-etal-2008-tracking","url":"https:\/\/aclanthology.org\/C08-1040","title":"Tracking the Dynamic Evolution of Participants Salience in a Discussion","abstract":"We introduce a technique for analyzing the temporal evolution of the salience of participants in a discussion. Our method can dynamically track how the relative importance of speakers evolve over time using graph based techniques. Speaker salience is computed based on the eigenvector centrality in a graph representation of participants in a discussion. Two participants in a discussion are linked with an edge if they use similar rhetoric. The method is dynamic in the sense that the graph evolves over time to capture the evolution inherent to the participants salience. We used our method to track the salience of members of the US Senate using data from the US Congressional Record. Our analysis investigated how the salience of speakers changes over time. Our results show that the scores can capture speaker centrality in topics as well as events that result in change of salience or influence among different participants.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Partnership for the goals","goal2":null,"goal3":null,"acknowledgments":"This paper is based upon work supported by the National Science Foundation under Grant No. 0527513, \"DHB: The dynamics of Political Representation and Political Rhetoric\". Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the National Science Foundation.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":1} +{"ID":"swanson-etal-2013-context","url":"https:\/\/aclanthology.org\/P13-1030","title":"A Context Free TAG Variant","abstract":"We propose a new variant of Tree-Adjoining Grammar that allows adjunction of full wrapping trees but still bears only context-free expressivity. We provide a transformation to context-free form, and a further reduction in probabilistic model size through factorization and pooling of parameters. This collapsed context-free form is used to implement efficient grammar estimation and parsing algorithms. We perform parsing experiments the Penn Treebank and draw comparisons to Tree-Substitution Grammars and between different variations in probabilistic model design. 
Examination of the most probable derivations reveals examples of the linguistically relevant structure that our variant makes possible.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"kim-etal-2019-qe","url":"https:\/\/aclanthology.org\/W19-5407","title":"QE BERT: Bilingual BERT Using Multi-task Learning for Neural Quality Estimation","abstract":"For translation quality estimation at word and sentence levels, this paper presents a novel approach based on BERT that recently has achieved impressive results on various natural language processing tasks. Our proposed model is re-purposed BERT for the translation quality estimation and uses multi-task learning for the sentence-level task and word-level subtasks (i.e., source word, target word, and target gap). Experimental results on Quality Estimation shared task of WMT19 show that our systems show competitive results and provide significant improvements over the baseline.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"lyu-etal-1998-large","url":"https:\/\/aclanthology.org\/O98-1006","title":"A Large-Vocabulary Taiwanese (Min-nan) Speech Recognition System Based on Inter-syllabic Initial-Final Modeling and Lexicon-Tree Search","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"yang-etal-2020-ggp","url":"https:\/\/aclanthology.org\/2020.lrec-1.581","title":"GGP: Glossary Guided Post-processing for Word Embedding Learning","abstract":"Word embedding learning is the task to map each word into a low-dimensional and continuous vector based on a large corpus. To enhance corpus based word embedding models, researchers utilize domain knowledge to learn more distinguishable representations via joint optimization and post-processing based models. However, joint optimization based models require much training time. Existing post-processing models mostly consider semantic knowledge so that learned embedding models show less functional information. Compared with semantic knowledge sources, glossary is a comprehensive linguistic resource which contains complete semantics. Previous glossary based post-processing method only processed words occurred in the glossary, and did not distinguish multiple senses of each word. In this paper, to make better use of glossary, we utilize attention mechanism to integrate multiple sense representations which are learned respectively. With measuring similarity between word representation and combined sense representation, we aim to capture more topical and functional information. We propose GGP (Glossary Guided Post-processing word embedding) model which consists of a global post-processing function to fine-tune each word vector, and an auto-encoding model to learn sense representations, furthermore, constrains each post-processed word representation and the composition of its sense representations to be similar. 
We evaluate our model by comparing it with two state-of-the-art models on six word topical\/functional similarity datasets, and the results show that it outperforms competitors by an average of 4.1% across all datasets. Our model also outperforms GloVe by more than 7%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work was supported by PolyU Teaching Development with project code 1.61.xx.9A5V and Hong Kong Collaborative Research Fund with project code C5026-18G.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"ebrahimi-saniee-abadeh-2012-new","url":"https:\/\/aclanthology.org\/W12-4101","title":"A New Parametric Estimation Method for Graph-based Clustering","abstract":"Relational clustering has received much attention from researchers in the last decade. In this paper we present a parametric method that employs a combination of both hard and soft clustering. Based on the corresponding Markov chain of an affinity matrix, we simulate a probability distribution on the states by defining a conditional probability for each subpopulation of states. This probabilistic model would enable us to use expectation maximization for parameter estimation. The effectiveness of the proposed approach is demonstrated on several real datasets against spectral clustering methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"maheshwari-etal-2021-scibert","url":"https:\/\/aclanthology.org\/2021.sdp-1.17","title":"SciBERT Sentence Representation for Citation Context Classification","abstract":"This paper describes our system (IREL) for 3C-Citation Context Classification shared task of the Scholarly Document Processing Workshop at NAACL 2021 (Suchetha N Kunnath and Knoth, 2021). We participated in both subtask A and subtask B. Our best system achieved a Macro F1 score of 0.26973 on the private leaderboard for subtask A and was ranked first. For subtask B our best system achieved a Macro F1 score of 0.59071 on the private leaderboard and was ranked second. We used similar models for both the subtasks with some minor changes, as discussed in this paper. Our best performing model for both subtasks was a fine-tuned SciBERT model followed by a linear layer. We provide a detailed description of all the approaches we tried and their results. The code can be found at https:\/\/github.com\/bhavyajeet\/3c-citation_text_classification","label_nlp4sg":1,"task":[],"method":[],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"chen-etal-2020-listeners","url":"https:\/\/aclanthology.org\/2020.inlg-1.26","title":"Listener's Social Identity Matters in Personalised Response Generation","abstract":"Personalised response generation enables generating human-like responses by means of assigning the generator a social identity. However, pragmatics theory suggests that human beings adjust the way of speaking based on not only who they are but also whom they are talking to. 
In other words, when modelling personalised dialogues, it might be favourable if we also take the listener's social identity into consideration. To validate this idea, we use gender as a typical example of a social variable to investigate how the listener's identity influences the language used in Chinese dialogues on social media. Also, we build personalised generators. The experiment results demonstrate that the listener's identity indeed matters in the language use of responses and that the response generator can capture such differences in language use. More interestingly, by additionally modelling the listener's identity, the personalised response generator performs better in its own identity.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their helpful comments. Guanyi Chen is supported by China Scholarship Council (No.201907720022).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"choukri-etal-2004-network","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/797.pdf","title":"Network of Data Centres (NetDC): BNSC - An Arabic Broadcast News Speech Corpus","abstract":"Broadcast news is a very rich source of Language Resources that has been exploited to develop and assess a large set of Human Language Technologies. Some examples include systems to: automatically produce text transcriptions of spoken data; identify the language of a text; translate a text from one language to another; identify topics in the news and retrieve all stories discussing a target topic; retrieve stories directly from the broadcast audio and extract summaries of the content of news stories. BNSC is a broadcast news speech corpus developed in the framework of the European-funded project Network of Data Centres (NetDC). The corpus contains more than 20 hours of Arabic news recordings in modern standard Arabic. The news was recorded over a period of 3 months and was transcribed in Arabic script. The project was done in cooperation with the LDC (Linguistic Data Consortium), which has produced a similar corpus of its Voice of America Arabic in the United States. This paper presents the BNSC corpus production from data collection to final product.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"estival-etal-2014-austalk","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/520_Paper.pdf","title":"AusTalk: an audio-visual corpus of Australian English","abstract":"This paper describes the AusTalk corpus, which was designed and created through the Big ASC, a collaborative project with the two main goals of providing a standardised infrastructure for audiovisual recordings in Australia and of producing a large audiovisual corpus of Australian English, with 3 hours of AV recordings for 1000 speakers. We first present the overall project, then describe the corpus itself and its components, the strict data collection protocol with high levels of standardisation and automation, and the processes put in place for quality control. 
We also discuss the annotation phase of the project, along with its goals and challenges; a major contribution of the project has been to explore procedures for automating annotations and we present our solutions. We conclude with the current status of the corpus and with some examples of research already conducted with this new resource. AusTalk is one of the corpora included in the Alveo Virtual Lab, which is briefly sketched in the conclusion.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We gratefully acknowledge financial and\/or in-kind assistance of the Australian Research Council (LE100100211), ASSTA; the Universities of Western Sydney, Canberra, Melbourne, NSW, Queensland, Sydney, Tasmania and Western Australia; Macquarie, Australian National, and Flinders Universities; and the Max Planck Institute for Psycholinguistics, Nijmegen.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"lu-2007-hybrid","url":"https:\/\/aclanthology.org\/N07-1024","title":"Hybrid Models for Semantic Classification of Chinese Unknown Words","abstract":"This paper addresses the problem of classifying Chinese unknown words into fine-grained semantic categories defined in a Chinese thesaurus. We describe three novel knowledge-based models that capture the relationship between the semantic categories of an unknown word and those of its component characters in three different ways. We then combine two of the knowledge-based models with a corpus-based model which classifies unknown words using contextual information. Experiments show that the knowledge-based models outperform previous methods on the same task, but the use of contextual information does not further improve performance.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"smadja-etal-1996-translating","url":"https:\/\/aclanthology.org\/J96-1001","title":"Translating Collocations for Bilingual Lexicons: A Statistical Approach","abstract":"Collocations are notoriously difficult for non-native speakers to translate, primarily because they are opaque and cannot be translated on a word-by-word basis. We describe a program named Champollion which, given a pair of parallel corpora in two different languages and a list of collocations in one of them, automatically produces their translations. Our goal is to provide a tool for compiling bilingual lexical information above the word level in multiple languages, for different domains. The algorithm we use is based on statistical methods and produces p-word translations of n-word collocations in which n and p need not be the same. For example, Champollion translates make ... decision, employment equity, and stock market into prendre ... d\u00e9cision, \u00e9quit\u00e9 en mati\u00e8re d'emploi, and bourse respectively. Testing Champollion on three years' worth of the Hansards corpus yielded the French translations of 300 collocations for each year, evaluated at 73% accuracy on average. 
In this paper, we describe the statistical measures used, the algorithm, and the implementation of Champollion, presenting our results and evaluation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported jointly by the Advanced Research Projects Agency and the Office of Naval Research under grant N00014-89-J-1782, by the Office of Naval Research under grant N00014-95-1-0745, by the National Science Foundation under grant GER-90-24069, and by the New York State Science and Technology Foundation under grants NYSSTF-CAT(91)-053 and NYSSTF-CAT(94)-013. We wish to thank Pascale Fung and Dragomir Radev for serving as evaluators, Thanasis Tsantilas for discussions relating to the average-case complexity of Champollion, and the anonymous reviewers for providing useful comments on an earlier version of the paper. We also thank Ofer Wainberg for his excellent work on improving the efficiency of Champollion and for adding the preposition extension, and Ken Church and AT&T Bell Laboratories for providing us with a prealigned Hansards corpus.","year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"veselovska-hajic-jr-2013-words","url":"https:\/\/aclanthology.org\/W13-4101","title":"Why Words Alone Are Not Enough: Error Analysis of Lexicon-based Polarity Classifier for Czech","abstract":"Lexicon-based classifier is in the long term one of the main and most effective methods of polarity classification used in sentiment analysis, i.e. computational study of opinions, sentiments and emotions expressed in text (see Liu, 2010). Although it achieves relatively good results also for Czech, the classifier still shows some error rate. This paper provides a detailed analysis of such errors caused both by the system and by human reviewers. The identified errors are representatives of the challenges faced by the entire area of opinion mining. Therefore, the analysis is essential for further research in the field and serves as a basis for meaningful improvements of the system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"van-noord-bouma-2009-parsed","url":"https:\/\/aclanthology.org\/W09-0107","title":"Parsed Corpora for Linguistics","abstract":"Knowledge-based parsers are now accurate, fast and robust enough to be used to obtain syntactic annotations for very large corpora fully automatically. We argue that such parsed corpora are an interesting new resource for linguists. 
The argument is illustrated by means of a number of recent results which were established with the help of parsed corpora.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was carried out in part in the context of the STEVIN programme which is funded by the Dutch and Flemish governments","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"biatov-kohler-2002-methods","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2002\/pdf\/176.pdf","title":"Methods and Tools for Speech Data Acquisition exploiting a Database of German Parliamentary Speeches and Transcripts from the Internet","abstract":"This paper describes methods that exploit stenographic transcripts of the German parliament to improve the acoustic models of a speech recognition system for this domain. The stenographic transcripts and the speech data are available on the Internet. Using data from the Internet makes it possible to avoid the costly process of the collection and annotation of a huge amount of data. The automatic data acquisition technique works using the stenographic transcripts and acoustic data from the German parliamentary speeches plus general acoustic models, trained on different data. The idea of this technique is to generate special finite state automata from the stenographic transcripts. These finite state automata simulate potential possible correspondences between the stenographic transcript and the spoken audio content, i.e. accurate transcript. The first step is the recognition of the speech data using finite state automaton as a language model. The next step is to find, to extract and to verify the match between sections of recognized words and actually spoken audio content. After this, the automatically extracted and verified data can be used for acoustic model training. Experiments show that for a given recognition task from the German Parliament domain the absolute decrease of the word error rate is 20%.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"This work was funded by the German Federal Ministry for Research and Education.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} +{"ID":"hatori-suzuki-2011-japanese","url":"https:\/\/aclanthology.org\/I11-1014","title":"Japanese Pronunciation Prediction as Phrasal Statistical Machine Translation","abstract":"This paper addresses the problem of predicting the pronunciation of Japanese text. The difficulty of this task lies in the high degree of ambiguity in the pronunciation of Japanese characters and words. Previous approaches have either considered the task as a word-level classification problem based on a dictionary, which does not fare well in handling out-of-vocabulary (OOV) words; or solely focused on the pronunciation prediction of OOV words without considering the contextual disambiguation of word pronunciations in text. In this paper, we propose a unified approach within the framework of phrasal statistical machine translation (SMT) that combines the strengths of the dictionary-based and substring-based approaches. 
Our approach is novel in that we combine word- and character-based pronunciations from a dictionary within an SMT framework: the former captures the idiosyncratic properties of word pronunciation, while the latter provides the flexibility to predict the pronunciation of OOV words. We show that based on an extensive evaluation on various test sets, our model significantly outperforms the previous state-of-the-art systems, achieving around 90% accuracy in most domains.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to Graham Neubig for providing us with detailed information on KyTea, and to anonymous reviewers for useful comments.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"cai-yates-2013-semantic","url":"https:\/\/aclanthology.org\/S13-1045","title":"Semantic Parsing Freebase: Towards Open-domain Semantic Parsing","abstract":"Existing semantic parsing research has steadily improved accuracy on a few domains and their corresponding databases. This paper introduces FreeParser, a system that trains on one domain and one set of predicate and constant symbols, and then can parse sentences for any new domain, including sentences that refer to symbols never seen during training. FreeParser uses a domain-independent architecture to automatically identify sentences relevant to each new database symbol, which it uses to supplement its manually-annotated training data from the training domain. In cross-domain experiments involving 23 domains, FreeParser can parse sentences for which it has seen comparable unannotated sentences with an F1 of 0.71.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This material is based upon work supported by the National Science Foundation under Grant No. IIS-1218692. We wish to thank Sophia Kohlhaas and Ragine Williams for providing data for the project.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"uzan-hacohen-kerner-2020-jct","url":"https:\/\/aclanthology.org\/2020.semeval-1.266","title":"JCT at SemEval-2020 Task 12: Offensive Language Detection in Tweets Using Preprocessing Methods, Character and Word N-grams","abstract":"In this paper, we describe our submissions to the SemEval-2020 contest. We tackled subtask 12-\"Multilingual Offensive Language Identification in Social Media\". We developed different models for four languages: Arabic, Danish, Greek, and Turkish. We applied three supervised machine learning methods using various combinations of character and word n-gram features. In addition, we applied various combinations of basic preprocessing methods. Our best submission was a model we built for offensive language identification in Danish using Random Forest. This model was ranked at the 6th position out of 39 submissions. Our result is lower by only 0.0025 than the result of the team that won the 4th place using entirely non-neural methods. Our experiments indicate that char ngram features are more helpful than word ngram features. 
This phenomenon probably occurs because tweets are more characterized by characters than by words, tweets are short, and contain various special sequences of characters, e.g., hashtags, shortcuts, slang words, and typos.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} +{"ID":"zhang-etal-2022-niutranss","url":"https:\/\/aclanthology.org\/2022.iwslt-1.19","title":"The NiuTrans's Submission to the IWSLT22 English-to-Chinese Offline Speech Translation Task","abstract":"This paper describes NiuTrans's submission to the IWSLT22 English-to-Chinese (En-Zh) offline speech translation task. The end-to-end and bilingual system is built with constrained English and Chinese data and translates the English speech to Chinese text without intermediate transcription. Our speech translation models are composed of different pre-trained acoustic models and machine translation models by two kinds of adapters. We compared the effect of the standard speech feature (e.g. log Mel-filterbank) and the pre-training speech feature and tried to make them interact. The final submission is an ensemble of three potential speech translation models. Our single best and ensemble models achieve 18.66 BLEU and 19.35 BLEU, respectively, on the MuST-C En-Zh tst-COMMON set.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by the National Science Foundation of China (Nos. 61732005 and 61876035), the China HTRD Center Project (No. 2020AAA0107904) and Yunnan Provincial Major Science and Technology Special Plan Projects (Nos. 201902D08001905 and 202103AA080015). The authors would like to thank anonymous reviewers for their valuable comments. We thank Hao Chen and Jie Wang for processing the data.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"toral-way-2015-translating","url":"https:\/\/aclanthology.org\/W15-0714","title":"Translating Literary Text between Related Languages using SMT","abstract":"We explore the feasibility of applying machine translation (MT) to the translation of literary texts. To that end, we measure the translatability of literary texts by analysing parallel corpora and measuring the degree of freedom of the translations and the narrowness of the domain. We then explore the use of domain adaptation to translate a novel between two related languages, Spanish and Catalan. This is the first time that specific MT systems are built to translate novels. Our best system outperforms a strong baseline by 4.61 absolute points (9.38% relative) in terms of BLEU and is corroborated by other automatic evaluation metrics. 
We provide evidence that MT can be useful to assist with the translation of novels between closely-related languages, namely (i) the translations produced by our best system are equal to the ones produced by a professional human translator in almost 20% of cases with an additional 10% requiring at most 5 character edits, and (ii) a complementary human evaluation shows that over 60% of the translations are perceived to be of the same (or even higher) quality by native speakers.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is supported by the European Union Seventh Framework Programme FP7\/2007-2013 under grant agreement PIAP-GA-2012-324414 (Abu-MaTran) and by Science Foundation Ireland through the CNGL Programme (Grant 12\/CE\/I2267) in the ADAPT Centre (www.adaptcentre.ie) at Dublin City University.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"zmandar-etal-2021-financial","url":"https:\/\/aclanthology.org\/2021.fnp-1.22","title":"The Financial Narrative Summarisation Shared Task FNS 2021","abstract":"This paper presents the results and findings of the Financial Narrative Summarisation Shared Task on summarising UK annual reports. The shared task was organised as part of the Financial Narrative Processing 2021 Workshop (FNP 2021 Workshop). The shared task included one main task which is the use of either abstractive or extractive automatic summarisers to summarise long documents in terms of UK financial annual reports. This shared task is the second to target financial documents. The data for the shared task was created and collected from publicly available UK annual reports published by firms listed on the London Stock Exchange. A total number of 10 systems from 5 different teams participated in the shared task. In addition, we had two baseline and two topline summarisers to help evaluate the results of the participating teams and compare them to the state-of-the-art systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"smart-2006-smart","url":"https:\/\/aclanthology.org\/2006.claw-1.2","title":"SMART Controlled English -- Paper and Demonstration","abstract":"The trend to globalization and \"outsourcing\" presents a major linguistic challenge. This paper presents a proven methodology to use SMART Controlled English to write technical documentation for global communications. Today, large corporations must adjust their business practices to communicate more effectively across all time zones and 80 languages. The use of SMART Controlled English, when coupled with Statistical Machine Translation (SMT), will become an ideal method to cross the language barrier. Introduction: The trend to globalization presents a major linguistic challenge for large and small companies. To add to this trend, most products require a high degree of computer literacy for operation and maintenance. For example, most automobiles are welded by robots, not humans. Also, the advent of \"outsourcing\" has expanded the ring of communications. 
The biggest problem is that most technical manuals are not written by professional technical writers, but by engineers who are the subject matter experts. Many advanced products, like those found in the telecommunications industry, update their technology every six months. Today, many cell phone (mobile phone) users in China update their handsets every four months to get new features. Unknown to most users, the information needed to control ring tones is some 250,000 pages of complex software documentation. The instructions to repair a complex jet engine can amount to more than 500,000 pages. According to Boeing, if all their aircraft manuals were printed and stacked end-to-end, the stack would reach to the top of Mt. Everest and back. These mountains of manuals are further compounded by the need for language translations. For example, companies like Microsoft and IBM localize their software and documentation in 70 languages. A small company seeking compliance with the Economic Union directives is faced with 20 languages. The expansion of both NATO and the EU adds more languages. Unfortunately, the demand for professional technical translators far exceeds the supply. What is the solution? Many companies have found that a controlled language approach can reach across the language boundaries with a common language. This paper and on-line demonstration http:\/\/www.smartny.com\/ControlledEnglish\/CLAW06 show how to create and use a Controlled English dictionary.\nExamples of Controlled English:\nASD-STE100 Simplified Technical English: This example shows the original text on the left side and the simplification for global aerospace markets. Note the use of a bulleted list instead of a dense block of text. The Simplified Technical English is easier to read, write and learn as a second language.\nSMART Controlled English-Telecommunications Documentation: This example shows the original text on the left and the Controlled English for a telecommunications product on the right. In this example, the gobbledygook is removed and technical information is easier to find and comprehend.\nSMART Controlled English-Medical Devices: This example shows the original text on the left and the Controlled English for a medical device on the right. In this example, the original is written by an engineer and then simplified for a service technician. The Controlled English offers a 30% saving in text and later localization costs.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"chiang-2004-uses","url":"https:\/\/aclanthology.org\/W04-3302","title":"Uses and abuses of intersected languages","abstract":"In this paper we discuss the use of intersection as a tool for modeling syntactic phenomena and folding of biological molecules. We argue that intersection is useful but easily overestimated, because intersection coordinates grammars via their string languages, and if strong generative capacity is given priority over weak generative capacity, this kind of coordination turns out to be rather limited. We give two example uses of intersection which overstep this limit, one using CFGs and one using a range concatenation grammar (RCG). 
We conclude with an analysis and example of the different kinds of parallelism available in an RCG.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by NSF ITR grant EIA-02-05456. I would like to thank Julia Hockenmaier, Laura Kallmeyer, Aravind Joshi, and the anonymous reviewers for their valuable help. S. D. G.","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"bhaskar-2013-answering","url":"https:\/\/aclanthology.org\/R13-2003","title":"Answering Questions from Multiple Documents -- the Role of Multi-Document Summarization","abstract":"We describe ongoing research on Question Answering using multi-document summarization. The system has two main sub-modules: document retrieval and multi-document summarization. We first preprocess the documents and then index them using Nutch with an NE field. Stop words are removed, NEs are tagged in each question, and all remaining question words are stemmed; the system then retrieves the 10 most relevant documents. A document graph-based, query-focused multi-document summarizer is then used, with the question words serving as the query. A document graph is constructed, where the nodes are sentences of the documents and edge scores reflect the correlation measure between the nodes. The system clusters similar texts from the graph using this edge score. Each cluster gets a weight and has a cluster center. Next, question-dependent weights are added to the corresponding cluster score. The top two ranked sentences of each cluster are identified in order, compressed, and then fused into a single sentence. The compressed and fused sentences are included in the output summary with a limit of 500 words, which is presented as the answer. The system was tested on the INEX QA track data sets from 2011 to 2013 and achieved the best readability score.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We acknowledge the support of the DeitY, MCIT, Govt. of India funded project \"Development of Cross Lingual Information Access (CLIA) System Phase II\".","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"lager-black-1994-bidirectional","url":"https:\/\/aclanthology.org\/W94-0327","title":"Bidirectional Incremental Generation and Analysis with Categorial Grammar and Indexed Quasi-Logical Form","abstract":"We describe an approach to surface generation designed for a \"pragmatics-based\" dialogue system. The implementation has been extended to deal with certain well-known difficulties with the underlying linguistic formalism (Categorial Grammar) at the same time yielding a system capable of supporting incremental generation as well as interpretation. 
Aspects of the formalism used for the initial description that constitutes the interface with the planning component are also discussed.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"de-santo-2021-minimalist","url":"https:\/\/aclanthology.org\/2021.scil-1.1","title":"A Minimalist Approach to Facilitatory Effects in Stacked Relative Clauses","abstract":"A top-down parser for Minimalist grammars (MGs; Stabler, 2013) can successfully predict a variety of off-line processing preferences, via metrics linking parsing behavior to memory load (Kobele et al., 2013; Gerth, 2015; Graf et al., 2017). The increasing empirical coverage of this model is intriguing, given its close association to modern minimalist syntax. Recently however, Zhang (2017) has argued that this framework is unable to account for a set of complexity profiles reported for English and Mandarin Chinese stacked relative clauses. Based on these observations, this paper proposes extensions to this model implementing a notion of memory reactivation, in the form of memory metrics sensitive to repetitions of movement features. We then show how these metrics derive the correct predictions for the stacked RC processing contrasts.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I would like to thank Thomas Graf, Mark Aronoff, John Baylin, and Jon Sprouse for their feedback on different stages of this research. I am also grateful to the anonymous reviewer for their constructive comments and insights.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"basili-etal-2004-a2q","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/683.pdf","title":"A2Q: An Agent-based Architecture for Multilingual Q\\&A","abstract":"In this paper we describe the agent-based architecture and extensively report the design of the shallow processing model in it. We present the general model describing the data flow and the expected activities that have to be carried out. The notion of question session will be introduced as a means to control the communication among the different agents. We then present a shallow model mainly based on an IR engine and a passage re-ranking that uses the notion of expanded query. We report a pilot investigation of the performance of the method.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"lestrade-2006-marked","url":"https:\/\/aclanthology.org\/W06-2104","title":"Marked Adpositions","abstract":"This paper discusses the partitive-genitive case alternation of Finnish adpositions. This case alternation is explained in terms of bidirectional alignment of markedness in form and meaning. 
Marked PP meanings are assigned partitive case, unmarked ones genitive case.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"schwartz-etal-2017-effect","url":"https:\/\/aclanthology.org\/K17-1004","title":"The Effect of Different Writing Tasks on Linguistic Style: A Case Study of the ROC Story Cloze Task","abstract":"A writer's style depends not just on personal traits but also on her intent and mental state. In this paper, we show how variants of the same writing task can lead to measurable differences in writing style. We present a case study based on the story cloze task (Mostafazadeh et al., 2016a), where annotators were assigned similar writing tasks with different constraints: (1) writing an entire story, (2) adding a story ending for a given story context, and (3) adding an incoherent ending to a story. We show that a simple linear classifier informed by stylistic features is able to successfully distinguish among the three cases, without even looking at the story context. In addition, combining our stylistic features with language model predictions reaches state of the art performance on the story cloze challenge. Our results demonstrate that different task framings can dramatically affect the way people write.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors thank Chenhao Tan, Luke Zettlemoyer, Rik Koncel-Kedziorski, Rowan Zellers, Yangfeng Ji and several anonymous reviewers for helpful feedback. This research was supported in part by Darpa CwC program through ARO (W911NF-15-1-0543), Samsung GRO, NSF IIS-1524371, and gifts from Google and Facebook.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"garain-etal-2020-junlp","url":"https:\/\/aclanthology.org\/2020.semeval-1.171","title":"JUNLP at SemEval-2020 Task 9: Sentiment Analysis of Hindi-English Code Mixed Data Using Grid Search Cross Validation","abstract":"Code-mixing is a phenomenon which arises mainly in multilingual societies. Multilingual people, who are well versed in their native languages and also English speakers, tend to code-mix using English-based phonetic typing and the insertion of anglicisms in their main language. This linguistic phenomenon poses a great challenge to conventional NLP domains such as Sentiment Analysis, Machine Translation, and Text Summarization, to name a few. In this work, we focus on working out a plausible solution to the domain of Code-Mixed Sentiment Analysis. This work was done as participation in the SemEval-2020 Sentimix Task, where we focused on the sentiment analysis of English-Hindi code-mixed sentences. Our username for the submission was \"sainik.mahata\" and the team name was \"JUNLP\". We used feature extraction algorithms in conjunction with traditional machine learning algorithms such as SVR and Grid Search in an attempt to solve the task. 
Our approach garnered an f1-score of 66.2% when tested using metrics prepared by the organizers of the task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"baldwin-chai-2012-autonomous","url":"https:\/\/aclanthology.org\/N12-1089","title":"Autonomous Self-Assessment of Autocorrections: Exploring Text Message Dialogues","abstract":"Text input aids such as automatic correction systems play an increasingly important role in facilitating fast text entry and efficient communication between text message users. Although these tools are beneficial when they work correctly, they can cause significant communication problems when they fail. To improve its autocorrection performance, it is important for the system to have the capability to assess its own performance and learn from its mistakes. To address this, this paper presents a novel task of self-assessment of autocorrection performance based on interactions between text message users. As part of this investigation, we collected a dataset of autocorrection mistakes from true text message users and experimented with a rich set of features in our self-assessment task. Our experimental results indicate that there are salient cues from the text message discourse that allow systems to assess their own behaviors with high precision.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by Award #0957039 from the National Science Foundation and Award #N00014-11-1-0410 from the Office of Naval Research. The authors would like to thank the reviewers for their valuable comments and suggestions.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"tate-voss-2006-combining","url":"https:\/\/aclanthology.org\/2006.amta-papers.27","title":"Combining Evaluation Metrics via Loss Functions","abstract":"When response metrics for evaluating the utility of machine translation (MT) output on a given task do not yield a single ranking of MT engines, how are MT users to decide which engine best supports their task? When the cost of different types of response errors vary, how are MT users to factor that information into their rankings? What impact do different costs have on response-based rankings? Starting with data from an extraction experiment detailed in Voss & Tate (2006), this paper describes three response-rate metrics developed to quantify different aspects of MT users' performance identifying who\/when\/where-items in MT output, and then presents a loss function analysis over these rates to derive a single customizable metric, applying a range of values to correct responses and costs to different error types. For the given experimental dataset, loss function analyses provided a clearer characterization of the engines' relative strength than did comparing the response rates to each other. For one MT engine, varying the costs had no impact: the engine consistently ranked best. By contrast, cost variations did impact the ranking of the other two engines: a rank reversal occurred on who-item extractions when incorrect responses were penalized more than non-responses. 
Future work with loss analysis, developing operational cost ratios of error rates to correct response rates, will require user studies and expert document-screening personnel to establish baseline values for effective MT engine support on wh-item extraction.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Several individuals contributed to the task-based evaluation research project, including Eric Slud (Dept. of Mathematics, U. of Maryland, College Park), Matthew Aguirre, John Hancock (Artis-Tech, Inc.), Jamal Laoudi, Sooyon Lee (ARTI), and Somiya Shukla, Joi Turner, and Michelle Vanni (ARL). This project was funded in part by the Center for Advanced Study of Language (CASL) at the University of Maryland.","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"alagic-snajder-2016-cro36wsd","url":"https:\/\/aclanthology.org\/L16-1267","title":"Cro36WSD: A Lexical Sample for Croatian Word Sense Disambiguation","abstract":"We introduce Cro36WSD, a freely-available medium-sized lexical sample for Croatian word sense disambiguation (WSD). Cro36WSD comprises 36 words: 12 adjectives, 12 nouns, and 12 verbs, balanced across both frequency bands and polysemy levels. We adopt the multi-label annotation scheme in the hope of lessening the drawbacks of discrete sense inventories and obtaining more realistic annotations from human experts. Sense-annotated data is collected through multiple annotation rounds to ensure high-quality annotations: with a 115 person-hours effort we reached an inter-annotator agreement score of 0.877. We analyze the obtained data and perform a correlation analysis between several relevant variables, including word frequency, number of senses, sense distribution skewness, average annotation time, and the observed inter-annotator agreement (IAA). Using the obtained data, we compile multi- and single-labeled dataset variants using different label aggregation schemes. Finally, we evaluate three different baseline WSD models on both dataset variants and report on the insights gained. We make both dataset variants freely available.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been fully supported by the Croatian Science Foundation under the project UIP-2014-09-7312.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"michael-etal-2018-crowdsourcing","url":"https:\/\/aclanthology.org\/N18-2089","title":"Crowdsourcing Question-Answer Meaning Representations","abstract":"We introduce Question-Answer Meaning Representations (QAMRs), which represent the predicate-argument structure of a sentence as a set of question-answer pairs. We develop a crowdsourcing scheme to show that QAMRs can be labeled with very little training, and gather a dataset with over 5,000 sentences and 100,000 questions. A qualitative analysis demonstrates that the crowd-generated question-answer pairs cover the vast majority of predicate-argument relationships in existing datasets (including PropBank, NomBank, and QA-SRL) along with many previously under-resourced ones, including implicit arguments and relations. 
We also report baseline models for question generation and answering, and summarize a recent approach for using QAMR labels to improve an Open IE system. These results suggest the freely available QAMR data and annotation scheme should support significant future work (data and code at github.com\/uwnlp\/qamr).\nExample sentence: Pierre Vinken, 61 years old, will join the board as a nonexecutive director Nov. 29.\nWho will join as nonexecutive director? - Pierre Vinken\nWhat is Pierre's last name? - Vinken\nWho is 61 years old? - Pierre Vinken\nHow old is Pierre Vinken? - 61 years old\nWhat will he join? - the board\nWhat will he join the board as? - nonexecutive director\nWhat type of director will Vinken be? - nonexecutive\nWhat day will Vinken join the board? - Nov. 29","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported in part by grants from the MAGNET program of the Israeli Office of the Chief Scientist (OCS); the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600\/1-1); the Israel Science Foundation (grant No. 1157\/16); the US NSF (IIS-1252835, IIS-1562364); and an Allen Distinguished Investigator Award.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"wei-gulla-2011-enhancing","url":"https:\/\/aclanthology.org\/I11-1037","title":"Enhancing the HL-SOT Approach to Sentiment Analysis via a Localized Feature Selection Framework","abstract":"In this paper, we propose a Localized Feature Selection (LFS) framework tailored to the HL-SOT approach to sentiment analysis. Within the proposed LFS framework, each node classifier of the HL-SOT approach is able to perform classification on target texts in a locally customized index term space. Extensive empirical analysis against a human-labeled data set demonstrates that with the proposed LFS framework the classification performance of the HL-SOT approach is enhanced with computational efficiency being greatly gained. To find the best feature selection algorithm that caters to the proposed LFS framework, five classic feature selection algorithms are comparatively studied, which indicates that the TS, DF, and MI algorithms achieve generally better performances than the CHI and IG algorithms. Among the five studied algorithms, the TS algorithm is best to be employed by the proposed LFS framework.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank the anonymous reviewers for the helpful comments on the manuscript. This work is funded by the Research Council of Norway under the VERDIKT research programme (Project No.: 183337).","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"rama-coltekin-2018-tubingen","url":"https:\/\/aclanthology.org\/K18-3014","title":"T\u00fcbingen-Oslo system at SIGMORPHON shared task on morphological inflection. A multi-tasking multilingual sequence to sequence model.","abstract":"In this paper, we describe our three submissions to the inflection track of the SIGMORPHON shared task. 
We experimented with three models: a sequence-to-sequence model (popularly known as seq2seq), a seq2seq model with data augmentation, and a multilingual multi-tasking seq2seq model. Our results with the multilingual model are below the baseline in the case of both high and medium datasets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors thank Ryan Cotterell and the rest of the organizers for the encouragement to participate in the shared task when participating on short notice. The first author is supported by the BIGMED project (a NRC Lighthouse grant) which is gratefully acknowledged. Some of the experiments reported in this paper are run on a Titan Xp donated by the NVIDIA Corporation.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"forcada-2002-using","url":"https:\/\/aclanthology.org\/2002.tmi-tmiw.3","title":"Using multilingual content on the web to build fast finite-state direct translation systems","abstract":"In this paper I try to identify and describe in certain detail a possible avenue of research in machine translation: the use of existing multilingual content on the web and finite-state technology to automatically build and maintain fast web-based direct machine translation systems, especially for language pairs lacking machine translation resources. The term direct is used to refer to systems performing no linguistic analysis, working similarly to pretranslators based on translation memories. Considering the current state of the art of (a) web mining for bitexts, (b) bitext alignment techniques, and (c) finite-state theory and implementation, I discuss their integration toward the stated goal and sketch some of the remaining challenges. The objective on the horizon is a web-based translation service exploiting the multilingual content already present on the web.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Partial support from the Spanish Comisi\u00f3n Interministerial de Ciencia y Tecnolog\u00eda through project TIC2000-1599-C02-02 is acknowledged. Thanks go to Juan Antonio P\u00e9rez-Ortiz for useful discussions.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"skurniak-etal-2018-multi","url":"https:\/\/aclanthology.org\/W18-0917","title":"Multi-Module Recurrent Neural Networks with Transfer Learning","abstract":"This paper describes multiple solutions designed and tested for the problem of word-level metaphor detection. The proposed systems are all based on variants of recurrent neural network architectures. Specifically, we explore multiple sources of information: pretrained word embeddings (GloVe), a dictionary of language concreteness and a transfer learning scenario based on the states of an encoder network from a neural machine translation system. One of the architectures is based on combining all three systems: (1) Neural CRF (Conditional Random Fields), trained directly on the metaphor data set; (2) Neural Machine Translation encoder of a transfer learning scenario; (3) a neural network used to predict final labels, trained directly on the metaphor data set. 
Our results vary between test sets: Neural CRF standalone is the best one on submission data, while combined system scores the highest on a test subset randomly selected from training data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"cuadros-etal-2010-integrating","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/703_Paper.pdf","title":"Integrating a Large Domain Ontology of Species into WordNet","abstract":"With the proliferation of applications sharing information represented in multiple ontologies, the development of automatic methods for robust and accurate ontology matching will be crucial to their success. Connecting and merging already existing semantic networks is perhaps one of the most challenging task related to knowledge engineering. This paper presents a new approach for aligning automatically a very large domain ontology of Species to WordNet in the framework of the KYOTO project. The approach relies on the use of knowledge-based Word Sense Disambiguation algorithm which accurately assigns WordNet synsets to the concepts represented in Species 2000.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been supported by KNOW-2 (TIN2009-14715-C04-01 and TIN2009-14715-C04-04) and KYOTO (ICT-2007-211423). We want to thank the anonymous reviewers for their valuable comments.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"grissom-ii-etal-2014-dont","url":"https:\/\/aclanthology.org\/D14-1140","title":"Don't Until the Final Verb Wait: Reinforcement Learning for Simultaneous Machine Translation","abstract":"We introduce a reinforcement learningbased approach to simultaneous machine translation-producing a translation while receiving input wordsbetween languages with drastically different word orders: from verb-final languages (e.g., German) to verb-medial languages (English). In traditional machine translation, a translator must \"wait\" for source material to appear before translation begins. We remove this bottleneck by predicting the final verb in advance. We use reinforcement learning to learn when to trust predictions about unseen, future portions of the sentence. We also introduce an evaluation metric to measure expeditiousness and quality. We show that our new translation model outperforms batch and monotone translation strategies.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers, as well as Yusuke Miyao, Naho Orita, Doug Oard, and Sudha Rao for their insightful comments. This work was supported by NSF Grant IIS-1320538. Boyd-Graber is also partially supported by NSF Grant CCF-1018625. Daum\u00e9 III and He are also partially supported by NSF Grant IIS-0964681. 
Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"song-etal-2019-leveraging","url":"https:\/\/aclanthology.org\/D19-1020","title":"Leveraging Dependency Forest for Neural Medical Relation Extraction","abstract":"Medical relation extraction discovers relations between entity mentions in text, such as research articles. For this task, dependency syntax has been recognized as a crucial source of features. Yet in the medical domain, 1-best parse trees suffer from relatively low accuracies, diminishing their usefulness. We investigate a method to alleviate this problem by utilizing dependency forests. Forests contain many possible decisions and therefore have higher recall but more noise compared with 1-best outputs. A graph neural network is used to represent the forests, automatically distinguishing the useful syntactic information from parsing noise. Results on two biomedical benchmarks show that our method outperforms the standard tree-based methods, giving the state-of-the-art results in the literature.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"Research supported by NSF award IIS-1813823.","year":2019,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"mukherjee-kubler-2017-similarity","url":"https:\/\/doi.org\/10.26615\/978-954-452-049-6_068","title":"Similarity Based Genre Identification for POS Tagging Experts & Dependency Parsing","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"ansell-etal-2021-mad-g","url":"https:\/\/aclanthology.org\/2021.findings-emnlp.410","title":"MAD-G: Multilingual Adapter Generation for Efficient Cross-Lingual Transfer","abstract":"Adapter modules have emerged as a general parameter-efficient means to specialize a pretrained encoder to new domains. Massively multilingual transformers (MMTs) have particularly benefited from additional training of language-specific adapters. However, this approach is not viable for the vast majority of languages, due to limitations in their corpus size or compute budgets. In this work, we propose MAD-G (Multilingual ADapter Generation), which contextually generates language adapters from language representations based on typological features. In contrast to prior work, our time- and space-efficient MAD-G approach enables (1) sharing of linguistic knowledge across languages and (2) zero-shot inference by generating language adapters for unseen languages. We thoroughly evaluate MAD-G in zero-shot cross-lingual transfer on part-of-speech tagging, dependency parsing, and named entity recognition. While offering (1) improved fine-tuning efficiency (by a factor of around 50 in our experiments), (2) a smaller parameter budget, and (3) increased language coverage, MAD-G remains competitive with more expensive methods for language-specific adapter training across the board. 
Moreover, it offers substantial benefits for low-resource languages, particularly on the NER task in low-resource African languages. Finally, we demonstrate that MAD-G's transfer performance can be further improved via: (i) multi-source training, i.e., by generating and combining adapters of multiple languages with available task-specific training data; and (ii) by further finetuning generated MAD-G adapters for languages with monolingual data.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Alan wishes to thank David and Claudia Harding for their generous support via the Harding Distinguished Postgraduate Scholarship Programme. Jonas is supported by the LOEWE initiative (Hesse, Germany) within the emergenCITY center. Goran is supported by the KI-Innovation grant Multi2ConvAI of the Baden-W\u00fcrttemberg's Ministry of Economics, Labor and Tourism. Anna and Ivan are supported by the ERC Grant LEXICAL (no. 648909) and the ERC PoC Grant Multi-ConvAI (no. 957356).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"moreno-etal-2004-collection","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/325.pdf","title":"Collection of SLR in the Asian-Pacific Area","abstract":"The goal of this project (LILA) is the collection of a large number of spoken databases for training Automatic Speech Recognition Systems for telephone applications in the Asian Pacific area. Specifications follow those of SpeechDat-like databases. Utterances will be recorded directly from calls made either from fixed or cellular telephones and are composed of read text and answers to specific questions. The project is driven by a consortium composed of a large number of industrial companies. Each company is in charge of the production of two databases. The consortium shares the databases produced in the project. The goal of the project should be reached within the year 2005.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"gao-suzuki-2003-unsupervised","url":"https:\/\/aclanthology.org\/P03-1066","title":"Unsupervised Learning of Dependency Structure for Language Modeling","abstract":"This paper presents a dependency language model (DLM) that captures linguistic constraints via a dependency structure, i.e., a set of probabilistic dependencies that express the relations between headwords of each phrase in a sentence by an acyclic, planar, undirected graph. Our contributions are threefold. First, we incorporate the dependency structure into an n-gram language model to capture long distance word dependency. Second, we present an unsupervised learning method that discovers the dependency structure of a sentence using a bootstrapping procedure. Finally, we evaluate the proposed models on a realistic application (Japanese Kana-Kanji conversion). 
Experiments show that the best DLM achieves an 11.3% error rate reduction over the word trigram model.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"bach-etal-2022-promptsource","url":"https:\/\/aclanthology.org\/2022.acl-demo.9","title":"PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts","abstract":"PromptSource is a system for creating, sharing, and using natural language prompts. Prompts are functions that map an example from a dataset to a natural language input and target output. Using prompts to train and query language models is an emerging area in NLP that requires new tools that let users develop and refine these prompts collaboratively. PromptSource addresses the emergent challenges in this new setting with (1) a templating language for defining data-linked prompts, (2) an interface that lets users quickly iterate on prompt development by observing outputs of their prompts on many examples, and (3) a community-driven set of guidelines for contributing new prompts to a common pool. Over 2,000 prompts for roughly 170 datasets are already available in PromptSource.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was conducted under the BigScience project for open research, a year-long initiative targeting the study of large models and datasets. The goal of the project is to research language models in a public environment outside large technology companies. The project has over 950 researchers from over 65 countries and more than 250 institutions. The BigScience project was initiated by Thomas Wolf at Hugging Face, and this collaboration would not have been possible without his effort. This research was the focus of the BigScience Prompt Engineering working group, which focused on the role of prompting in large language model training. Disclosure: Stephen Bach contributed to this work as an advisor to Snorkel AI.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"toh-wang-2014-dlirec","url":"https:\/\/aclanthology.org\/S14-2038","title":"DLIREC: Aspect Term Extraction and Term Polarity Classification System","abstract":"This paper describes our system used in the Aspect Based Sentiment Analysis Task 4 at SemEval-2014. Our system consists of two components to address two of the subtasks respectively: a Conditional Random Field (CRF) based classifier for Aspect Term Extraction (ATE) and a linear classifier for Aspect Term Polarity Classification (ATP). For the ATE subtask, we implement a variety of lexicon, syntactic and semantic features, as well as cluster features induced from unlabeled data. Our system achieves state-of-the-art performances in ATE, ranking 1st (among 28 submissions) and 2nd (among 27 submissions) for the restaurant and laptop domains, respectively. 
","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research work is supported by a research project under Baidu-I2R Research Centre.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"pustejovsky-etal-2019-modeling","url":"https:\/\/aclanthology.org\/W19-3303","title":"Modeling Quantification and Scope in Abstract Meaning Representations","abstract":"In this paper, we propose an extension to Abstract Meaning Representations (AMRs) to encode scope information of quantifiers and negation, in a way that overcomes the semantic gaps of the schema while maintaining its cognitive simplicity. Specifically, we address three phenomena not previously part of the AMR specification: quantification, negation (generally), and modality. The resulting representation, which we call \"Uniform Meaning Representation\" (UMR), adopts the predicative core of AMR and embeds it under a \"scope\" graph when appropriate. UMR representations differ from other treatments of quantification and modal scope phenomena in two ways: (a) they are more transparent; and (b) they specify default scope when possible.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers for their helpful comments. This work is supported by the IIS Division of National Science Foundation via Award No. 1763926 entitled \"Building a Uniform Meaning Representation for Natural Language Processing\". All views expressed in this paper are those of the authors and do not necessarily represent the view of the National Science Foundation.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"santus-etal-2014-chasing","url":"https:\/\/aclanthology.org\/E14-4008","title":"Chasing Hypernyms in Vector Spaces with Entropy","abstract":"In this paper, we introduce SLQS, a new entropy-based measure for the unsupervised identification of hypernymy and its directionality in Distributional Semantic Models (DSMs). SLQS is assessed through two tasks: (i.) identifying the hypernym in hyponym-hypernym pairs, and (ii.) discriminating hypernymy among various semantic relations. In both tasks, SLQS outperforms other state-of-the-art measures.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"schlangen-2021-targeting","url":"https:\/\/aclanthology.org\/2021.acl-short.85","title":"Targeting the Benchmark: On Methodology in Current Natural Language Processing Research","abstract":"It has become a common pattern in our field: One group introduces a language task, exemplified by a dataset, which they argue is challenging enough to serve as a benchmark. They also provide a baseline model for it, which then soon is improved upon by other groups. Often, research efforts then move on, and the pattern repeats itself. What is typically left implicit is the argumentation for why this constitutes progress, and progress towards what. 
In this paper, I try to step back for a moment from this pattern and work out possible argumentations and their parts.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"frermann-etal-2014-hierarchical","url":"https:\/\/aclanthology.org\/E14-1006","title":"A Hierarchical Bayesian Model for Unsupervised Induction of Script Knowledge","abstract":"Scripts representing common sense knowledge about stereotyped sequences of events have been shown to be a valuable resource for NLP applications. We present a hierarchical Bayesian model for unsupervised learning of script knowledge from crowdsourced descriptions of human activities. Events and constraints on event ordering are induced jointly in one unified framework. We use a statistical model over permutations which captures event ordering constraints in a more flexible way than previous approaches. In order to alleviate the sparsity problem caused by using relatively small datasets, we incorporate in our hierarchical model an informed prior on word distributions. The resulting model substantially outperforms a state-of-the-art method on the event ordering task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Michaela Regneri for substantial support with the script data, and Mirella Lapata for helpful comments.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"clinchant-perronnin-2013-aggregating","url":"https:\/\/aclanthology.org\/W13-3212","title":"Aggregating Continuous Word Embeddings for Information Retrieval","abstract":"While words in documents are generally treated as discrete entities, they can be embedded in a Euclidean space which reflects an a priori notion of similarity between them. In such a case, a text document can be viewed as a bag-of-embedded-words (BoEW): a set of real-valued vectors. We propose a novel document representation based on such continuous word embeddings. It consists in non-linearly mapping the word embeddings in a higher-dimensional space and in aggregating them into a document-level representation. We report retrieval and clustering experiments in the case where the word embeddings are computed from standard topic models showing significant improvements with respect to the original topic models.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"fang-etal-2005-web","url":"https:\/\/aclanthology.org\/I05-1087","title":"Web-Based Terminology Translation Mining","abstract":"Mining terminology translation from a large amount of Web data can be applied in many fields such as reading\/writing assistant, machine translation and cross-language information retrieval. How to find more comprehensive results from the Web and obtain the boundary of candidate translations, and how to remove irrelevant noise and rank the remaining candidates are the challenging issues. 
In this paper, after reviewing and analyzing all possible methods of acquiring translations, we propose a feasible statistics-based method to mine terminology translation from the Web. In the proposed method, on the basis of an analysis of different forms of term translation distributions, character-based string frequency estimation is presented to construct term translation candidates and explore more translations and their boundaries. Sort-based subset deletion and mutual information methods are then proposed to deal with the subset redundancy and prefix\/suffix redundancy formed in the process of estimation. Extensive experiments on two test sets of 401 and 3511 English terms validate that our system achieves better performance.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"gildea-etal-2018-acl","url":"https:\/\/aclanthology.org\/W18-2504","title":"The ACL Anthology: Current State and Future Directions","abstract":"The Association for Computational Linguistics' Anthology is the open-source archive, and the main source for the scientific literature of computational linguistics and natural language processing. The ACL Anthology is currently maintained exclusively by community volunteers and has to be available and up-to-date at all times. We first discuss the current, open-source approach used to achieve this, and then discuss how the planned use of Docker images will improve the Anthology's long-term stability. This change will make it easier for researchers to utilize Anthology data for experimentation. We believe the ACL community can directly benefit from the extension-friendly architecture of the Anthology. We end by issuing an open challenge of reviewer matching, which we encourage the community to rally towards.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"shavrina-etal-2020-humans","url":"https:\/\/aclanthology.org\/2020.lrec-1.277","title":"Humans Keep It One Hundred: an Overview of AI Journey","abstract":"Artificial General Intelligence (AGI) is showing growing performance in numerous applications: beating human performance in Chess and Go, using knowledge bases and text sources to answer questions, and even passing school student examinations. In this paper, we describe the results of AI Journey, a competition of AI systems aimed at improving AI performance on linguistic knowledge evaluation, reasoning and text generation. Competing systems have passed the Unified State Exam (USE, in Russian), including versatile grammar tasks (test and open questions) and an essay: a combined solution consisting of the best performing models has achieved a high score of 69%, with 68% being an average human result. During the competition, a baseline for the task and essay parts was proposed, and 98 systems were submitted, showing different approaches to task solving and reasoning. 
All the data and solutions can be found on GitHub.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"amidei-etal-2018-rethinking","url":"https:\/\/aclanthology.org\/C18-1281","title":"Rethinking the Agreement in Human Evaluation Tasks","abstract":"Human evaluations are broadly thought to be more valuable the higher the inter-annotator agreement. In this paper we examine this idea. We will describe our experiments and analysis within the area of Automatic Question Generation. Our experiments show how annotators diverge in language annotation tasks due to a range of ineliminable factors. For this reason, we believe that annotation schemes for natural language generation tasks that are aimed at evaluating language quality need to be treated with great care. In particular, an unchecked focus on reduction of disagreement among annotators runs the danger of creating generation goals that reward output that is more distant from, rather than closer to, natural human-like language. We conclude the paper by suggesting a new approach to the use of the agreement metrics in natural language generation evaluation tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We warmly thank Erika Renedo Illarregi, Luisa Ruge, German Ruiz Marcos, Suraj Pandey, Simon Cutajar, Neil Smith and Robin Laney for taking part in the experiments and sharing with us opinions and feedback. We would also like to thank Karen Mazidi for giving us login access to her online Question Generator. We finally thank the anonymous reviewers for their helpful suggestions.","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"guha-etal-2015-removing","url":"https:\/\/aclanthology.org\/N15-1117","title":"Removing the Training Wheels: A Coreference Dataset that Entertains Humans and Challenges Computers","abstract":"Coreference is a core NLP problem. However, newswire data, the primary source of existing coreference data, lack the richness necessary to truly solve coreference. We present a new domain with denser references, quiz bowl questions, that is challenging and enjoyable to humans, and we use the quiz bowl community to develop a new coreference dataset, together with an annotation framework that can tag any text data with coreferences and named entities. We also successfully integrate active learning into this annotation pipeline to collect documents maximally useful to coreference models. State-of-the-art coreference systems underperform a simple classifier on our new dataset, motivating non-newswire data for future coreference research.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their insightful comments. We also thank Dr. Hal Daum\u00e9 III and the members of the \"feetthinking\" research group for their advice and assistance. We also thank Dr. Yuening Hu and Mossaab Bagdouri for their help in reviewing the draft of this paper. This work was supported by NSF Grant IIS-1320538. Boyd-Graber is also supported by NSF Grants CCF-1018625 and NCSE-1422492. 
Any opinions, findings, results, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"cardon-grabar-2020-reducing","url":"https:\/\/aclanthology.org\/2020.bucc-1.7","title":"Reducing the Search Space for Parallel Sentences in Comparable Corpora","abstract":"This paper describes and evaluates three methods for reducing the search space for parallel sentences in monolingual comparable corpora. Basically, when searching for parallel sentences between two comparable documents, all the possible sentence pairs between the documents have to be considered, which introduces a great degree of imbalance between parallel pairs and non-parallel pairs. This is a problem because, even with a highly performing algorithm, a lot of noise will be present in the extracted results, thus introducing a need for an extensive and costly manual check phase. We propose to study how we can drastically reduce the number of sentence pairs that have to be fed to a classifier so that the results can be manually handled. We work on a manually annotated subset obtained from a French comparable corpus.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the reviewers for their comments. This work was funded by the French National Agency for Research (ANR) as part of the CLEAR project (Communication, Literacy, Education, Accessibility, Readability), ANR-17-CE19-0016-01.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"armengol-estape-etal-2021-multilingual","url":"https:\/\/aclanthology.org\/2021.findings-acl.437","title":"Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan","abstract":"Multilingual language models have been a crucial breakthrough as they considerably reduce the need of data for under-resourced languages. Nevertheless, the superiority of language-specific models has already been proven for languages having access to large amounts of data. In this work, we focus on Catalan with the aim to explore to what extent a medium-sized monolingual language model is competitive with state-of-the-art large multilingual models. For this, we: (1) build a clean, high-quality textual Catalan corpus (CaText), the largest to date (but only a fraction of the usual size of the previous work in monolingual language models), (2) train a Transformer-based language model for Catalan (BERTa), and (3) devise a thorough evaluation in a diversity of settings, comprising a complete array of downstream tasks, namely, Part of Speech Tagging, Named Entity Recognition and Classification, Text Classification, Question Answering, and Semantic Textual Similarity, with most of the corresponding datasets being created ex novo. The result is a new benchmark, the Catalan Language Understanding Benchmark (CLUB), which we publish as an open resource, together with the clean textual corpus, the language model, and the cleaning pipeline. 
Using state-of-the-art multilingual models and a monolingual model trained only on Wikipedia as baselines, we consistently observe the superiority of our model across tasks and settings.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially funded by the Generalitat de Catalunya through the project PDAD14\/20\/00001, the State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan TL, the MT4All CEF project, and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020). We thank all the reviewers for their valuable comments.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"hu-etal-2009-contrasting","url":"https:\/\/aclanthology.org\/W09-3953","title":"Contrasting the Interaction Structure of an Email and a Telephone Corpus: A Machine Learning Approach to Annotation of Dialogue Function Units","abstract":"We present a dialogue annotation scheme for both spoken and written interaction, and use it in a telephone transaction corpus and an email corpus. We train classifiers, comparing regular SVM and structured SVM against a heuristic baseline. We provide a novel application of structured SVM to predicting relations between instance pairs.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"xing-etal-2020-improving","url":"https:\/\/aclanthology.org\/2020.aacl-main.63","title":"Improving Context Modeling in Neural Topic Segmentation","abstract":"Topic segmentation is critical in key NLP tasks and recent works favor highly effective neural supervised approaches. However, current neural solutions are arguably limited in how they model context. In this paper, we enhance a segmenter based on a hierarchical attention BiLSTM network to better model context, by adding a coherence-related auxiliary task and restricted self-attention. Our optimized segmenter outperforms SOTA approaches when trained and tested on three datasets. We also show the robustness of our proposed model in a domain transfer setting by training a model on a large-scale dataset and testing it on four challenging real-world benchmarks. Furthermore, we apply our proposed strategy to two other languages (German and Chinese), and show its effectiveness in multilingual scenarios.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers and the UBC-NLP group for their insightful comments.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"stenchikova-etal-2007-ravencalendar","url":"https:\/\/aclanthology.org\/N07-4008","title":"RavenCalendar: A Multimodal Dialog System for Managing a Personal Calendar","abstract":"Dialog applications for managing calendars have been developed for every generation of dialog systems research (Heidorn, 1978; Yankelovich, 1994; Constantinides and others, 1998; Horvitz and Paek, 2000; Vo and Wood, 1996; Huang and others, 2001). Today, Web-based calendar applications are widely used. 
A spoken dialog interface to a Web-based calendar application permits convenient use of the system on a handheld device or over the telephone.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"hiraoka-etal-2019-stochastic","url":"https:\/\/aclanthology.org\/P19-1158","title":"Stochastic Tokenization with a Language Model for Neural Text Classification","abstract":"For unsegmented languages such as Japanese and Chinese, tokenization of a sentence has a significant impact on the performance of text classification. Sentences are usually segmented with words or subwords by a morphological analyzer or byte pair encoding and then encoded with word (or subword) representations for neural networks. However, segmentation is potentially ambiguous, and it is unclear whether the segmented tokens achieve the best performance for the target task. In this paper, we propose a method to simultaneously learn tokenization and text classification to address these problems. Our model incorporates a language model for unsupervised tokenization into a text classifier and then trains both models simultaneously. To make the model robust against infrequent tokens, we sampled segmentation for each sentence stochastically during training, which resulted in improved performance of text classification. We conducted experiments on sentiment analysis as a text classification task and show that our method achieves better performance than previous methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to the members of the Computational Linguistics Laboratory, NAIST and the anonymous reviewers for their insightful comments.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"wolf-sonkin-etal-2019-latin","url":"https:\/\/aclanthology.org\/W19-3114","title":"Latin script keyboards for South Asian languages with finite-state normalization","abstract":"The use of the Latin script for text entry of South Asian languages is common, even though there is no standard orthography for these languages in the script. We explore several compact finite-state architectures that permit variable spellings of words during mobile text entry. We find that approaches making use of transliteration transducers provide large accuracy improvements over baselines, but that simpler approaches involving a compact representation of many attested alternatives yield much of the accuracy gain. 
This is particularly important when operating under constraints on model size (e.g., on inexpensive mobile devices with limited storage and memory for keyboard models), and on speed of inference, since people typing on mobile keyboards expect no perceptual delay in keyboard responsiveness.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"mcdonald-1998-target","url":"https:\/\/aclanthology.org\/C98-2243","title":"Target Word Selection as Proximity in Semantic Space","abstract":"Lexical selection is a significant problem for wide-coverage machine translation: depending on the context, a given source language word can often be translated into different target language words. In this paper I propose a method for target word selection that assumes the appropriate translation is more similar to the translated context than are the alternatives. Similarity of a word to a context is estimated using a proximity measure in corpus-derived \"semantic space\". The method is evaluated using an English-Spanish parallel corpus of colloquial dialogue.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by awards from NSERC Canada and the ORS scheme, and in part by ESRC grant #R000237419. Thanks to Chris Brew and Mirella Lapata for valuable comments.","year":1998,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"amigo-etal-2005-qarla","url":"https:\/\/aclanthology.org\/P05-1035","title":"QARLA: A Framework for the Evaluation of Text Summarization Systems","abstract":"This paper presents a probabilistic framework, QARLA, for the evaluation of text summarisation systems. The input of the framework is a set of manual (reference) summaries, a set of baseline (automatic) summaries and a set of similarity metrics between summaries. It provides i) a measure to evaluate the quality of any set of similarity metrics, ii) a measure to evaluate the quality of a summary using an optimal set of similarity metrics, and iii) a measure to evaluate whether the set of baseline summaries is reliable or may produce biased results. Compared to previous approaches, our framework is able to combine different metrics and evaluate the quality of a set of metrics without any a-priori weighting of their relative importance. We provide quantitative evidence about the effectiveness of the approach to improve the automatic evaluation of text summarisation systems by combining several similarity metrics.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are indebted to Ed Hovy, Donna Harman, Paul Over, Hoa Dang and Chin-Yew Lin for their inspiring and generous feedback at different stages in the development of QARLA. We are also indebted to NIST for hosting Enrique Amig\u00f3 as a visitor and for providing the DUC test beds. 
This work has been partially supported by the Spanish government, project R2D2 (TIC-2003-7180).","year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"yamabana-etal-2000-lexicalized","url":"https:\/\/aclanthology.org\/C00-2134","title":"Lexicalized Tree Automata-based Grammars for Translating Conversational Texts","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"saetre-etal-2008-connecting","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/442_paper.pdf","title":"Connecting Text Mining and Pathways using the PathText Resource","abstract":"Many systems have been developed in the past few years to assist researchers in the discovery of knowledge published as English text, for example in the PubMed database. At the same time, higher level collective knowledge is often published using a graphical notation representing all the entities in a pathway and their interactions. We believe that these pathway visualizations could serve as an effective user interface for knowledge discovery if they can be linked to the text in publications. Since the graphical elements in a Pathway are of a very different nature than their corresponding descriptions in English text, we developed a prototype system called PathText. The goal of PathText is to serve as a bridge between these two different representations. In this paper, we first describe the overall architecture and the interfaces of the PathText system, and then provide some details about the core Text Mining components.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially supported by \"Grant-in-Aid for Specially Promoted Research\" and the \"Genome Network Project\", both from the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan. This work was also sponsored by Okinawa Institute of Science and Technology (OIST), Systems Biology Institute (SBI) and Sony Computer Science Laboratories, Inc.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"van-den-bosch-etal-2006-transferring","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2006\/pdf\/167_pdf.pdf","title":"Transferring PoS-tagging and lemmatization tools from spoken to written Dutch corpus development","abstract":"We describe a case study in the reuse and transfer of tools in language resource development, from a corpus of spoken Dutch to a corpus of written Dutch. 
Once tools for a particular language have been developed, it is logical, though not trivial, to reuse them for types or registers of the language other than those the tools were originally designed for. This paper reviews the decisions and adaptations necessary to make this particular transfer from spoken to written language, focusing on a part-of-speech tagger and a lemmatizer. While the lemmatizer can be transferred fairly straightforwardly, the tagger needs to be adapted considerably. We show how it can be adapted without starting from scratch. We describe how the part-of-speech tagset was adapted and how the tagger was retrained to deal with written-text phenomena it had not been trained on earlier.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research is funded by STEVIN, a Dutch Language Union (Taalunie) programme), as part of the D-Coi (Dutch","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"matteson-etal-2018-rich","url":"https:\/\/aclanthology.org\/C18-1210","title":"Rich Character-Level Information for Korean Morphological Analysis and Part-of-Speech Tagging","abstract":"Due to the fact that Korean is a highly agglutinative, character-rich language, previous work on Korean morphological analysis typically employs the use of sub-character features known as graphemes or otherwise utilizes comprehensive prior linguistic knowledge (i.e., a dictionary of known morphological transformation forms, or actions). These models have been created with the assumption that character-level, dictionary-less morphological analysis was intractable due to the number of actions required. We present, in this study, a multi-stage action-based model that can perform morphological transformation and part-of-speech tagging using arbitrary units of input and apply it to the case of character-level Korean morphological analysis. Among models that do not employ prior linguistic knowledge, we achieve state-of-the-art word and sentence-level tagging accuracy with the Sejong Korean corpus using our proposed data-driven Bi-LSTM model.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by the MSIT (Ministry of Science and ICT), South Korea, under the ITRC (Information Technology Research Center) support program (\"Research and Development of Human-Inspired Multiple Intelligence\") supervised by the IITP (Institute for Information & Communications Technology Promotion). Additionally, this work was supported by the National Research Foundation of Korea (NRF) grant funded by the South Korean government (MSIP) (No. NRF-2016R1A2B2015912).","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"rothe-etal-2020-leveraging","url":"https:\/\/aclanthology.org\/2020.tacl-1.18","title":"Leveraging Pre-trained Checkpoints for Sequence Generation Tasks","abstract":"Unsupervised pre-training of large neural models has recently revolutionized Natural Language Processing. By warm-starting from the publicly released checkpoints, NLP practitioners have pushed the state-of-the-art on multiple benchmarks while saving significant amounts of compute time. So far the focus has been mainly on the Natural Language Understanding tasks. 
In this paper, we demonstrate the efficacy of pre-trained checkpoints for Sequence Generation. We developed a Transformer-based sequence-to-sequence model that is compatible with publicly available pre-trained BERT, GPT-2, and RoBERTa checkpoints and conducted an extensive empirical study on the utility of initializing our model, both encoder and decoder, with these checkpoints. Our models result in new state-of-the-art results on Machine","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the reviewers and the action editor for their feedback. We would like to thank Ryan McDonald, Joshua Maynez, and Bernd Bohnet for useful discussions.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"sundheim-1996-overview","url":"https:\/\/aclanthology.org\/X96-1048","title":"Overview of Results of the MUC-6 Evaluation","abstract":"The latest in a series of natural language processing system evaluations was concluded in October 1995 and was the topic of the Sixth Message Understanding Conference (MUC-6) in November. Participants were invited to enter their systems in as many as four different task-oriented evaluations. The Named Entity and Coreference tasks entailed Standard Generalized Markup Language (SGML) annotation of texts and were being conducted for the first time. The other two tasks, Template Element and Scenario Template, were information extraction tasks that followed on from the MUC evaluations conducted in previous years. The evolution and design of the MUC-6 evaluation are discussed in the paper by Grishman and Sundheim in this volume.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The definition and implementation of the evaluations reported on at the Message Understanding Conference was once again a \"community\" effort, requiring active involvement on the part of the evaluation participants as well as","year":1996,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"mendes-etal-2012-evaluating","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/545_Paper.pdf","title":"Evaluating the Impact of Phrase Recognition on Concept Tagging","abstract":"We have developed DBpedia Spotlight, a flexible concept tagging system that is able to tag (i.e., annotate) entities, topics and other terms in natural language text. The system starts by recognizing phrases to annotate in the input text, and subsequently disambiguates them to a reference knowledge base extracted from Wikipedia. In this paper we evaluate the impact of the phrase recognition step on the ability of the system to correctly reproduce the annotations of a gold standard in an unsupervised setting. We argue that a combination of techniques is needed, and we evaluate a number of alternatives according to an existing evaluation set.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Milo\u0161 Stanojevi\u0107 for the discussions that led to the idea of applying Bloom filters in the NP L* implementation. This work was partially funded by the European Commission through the FP7 grant LOD2 - Creating Knowledge out of Interlinked Data (Grant No. 
257943).","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"papadopoulou-2013-gf","url":"https:\/\/aclanthology.org\/R13-2019","title":"GF Modern Greek Resource Grammar","abstract":"The paper describes the Modern Greek (MG) Grammar, implemented in Grammatical Framework (GF) as part of the Grammatical Framework Resource Grammar Library (RGL). GF is a special-purpose language for multilingual grammar applications. The RGL is a reusable library for dealing with the morphology and syntax of a growing number of natural languages. It is based on the use of an abstract syntax, which is common for all languages, and different concrete syntaxes implemented in GF. Both GF itself and the RGL are open-source. RGL currently covers more than 30 languages. MG is the 35th language that is available in the RGL. For the purpose of the implementation, a morphology-driven approach was used, meaning a bottom-up method, starting from the formation of words before moving to larger units (sentences). We discuss briefly the main characteristics and grammatical features of MG, and present some of the major difficulties we encountered during the process of implementation and how these are handled in the MG grammar.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"vylomova-etal-2016-take","url":"https:\/\/aclanthology.org\/P16-1158","title":"Take and Took, Gaggle and Goose, Book and Read: Evaluating the Utility of Vector Differences for Lexical Relation Learning","abstract":"Recent work has shown that simple vector subtraction over word embeddings is surprisingly effective at capturing different lexical relations, despite lacking explicit supervision. Prior work has evaluated this intriguing result using a word analogy prediction formulation and hand-selected relations, but the generality of the finding over a broader range of lexical relation types and different learning settings has not been evaluated. In this paper, we carry out such an evaluation in two learning settings: (1) spectral clustering to induce word relations, and (2) supervised learning to classify vector differences into relation types. We find that word embeddings capture a surprising amount of information, and that, under suitable supervised training, vector subtraction generalises well to a broad range of relations, including over unseen lexical items.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"LR was supported by EPSRC grant EP\/I037512\/1 and ERC Starting Grant DisCoTex (306920). TC and TB were supported by the Australian Research Council.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"nagasawa-etal-2021-validity","url":"https:\/\/aclanthology.org\/2021.maiworkshop-1.6","title":"Validity-Based Sampling and Smoothing Methods for Multiple Reference Image Captioning","abstract":"In image captioning, multiple captions are often provided as ground truths, since a valid caption is not always uniquely determined. 
Conventional methods randomly select a single caption and treat it as correct, but there have been few effective training methods that utilize multiple given captions. In this paper, we propose two training techniques for making effective use of multiple reference captions: 1) validity-based caption sampling (VBCS), which prioritizes the use of captions that are estimated to be highly valid during training, and 2) weighted caption smoothing (WCS), which applies smoothing only to the relevant words in the reference caption to reflect multiple reference captions simultaneously. Experiments show that our proposed methods improve CIDEr by 2.6 points and BLEU4 by 0.9 points over the baseline on the MSCOCO dataset.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"roberson-2019-automatic","url":"https:\/\/aclanthology.org\/W19-3623","title":"Automatic Product Categorization for Official Statistics","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"hafner-1985-semantics","url":"https:\/\/aclanthology.org\/P85-1001","title":"Semantics of Temporal Queries and Temporal Data","abstract":"This paper analyzes the requirements for adding a temporal reasoning component to a natural language database query system, and proposes a computational model that satisfies those requirements. A preliminary implementation in Prolog is used to generate examples of the model's capabilities.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1985,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"ljubesic-etal-2015-predicting","url":"https:\/\/aclanthology.org\/R15-1049","title":"Predicting the Level of Text Standardness in User-generated Content","abstract":"Non-standard language as it appears in user-generated content has recently attracted much attention. This paper proposes that non-standardness comes in two basic varieties, technical and linguistic, and develops a machine-learning method to discriminate between standard and nonstandard texts in these two dimensions. We describe the manual annotation of a dataset of Slovene user-generated content and the features used to build our regression models. We evaluate and discuss the results, where the mean absolute error of the best performing method on a three-point scale is 0.38 for technical and 0.42 for linguistic standardness prediction. Even when using no language-dependent information sources, our predictor still outperforms an OOV-ratio baseline by a wide margin. In addition, we show that very little manually annotated training data is required to perform good prediction. 
Predicting standardness can help decide when to attempt to normalise the data to achieve better annotation results with standard tools, and provide linguists who are interested in nonstandard language with a simple way of selecting only such texts for their research.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work described in this paper was funded by the Slovenian Research Agency, project J6-6842, and by the European Fund for Regional Development 2007-2013.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"reiter-2019-natural","url":"https:\/\/aclanthology.org\/W19-8402","title":"Natural Language Generation Challenges for Explainable AI","abstract":"Good quality explanations of artificial intelligence (XAI) reasoning must be written (and evaluated) for an explanatory purpose, targeted towards their readers, have a good narrative and causal structure, and highlight where uncertainty and data quality affect the AI output. I discuss these challenges from a Natural Language Generation (NLG) perspective, and highlight four specific \"NLG for XAI\" research challenges.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This paper started off as a (much shorter) blog https:\/\/ehudreiter.com\/2019\/07\/19\/nlg-and-explainable-ai\/. My thanks to the people who commented on this blog, as well as the anonymous reviewers, the members of the Aberdeen CLAN research group, the members of the Explaining the Outcomes of Complex Models project at Monash, and the members of the NL4XAI research project, all of whom gave me excellent feedback and suggestions. My thanks also to Prof Ren\u00e9 van der Wal for his help in the experiment mentioned in section 3.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"fernandez-gonzalez-gomez-rodriguez-2018-dynamic-oracle","url":"https:\/\/aclanthology.org\/N18-2062","title":"A Dynamic Oracle for Linear-Time 2-Planar Dependency Parsing","abstract":"We propose an efficient dynamic oracle for training the 2-Planar transition-based parser, a linear-time parser with over 99% coverage on non-projective syntactic corpora. 
This novel approach outperforms the static training strategy in the vast majority of languages tested and scores better on most datasets than the arc-hybrid parser enhanced with the Swap transition, which can handle unrestricted nonprojectivity.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has received funding from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150), from the TELEPARES-UDC project (FFI2014-51978-C2-2-R) and the ANSWER-ASAP project (TIN2017-85160-C2-1-R) from MINECO, and from Xunta de Galicia (ED431B 2017\/01).","year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"forsbom-2009-extending","url":"https:\/\/aclanthology.org\/W09-4607","title":"Extending the View: Explorations in Bootstrapping a Swedish PoS Tagger","abstract":"State-of-the-art statistical part-of-speech taggers mainly use information on tag bi- or trigrams, depending on the size of the training corpus. Some also use lexical emission probabilities above unigrams with beneficial results. In both cases, a wider context usually gives better accuracy for a large training corpus, which in turn gives better accuracy than a smaller one. Large corpora with validated tags, however, are scarce, so a bootstrap technique can be used. As the corpus grows, it is probable that a widened context would improve results even further. In this paper, we looked at the contribution to accuracy of such an extended view for both tag transitions and lexical emissions, applied to both a validated Swedish source corpus and a raw bootstrap corpus. We found that the extended view was more important for tag transitions, in particular if applied to the bootstrap corpus. For lexical emission, it was also more important if applied to the bootstrap corpus than to the source corpus, although it was beneficial for both. The overall best tagger had an accuracy of 98.05%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Anna S\u00e5gvall Hein and the anonymous reviewers for valuable comments.","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"portisch-etal-2020-kgvec2go","url":"https:\/\/aclanthology.org\/2020.lrec-1.692","title":"KGvec2go -- Knowledge Graph Embeddings as a Service","abstract":"In this paper, we present KGvec2go, a Web API for accessing and consuming graph embeddings in a lightweight fashion in downstream applications. Currently, we serve pre-trained embeddings for four knowledge graphs. We introduce the service and its usage, and we show further that the trained models have semantic value by evaluating them on multiple semantic benchmarks. 
The evaluation also reveals that the combination of multiple models can lead to a better outcome than the best individual model.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"vyas-pantel-2009-semi","url":"https:\/\/aclanthology.org\/N09-1033","title":"Semi-Automatic Entity Set Refinement","abstract":"State of the art set expansion algorithms produce varying quality expansions for different entity types. Even for the highest quality expansions, errors still occur and manual refinements are necessary for most practical uses. In this paper, we propose algorithms to aide this refinement process, greatly reducing the amount of manual labor required. The methods rely on the fact that most expansion errors are systematic, often stemming from the fact that some seed elements are ambiguous. Using our methods, empirical evidence shows that average R-precision over random entity sets improves by 26% to 51% when given from 5 to 10 manually tagged errors. Both proposed refinement models have linear time complexity in set size allowing for practical online use in set expansion systems.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"bhat-etal-2020-towards","url":"https:\/\/aclanthology.org\/2020.emnlp-main.675","title":"Towards Modeling Revision Requirements in wikiHow Instructions","abstract":"wikiHow is a resource of how-to guides that describe the steps necessary to accomplish a goal. Guides in this resource are regularly edited by a community of users, who try to improve instructions in terms of style, clarity and correctness. In this work, we test whether the need for such edits can be predicted automatically. For this task, we extend an existing resource of textual edits with a complementary set of approx. 4 million sentences that remain unedited over time and report on the outcome of two revision modeling experiments.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The research presented in this paper was funded by the DFG Emmy Noether program (RO 4848\/2-1).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"falis-etal-2019-ontological","url":"https:\/\/aclanthology.org\/D19-6220","title":"Ontological attention ensembles for capturing semantic concepts in ICD code prediction from clinical text","abstract":"We present a semantically interpretable system for automated ICD coding of clinical text documents. Our contribution is an ontological attention mechanism which matches the structure of the ICD ontology, in which shared attention vectors are learned at each level of the hierarchy, and combined into label-dependent ensembles. Analysis of the attention heads shows that shared concepts are learned by the lowest common denominator node. This allows child nodes to focus on the differentiating concepts, leading to efficient learning and memory usage. 
Visualisation of the multilevel attention on the original text allows explanation of the code predictions according to the semantics of the ICD ontology. On the MIMIC-III dataset we achieve a 2.7% absolute (11% relative) improvement from 0.218 to 0.245 macro-F1 score compared to the previous state of the art across 3,912 codes. Finally, we analyse the labelling inconsistencies arising from different coding practices which limit performance on this task.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"shaikh-etal-2008-linguistic","url":"https:\/\/aclanthology.org\/I08-2128","title":"Linguistic Interpretation of Emotions for Affect Sensing from Text","abstract":"Several approaches have already been employed to \"sense\" affective information from text, but none of those ever considered the cognitive and appraisal structure of individual emotions. Hence this paper aims at interpreting the cognitive theory of emotions known as the OCC emotion model, from a linguistic standpoint. The paper provides rules for the OCC emotion types for the task of sensing affective information from text. Since the OCC emotions are associated with several cognitive variables, we explain how the values could be assigned to those by analyzing and processing natural language components. Empirical results indicate that our system outperforms another state-of-the-art system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"elgohary-carpuat-2016-learning","url":"https:\/\/aclanthology.org\/P16-2059","title":"Learning Monolingual Compositional Representations via Bilingual Supervision","abstract":"Bilingual models that capture the semantics of sentences are typically only evaluated on cross-lingual transfer tasks such as cross-lingual document categorization or machine translation. In this work, we evaluate the quality of the monolingual representations learned with a variant of the bilingual compositional model of Hermann and Blunsom (2014), when viewing translations in a second language as a semantic annotation as the original language text. We show that compositional objectives based on phrase translation pairs outperform compositional objectives based on bilingual sentences and on monolingual paraphrases.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"fukui-etal-2017-spectral","url":"https:\/\/aclanthology.org\/W17-2405","title":"Spectral Graph-Based Method of Multimodal Word Embedding","abstract":"In this paper, we propose a novel method for multimodal word embedding, which exploit a generalized framework of multiview spectral graph embedding to take into account visual appearances or scenes denoted by words in a corpus. 
We evaluated our method through word similarity tasks and a concept-to-image search task, having found that it provides word representations that reflect visual information, while somewhat trading-off the performance on the word similarity tasks. Moreover, we demonstrate that our method captures multimodal linguistic regularities, which enable recovering relational similarities between words and images by vector arithmetic.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"kaplan-etal-2002-adapting","url":"https:\/\/aclanthology.org\/W02-1506","title":"Adapting Existing Grammars: The XLE Experience","abstract":"We report on the XLE parser and grammar development platform (Maxwell and Kaplan, 1993) and describe how a basic Lexical Functional Grammar for English has been adapted to two different corpora (newspaper text and copier repair tips).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"saetre-etal-2009-protein","url":"https:\/\/aclanthology.org\/W09-1414","title":"From Protein-Protein Interaction to Molecular Event Extraction","abstract":"This document describes the methods and results for our participation in the BioNLP'09 Shared Task #1 on Event Extraction. It also contains some error analysis and a brief discussion of the results. Previous shared tasks in the BioNLP community have focused on extracting gene and protein names, and on finding (direct) protein-protein interactions (PPI). This year's task was slightly different, since the protein names were already manually annotated in the text. The new challenge was to extract biological events involving these given gene and gene products. We modified a publicly available system (AkanePPI) to apply it to this new, but similar, protein interaction task. AkanePPI has previously achieved state-of-the-art performance on all existing public PPI corpora, and only small changes were needed to achieve competitive results on this event extraction task. Our official result was an F-score of 36.9%, which was ranked as number six among submissions from 24 different groups. We later balanced the recall\/precision by including more predictions than just the most confident one in ambiguous cases, and this raised the F-score on the test-set to 42.6%. The new Akane program can be used freely for academic purposes.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"\"Grant-in-Aid for Specially Promoted Research\" and \"Genome Network Project\", MEXT, Japan.","year":2009,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"hromada-2013-random","url":"https:\/\/aclanthology.org\/R13-2012","title":"Random Projection and Geometrization of String Distance Metrics","abstract":"Edit distance is not the only approach how distance between two character sequences can be calculated. Strings can be also compared in somewhat subtler geometric ways. 
A procedure inspired by Random Indexing can attribute an D-dimensional geometric coordinate to any character N-gram present in the corpus and can subsequently represent the word as a sum of N-gram fragments which the string contains. Thus, any word can be described as a point in a dense N-dimensional space and the calculation of their distance can be realized by applying traditional Euclidean measures. Strong correlation exists, within the Keats Hyperion corpus, between such cosine measure and Levenshtein distance. Overlaps between the centroid of Levenshtein distance matrix space and centroids of vectors spaces generated by Random Projection were also observed. Contrary to standard non-random \"sparse\" method of measuring cosine distances between two strings, the method based on Random Projection tends to naturally promote not the shortest but rather longer strings. The geometric approach yields finer output range than Levenshtein distance and the retrieval of the nearest neighbor of text's centroid could have, due to limited dimensionality of Randomly Projected space, smaller complexity than other vector methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The author would like to thank Adil El-Ghali for introduction into Random Indexing as well as his comments concerning the present paper; to prof. Charles Tijus and doc. Ivan Sekaj for their support and to Aliancia Fair-Play for permission to execute some code on their servers.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"vijayaraghavan-etal-2020-dapper","url":"https:\/\/aclanthology.org\/2020.aacl-main.65","title":"DAPPER: Learning Domain-Adapted Persona Representation Using Pretrained BERT and External Memory","abstract":"Research in building intelligent agents have emphasized the need for understanding characteristic behavior of people. In order to reflect human-like behavior, agents require the capability to comprehend the context, infer individualized persona patterns and incrementally learn from experience. In this paper, we present a model called DAPPER that can learn to embed persona from natural language and alleviate task or domain-specific data sparsity issues related to personas. To this end, we implement a text encoding strategy that leverages a pretrained language model and an external memory to produce domain-adapted persona representations. Further, we evaluate the transferability of these embeddings by simulating low-resource scenarios. Our comparative study demonstrates the capability of our method over other approaches towards learning rich transferable persona embeddings. 
Empirical evidence suggests that the learnt persona embeddings can be effective in downstream tasks like hate speech detection.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"setiawan-etal-2009-topological","url":"https:\/\/aclanthology.org\/P09-1037","title":"Topological Ordering of Function Words in Hierarchical Phrase-based Translation","abstract":"Hierarchical phrase-based models are attractive because they provide a consistent framework within which to characterize both local and long-distance reorderings, but they also make it difficult to distinguish many implausible reorderings from those that are linguistically plausible. Rather than appealing to annotation-driven syntactic modeling, we address this problem by observing the influential role of function words in determining syntactic structure, and introducing soft constraints on function word relationships as part of a standard log-linear hierarchical phrase-based model. Experimentation on Chinese-English and Arabic-English translation demonstrates that the approach yields significant gains in performance.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by the ","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"wang-matthews-2008-species","url":"https:\/\/aclanthology.org\/W08-0610","title":"Species Disambiguation for Biomedical Term Identification","abstract":"An important task in information extraction (IE) from biomedical articles is term identification (TI), which concerns linking entity mentions (e.g., terms denoting proteins) in text to unambiguous identifiers in standard databases (e.g., RefSeq). Previous work on TI has focused on species-specific documents. However, biomedical documents, especially full-length articles, often talk about entities across a number of species, in which case resolving species ambiguity becomes an indispensable part of TI. This paper describes our rule-based and machine-learning based approaches to species disambiguation and demonstrates that performance of TI can be improved by over 20% if the correct species are known. We also show that using the species predicted by the automatic species taggers can improve TI by a large margin.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We tested the TI system on the four original BioCreAtIvE GN datasets separately and the averaged performance was about the median among the participating systems in the workshops. 
We did not optimise the TXM TI system on BioCreAtIvE, as our point here is to measure the TI performance with or without help from the automatic predicted species.","year":2008,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"martinovic-1994-universal","url":"https:\/\/aclanthology.org\/C94-2148","title":"Universal Guides and Finiteness and Symmetry of Grammar Processing Algorithms","abstract":"This paper presents a novel technique called \"universal guides\" which explores inherent properties of logic grammars (changing variable binding status) in order to characterize formal criteria for termination in a derivation process. The notion of universal guides also offers a new framework in which both parsing and generation can be viewed merely as two different instances of the same generic process: guide consumption. This technique generalizes and exemplifies a new and original use of an existing concept of \"proper guides\" recently proposed in literature for controlling top-down left-to-right (TDLR) execution in logic programs. We show that universal guides are independent of a particular grammar evaluation strategy. Also, unlike proper guides they can be specified in the same manner for any given algorithm without knowing in advance whether the algorithm is a parsing or a generation algorithm. Their introduction into a grammar prevents as well the occurrence of certain grammar rules an infinite number of times during a derivation process.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"suzuki-etal-2002-topic","url":"https:\/\/aclanthology.org\/C02-2012","title":"Topic Tracking using Subject Templates and Clustering Positive Training Instances","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"stoyanchev-etal-2008-exact","url":"https:\/\/aclanthology.org\/W08-1802","title":"Exact Phrases in Information Retrieval for Question Answering","abstract":"Question answering (QA) is the task of finding a concise answer to a natural language question. The first stage of QA involves information retrieval. Therefore, performance of an information retrieval subsystem serves as an upper bound for the performance of a QA system. In this work we use phrases automatically identified from questions as exact match constituents to search queries. Our results show an improvement over baseline on several document and sentence retrieval measures on the WEB dataset. We get a 20% relative improvement in MRR for sentence extraction on the WEB dataset when using automatically generated phrases and a further 9.5% relative improvement when using manually annotated phrases. Surprisingly, a separate experiment on the indexed AQUAINT dataset showed no effect on IR performance of using exact phrases.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank professor Amanda Stent for suggestions about experiments and proofreading the paper. 
We would like to thank the reviewers for useful comments.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"kiritchenko-mohammad-2016-capturing","url":"https:\/\/aclanthology.org\/N16-1095","title":"Capturing Reliable Fine-Grained Sentiment Associations by Crowdsourcing and Best--Worst Scaling","abstract":"Access to word-sentiment associations is useful for many applications, including sentiment analysis, stance detection, and linguistic analysis. However, manually assigning finegrained sentiment association scores to words has many challenges with respect to keeping annotations consistent. We apply the annotation technique of Best-Worst Scaling to obtain real-valued sentiment association scores for words and phrases in three different domains: general English, English Twitter, and Arabic Twitter. We show that on all three domains the ranking of words by sentiment remains remarkably consistent even when the annotation process is repeated with a different set of annotators. We also, for the first time, determine the minimum difference in sentiment association that is perceptible to native speakers of a language.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"lin-chen-2010-risk","url":"https:\/\/aclanthology.org\/P10-1009","title":"A Risk Minimization Framework for Extractive Speech Summarization","abstract":"In this paper, we formulate extractive summarization as a risk minimization problem and propose a unified probabilistic framework that naturally combines supervised and unsupervised summarization models to inherit their individual merits as well as to overcome their inherent limitations. In addition, the introduction of various loss functions also provides the summarization framework with a flexible but systematic way to render the redundancy and coherence relationships among sentences and between sentences and the whole document, respectively. Experiments on speech summarization show that the methods deduced from our framework are very competitive with existing summarization approaches.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"zhou-etal-2021-defense","url":"https:\/\/aclanthology.org\/2021.acl-long.426","title":"Defense against Synonym Substitution-based Adversarial Attacks via Dirichlet Neighborhood Ensemble","abstract":"Although deep neural networks have achieved prominent performance on many NLP tasks, they are vulnerable to adversarial examples. We propose Dirichlet Neighborhood Ensemble (DNE), a randomized method for training a robust model to defense synonym substitutionbased attacks. During training, DNE forms virtual sentences by sampling embedding vectors for each word in an input sentence from a convex hull spanned by the word and its synonyms, and it augments them with the training data. In such a way, the model is robust to adversarial attacks while maintaining the performance on the original clean data. 
DNE is agnostic to the network architectures and scales to large models (e.g., BERT) for NLP applications. Through extensive experimentation, we demonstrate that our method consistently outperforms recently proposed defense methods by a significant margin across different network architectures and multiple data sets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by Shanghai Municipal Science and Technology Major Project (No. 2021SHZDZX0103), National Science Foundation of China (No. 62076068) and Zhangjiang Lab.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"lin-yu-2020-adaptive","url":"https:\/\/aclanthology.org\/2020.rocling-1.22","title":"An Adaptive Method for Building a Chinese Dimensional Sentiment Lexicon","abstract":"[16, 17, 18], so that in the end-to-end back-propagation process the parameters of the neurons are adjusted automatically to minimise the error. Architecturally, such models can be roughly divided into two parts, an encoder and a decoder: the encoder is responsible for extracting features from the raw data, while the decoder is responsible for decoding the extracted features into target values. Because deep learning architectures contain an encoder, whose mapping preserves representations, they possess excellent representation-learning ability [19, 20, 21], for example word embeddings.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"he-etal-2020-learning","url":"https:\/\/aclanthology.org\/2020.coling-main.106","title":"Learning Efficient Task-Specific Meta-Embeddings with Word Prisms","abstract":"Word embeddings are trained to predict word cooccurrence statistics, which leads them to possess different lexical properties (syntactic, semantic, etc.) depending on the notion of context defined at training time. These properties manifest when querying the embedding space for the most similar vectors, and when used at the input layer of deep neural networks trained to solve downstream NLP problems. Meta-embeddings combine multiple sets of differently trained word embeddings, and have been shown to successfully improve intrinsic and extrinsic performance over equivalent models which use just one set of source embeddings. We introduce word prisms: a simple and efficient meta-embedding method that learns to combine source embeddings according to the task at hand. Word prisms learn orthogonal transformations to linearly combine the input source embeddings, which allows them to be very efficient at inference time. 
We evaluate word prisms in comparison to other meta-embedding methods on six extrinsic evaluations and observe that word prisms offer improvements in performance on all tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by the Fonds de recherche du Qu\u00e9bec -Nature et technologies, by the Natural Sciences and Engineering Research Council of Canada, and by Compute Canada. The last author is supported in part by the Canada CIFAR AI Chair program.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"li-etal-2007-semantic","url":"https:\/\/aclanthology.org\/P07-1016","title":"Semantic Transliteration of Personal Names","abstract":"Words of foreign origin are referred to as borrowed words or loanwords. A loanword is usually imported to Chinese by phonetic transliteration if a translation is not easily available. Semantic transliteration is seen as a good tradition in introducing foreign words to Chinese. Not only does it preserve how a word sounds in the source language, it also carries forward the word's original semantic attributes. This paper attempts to automate the semantic transliteration process for the first time. We conduct an inquiry into the feasibility of semantic transliteration and propose a probabilistic model for transliterating personal names in Latin script into Chinese. The results show that semantic transliteration substantially and consistently improves accuracy over phonetic transliteration in all the experiments.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"nabizadeh-etal-2020-myfixit","url":"https:\/\/aclanthology.org\/2020.lrec-1.260","title":"MyFixit: An Annotated Dataset, Annotation Tool, and Baseline Methods for Information Extraction from Repair Manuals","abstract":"Text instructions are among the most widely used media for learning and teaching. Hence, to create assistance systems that are capable of supporting humans autonomously in new tasks, it would be immensely productive, if machines were enabled to extract task knowledge from such text instructions. In this paper, we, therefore, focus on information extraction (IE) from the instructional text in repair manuals. This brings with it the multiple challenges of information extraction from the situated and technical language in relatively long and often complex instructions. To tackle these challenges, we introduce a semi-structured dataset of repair manuals. The dataset is annotated in a large category of devices, with information that we consider most valuable for an automated repair assistant, including the required tools and the disassembled parts at each step of the repair progress. We then propose methods that can serve as baselines for this IE task: an unsupervised method based on a bags-of-n-grams similarity for extracting the needed tools in each repair step, and a deep-learning-based sequence labeling model for extracting the identity of disassembled parts. 
These baseline methods are integrated into a semi-automatic web-based annotator application that is also available along with the dataset.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"volk-1997-probing","url":"https:\/\/aclanthology.org\/P97-1015","title":"Probing the Lexicon in Evaluating Commercial MT Systems","abstract":"In the past the evaluation of machine translation systems has focused on single system evaluations because there were only few systems available. But now there are several commercial systems for the same language pair. This requires new methods of comparative evaluation. In the paper we propose a black-box method for comparing the lexical coverage of MT systems. The method is based on lists of words from different frequency classes. It is shown how these word lists can be compiled and used for testing. We also present the results of using our method on 6 MT systems that translate between English and German.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"johnson-riezler-2000-exploiting","url":"https:\/\/aclanthology.org\/A00-2021","title":"Exploiting auxiliary distributions in stochastic unification-based grammars","abstract":"This paper describes a method for estimating conditional probability distributions over the parses of \"unification-based\" grammars which can utilize auxiliary distributions that are estimated by other means. We show how this can be used to incorporate information about lexical selectional preferences gathered from other sources into Stochastic \"Unificationbased\" Grammars (SUBGs). While we apply this estimator to a Stochastic Lexical-Functional Grammar, the method is general, and should be applicable to stochastic versions of HPSGs, categorial grammars and transformational grammars.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"palmer-etal-2000-semantic","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2000\/pdf\/197.pdf","title":"Semantic Tagging for the Penn Treebank","abstract":"This paper describes the methodology that is being used to augment the Penn Treebank annotation with sense tags and other types of semantic information. Inspired by the results of SENSEVAL, and the high inter-annotator agreement that was achieved there, similar methods were used for a pilot study of 5000 words of running text from the Penn Treebank. Using the same techniques of allowing the annotators to discuss difficult tagging cases and to revise WordNet entries if necessary, comparable inter-annotator rates have been achieved. The criteria for determining appropriate revisions and ensuring clear sense distinctions are described. 
We are also using hand correction of automatic predicate argument structure information to provide additional thematic role labeling.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This paper reports on work supported by NSF grant IIS-9800658.","year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"wood-doughty-etal-2022-model","url":"https:\/\/aclanthology.org\/2022.bionlp-1.41","title":"Model Distillation for Faithful Explanations of Medical Code Predictions","abstract":"Machine learning models that offer excellent predictive performance often lack the interpretability necessary to support integrated human machine decision-making. In clinical or other high-risk settings, domain experts may be unwilling to trust model predictions without explanations. Work in explainable AI must balance competing objectives along two different axes: 1) Models should ideally be both accurate and simple. 2) Explanations must balance faithfulness to the model's decisionmaking with their plausibility to a domain expert. We propose to use knowledge distillation, or training a student model that mimics the behavior of a trained teacher model, as a technique to generate faithful and plausible explanations. We evaluate our approach on the task of assigning ICD codes to clinical notes to demonstrate that the student model is faithful to the teacher model's behavior and produces quality natural language explanations.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We acknowledge support provided by the Johns Hopkins Institute for Assured Autonomy. We thank Sarah Wiegreffe and Jacob Eisenstein for their help and plausibility annotations.","year":2022,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"croce-etal-2019-auditing","url":"https:\/\/aclanthology.org\/D19-1415","title":"Auditing Deep Learning processes through Kernel-based Explanatory Models","abstract":"While NLP systems become more pervasive, their accountability gains value as a focal point of effort. Epistemological opaqueness of nonlinear learning methods, such as deep learning models, can be a major drawback for their adoptions. In this paper, we discuss the application of Layerwise Relevance Propagation over a linguistically motivated neural architecture, the Kernel-based Deep Architecture, in order to trace back connections between linguistic properties of input instances and system decisions. Such connections then guide the construction of argumentations on the network's inferences, i.e., explanations based on real examples that are semantically related to the input. We also propose here a methodology to evaluate the transparency and coherence of analogy-based explanations modeling an audit stage for the system. 
Quantitative analysis on two semantic tasks, i.e., question classification and semantic role labeling, shows that the explanatory capabilities (native in KDAs) are effective and they pave the way to more complex argumentation methods.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"mohri-etal-2004-statistical","url":"https:\/\/aclanthology.org\/P04-1008","title":"Statistical Modeling for Unit Selection in Speech Synthesis","abstract":"Traditional concatenative speech synthesis systems use a number of heuristics to define the target and concatenation costs, essential for the design of the unit selection component. In contrast to these approaches, we introduce a general statistical modeling framework for unit selection inspired by automatic speech recognition. Given appropriate data, techniques based on that framework can result in a more accurate unit selection, thereby improving the general quality of a speech synthesizer. They can also lead to a more modular and a substantially more efficient system. We present a new unit selection system based on statistical modeling. To overcome the original absence of data, we use an existing high-quality unit selection system to generate a corpus of unit sequences. We show that the concatenation cost can be accurately estimated from this corpus using a statistical n-gram language model over units. We used weighted automata and transducers for the representation of the components of the system and designed a new and more efficient composition algorithm making use of string potentials for their combination. The resulting statistical unit selection is shown to be about 2.6 times faster than the last release of the AT&T Natural Voices Product while preserving the same quality, and offers much flexibility for the use and integration of new and more complex components.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Mark Beutnagel for helping us clarify some of the details of the unit selection system in the AT&T Natural Voices Product speech synthesizer. Mark also generated the training corpora and set up the listening test used in our experiments.We also acknowledge discussions with Brian Roark about various statistical language modeling topics in the context of unit selection.","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"clark-gardner-2018-simple","url":"https:\/\/aclanthology.org\/P18-1078","title":"Simple and Effective Multi-Paragraph Reading Comprehension","abstract":"We introduce a method of adapting neural paragraph-level question answering models to the case where entire documents are given as input. Most current question answering models cannot scale to document or multi-document input, and naively applying these models to each paragraph independently often results in them being distracted by irrelevant text. We show that it is possible to significantly improve performance by using a modified training scheme that teaches the model to ignore non-answer containing paragraphs. 
Our method involves sampling multiple paragraphs from each document, and using an objective function that requires the model to produce globally correct output. We additionally identify and improve upon a number of other design decisions that arise when working with document-level data. Experiments on TriviaQA and SQuAD shows our method advances the state of the art, including a 10 point gain on TriviaQA.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"barbella-forbus-2010-analogical","url":"https:\/\/aclanthology.org\/W10-0912","title":"Analogical Dialogue Acts: Supporting Learning by Reading Analogies","abstract":"Analogy is heavily used in written explanations, particularly in instructional texts. We introduce the concept of analogical dialogue acts (ADAs) which represent the roles utterances play in instructional analogies. We describe a catalog of such acts, based on ideas from structure-mapping theory. We focus on the operations that these acts lead to while understanding instructional texts, using the Structure-Mapping Engine (SME) and dynamic case construction in a computational model. We test this model on a small corpus of instructional analogies, expressed in simplified English, which were understood via a semiautomatic natural language system using analogical dialogue acts. The model enabled a system to answer questions after understanding the analogies that it was not able to answer without them.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported by the Intelligent and Autonomous Systems Program of the Office of Naval Research.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"sundheim-1991-third","url":"https:\/\/aclanthology.org\/H91-1059","title":"Third Message Understanding Evaluation and Conference (MUC-3): Phase 1 Status Report","abstract":"The Naval Ocean Systems Center is conducting the third in a series of evaluations of English text analysis systems. The premise on which the evaluations are based is that task-oriented tests enable straightforward comparisons among systems and provide useful quantitative data on the state of the art in text understanding. Furthermore, the data can be interpreted in light of information known about each system's text analysis techniques in order to yield qualitative insights into the relative validity of those techniques as applied to the general problem of information extraction. A dry-run phase of the third evaluation was completed in February, 1991, and the official testing will be done in May, 1991, concluding with the Third Message Understanding Conference (MUC-3). Twelve sites reported results for the dryrun test at a meeting held in February, 1991. 
All systems are being evaluated on the basis of performance on the information extraction task in a blind test at the end of each phase of the evaluation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The author is indebted to all the organizations participating in MUC-3 and to certain individuals in particular who have contributed extra time and energy to ensure the evaluation's success, among them Laura Balcom, Scan Boisen, Nancy Chinchor, Ralph Grishman, Pete Halverson, Jerry Hobbs, Cheryl Kariya, George Krupka, David Lewis, Lisa Rau, John Sterling, Charles Wayne, and Carl Weir.","year":1991,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"kementchedjhieva-etal-2021-john","url":"https:\/\/aclanthology.org\/2021.findings-acl.429","title":"John praised Mary because \\_he\\_? Implicit Causality Bias and Its Interaction with Explicit Cues in LMs","abstract":"Some interpersonal verbs can implicitly attribute causality to either their subject or their object and are therefore said to carry an implicit causality (IC) bias. Through this bias, causal links can be inferred from a narrative, aiding language comprehension. We investigate whether pre-trained language models (PLMs) encode IC bias and use it at inference time. We find that to be the case, albeit to different degrees, for three distinct PLM architectures. However, causes do not always need to be implicit-when a cause is explicitly stated in a subordinate clause, an incongruent IC bias associated with the verb in the main clause leads to a delay in human processing. We hypothesize that the temporary challenge humans face in integrating the two contradicting signals, one from the lexical semantics of the verb, one from the sentence-level semantics, would be reflected in higher error rates for models on tasks dependent on causal links. The results of our study lend support to this hypothesis, suggesting that PLMs tend to prioritize lexical patterns over higher-order signals.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Daniel Hershcovich, Ana Valeria Gonz\u00e1lez, Emanuele Bugliarello, and Mareike Hartmann for feedback on the drafts of this paper. We thank Desmond Elliott, Stella Frank and Dustin Wright, and Mareike Hartmann for their help with the annotation of the newly developed stimuli.Yova was funded by Innovation Fund Denmark, under the AutoML4CS project. Mark received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (FAST-PARSE, grant agreement No 714150) and from the Centro de Investigaci\u00f3n de Galicia (CITIC) which is funded by the Xunta de Galicia and the European Union (ERDF -Galicia 2014-2020 Program) by grant ED431G 2019\/01.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"okada-miura-1982-conceptual","url":"https:\/\/aclanthology.org\/C82-2051","title":"Conceptual Taxonomy of Japanese Adjectives for Understanding Natural Language and Picture Patterns","abstract":"This paper presents a conceptual taxonomy of Japanese adjectives, succeeding that on Japanese verbs'. 
In this taxonomy, natural language is associated with real world things -- matter, events, attributes -- and mental activities -- spiritual and sensual. Adjective concepts are divided into two large classes, simple and non-simple. Simple concepts cannot be reduced into further elementary adjective concepts, whereas non-simple ones can be. Roughly speaking, simple concepts are concrete and can be directly associated with physical and mental attributes, whereas non-simple ones are abstract and indirectly associated with them.\nVerb concepts were well understood as \"change\" from state S0 to state S1 as shown in Fig. 1. Adjective concepts are considered to be captured as the \"difference\" between objects O0 and O1. Fig. 2 shows how the difference in vertical length between O0 and O1 brings about the concept of \"high\". Notice that surface structures often lack the expression of O0 like \"yama-ga takai (the mountain is high)\". Since the meaning of \"high\" cannot be expressed only by O1, deep structures need O0 as an object for comparison. otoko-ga ie-kara deru.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1982,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"tanaka-2002-measuring","url":"https:\/\/aclanthology.org\/C02-1065","title":"Measuring the Similarity between Compound Nouns in Different Languages Using Non-Parallel Corpora","abstract":"This paper presents a method that measures the similarity between compound nouns in different languages to locate translation equivalents from corpora. The method uses information from unrelated corpora in different languages that do not have to be parallel. This means that many corpora can be used. The method compares the contexts of target compound nouns and translation candidates in the word or semantic attribute level. In this paper, we show how this measuring method can be applied to select the best English translation candidate for Japanese compound nouns in more than 70% of the cases.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was supported in part by the Research Collaboration between NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation and CSLI, Stanford University. The author would like to thank Timothy Baldwin of CSLI and Francis Bond of NTT for their valuable comments.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"wei-etal-2009-co","url":"https:\/\/aclanthology.org\/P09-2030","title":"Co-Feedback Ranking for Query-Focused Summarization","abstract":"In this paper, we propose a novel ranking framework-Co-Feedback Ranking (Co-FRank), which allows two base rankers to supervise each other during the ranking process by providing their own ranking results as feedback to the other parties so as to boost the ranking performance. The mutual ranking refinement process continues until the two base rankers cannot learn from each other any more. The overall performance is improved by the enhancement of the base rankers through the mutual learning mechanism. We apply this framework to the sentence ranking problem in query-focused summarization and evaluate its effectiveness on the DUC 2005 data set. 
The results are promising.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work described in this paper was supported by the Hong Kong Polytechnic University internal the grants (G-YG80 and G-YH53) and the China NSF grant (60703008).","year":2009,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"kroll-etal-2014-study","url":"https:\/\/aclanthology.org\/W14-6006","title":"A Study of Scientific Writing: Comparing Theoretical Guidelines with Practical Implementation","abstract":"Good scientific writing is a skill researchers seek to acquire. Textbook literature provides guidelines to improve scientific writing, for instance, \"use active voice when describing your own work\". In this paper we investigate to what extent researchers adhere to textbook principles in their articles. In our analyses we examine a set of selected principles which (i) are general and (ii) verifiable by applying text mining and natural language processing techniques. We develop a framework to automatically analyse a large data set containing \u223c14.000 scientific articles received from Mendeley and PubMed. We are interested in whether adhering to writing principles is related to scientific quality, scientific domain or gender and whether these relations change over time. Our results show (i) a clear relation between journal quality and scientific imprecision, i.e. journals with low impact factors exhibit higher numbers of imprecision indicators such as number of citation bunches and number of relativating words and (ii) that writing style partly depends on domain characteristics and preferences.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Mendeley for providing the data set as well as Werner Klieber for crawling the PubMed data set. The presented work was developed within the CODE project funded by the EU FP7 (grant no. 296150). The Know-Center is funded within the Austrian COMET Program -Competence Centers for Excellent Technologies -under the auspices of the Austrian Federal Ministry of Transport, Innovation and Technology, the Austrian Federal Ministry of Economy, Family and Youth and by the State of Styria. COMET is managed by the Austrian Research Promotion Agency FFG.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"horne-etal-2020-grubert","url":"https:\/\/aclanthology.org\/2020.aacl-srw.19","title":"GRUBERT: A GRU-Based Method to Fuse BERT Hidden Layers for Twitter Sentiment Analysis","abstract":"In this work, we introduce a GRU-based architecture called GRUBERT that learns to map the different BERT hidden layers to fused embeddings with the aim of achieving high accuracy on the Twitter sentiment analysis task. Tweets are known for their highly diverse language, and by exploiting different linguistic information present across BERT hidden layers, we can capture the full extent of this language at the embedding level. Our method can be easily adapted to other embeddings capturing different linguistic information. We show that our method outperforms well-known heuristics of using BERT (e.g. using only the last layer) and other embeddings such as ELMo. 
We observe potential label noise resulting from the data acquisition process and employ early stopping as well as a voting classifier to overcome it.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the Data Analytics Lab at ETH Zurich for providing computing infrastructure. We also thank them, in addition to our mentor Shuhei Kurita and the anonymous reviewers, for valuable feedback.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"lu-roth-2015-joint","url":"https:\/\/aclanthology.org\/D15-1102","title":"Joint Mention Extraction and Classification with Mention Hypergraphs","abstract":"We present a novel model for the task of joint mention extraction and classification. Unlike existing approaches, our model is able to effectively capture overlapping mentions with unbounded lengths. The model is highly scalable, with a time complexity that is linear in the number of words in the input sentence and linear in the number of possible mention classes. Our model can be extended to additionally capture mention heads explicitly in a joint manner under the same time complexity. We demonstrate the effectiveness of our model through extensive experiments on standard datasets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Kian Ming A. Chai, Hai Leong Chieu and the three anonymous reviewers for their comments on this work. This work is supported by Temasek Lab of Singapore University of Technology and Design project IGDSS1403011 and IGDST1403013, and is partly supported by DARPA (under agreement number FA8750-13-2-0008).","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"jansen-ustalov-2019-textgraphs","url":"https:\/\/aclanthology.org\/D19-5309","title":"TextGraphs 2019 Shared Task on Multi-Hop Inference for Explanation Regeneration","abstract":"While automated question answering systems are increasingly able to retrieve answers to natural language questions, their ability to generate detailed human-readable explanations for their answers is still quite limited. The Shared Task on Multi-Hop Inference for Explanation Regeneration tasks participants with regenerating detailed gold explanations for standardized elementary science exam questions by selecting facts from a knowledge base of semistructured tables. Each explanation contains between 1 and 16 interconnected facts that form an \"explanation graph\" spanning core scientific knowledge and detailed world knowledge. It is expected that successfully combining these facts to generate detailed explanations will require advancing methods in multihop inference and information combination, and will make use of the supervised training data provided by the WorldTree explanation corpus. The top-performing system achieved a mean average precision (MAP) of 0.56, substantially advancing the state-of-the-art over a baseline information retrieval model. 
Detailed extended analyses of all submitted systems showed large relative improvements in accessing the most challenging multi-hop inference problems, while absolute performance remains low, highlighting the difficulty of generating detailed explanations through multihop reasoning.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"who were funded by the Allen Institute for Artificial Intelligence (AI2). Peter Jansen's work on the shared task was supported by National Science Foundation (NSF Award #1815948, \"Explainable Natural Language Inference\"). Dmitry Ustalov's work on the shared task at the University of Mannheim was supported by the Deutsche Forschungsgemeinschaft (DFG) foundation under the \"JOIN-T\" project.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"engelbrecht-schultz-2005-rapid","url":"https:\/\/aclanthology.org\/2005.iwslt-1.22","title":"Rapid Development of an Afrikaans English Speech-to-Speech Translator","abstract":"In this paper we investigate the rapid deployment of a twoway Afrikaans to English Speech-to-Speech Translation system. We discuss the approaches and amount of work involved to port a system to a new language pair, i.e. the steps required to rapidly adapt ASR, MT and TTS component to Afrikaans under limited time and data constraints. The resulting system represents the first prototype built for Afrikaans to English speech translation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors wish to thank the following persons for their contributions: Paisarn Charoenpornsawat, Alan Black, Matthias Eck, Bing Zhao, Szu-Chen Jou, Susanne Burger and Thomas Schaaf.","year":2005,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"verhagen-2010-brandeis","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/740_Paper.pdf","title":"The Brandeis Annotation Tool","abstract":"The Brandeis Annotation Tool is a web-based text annotation tool that is centered around the notions of layered annotation and task decomposition. It allows annotations to refer to other annotations and to take a complicated task and split it into easier subtasks. The web-interface connects annotators to a central repository for all data and simplifies many of the housekeeping tasks while keeping requirements at a minimum (that is, users only need an internet connection and a well-behaved browser). BAT has been used mainly for temporal annotation, but can be considered a more general tool for several kinds of textual annotation.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"silverman-etal-1992-towards","url":"https:\/\/aclanthology.org\/H92-1088","title":"Towards Using Prosody in Speech Recognition\/Understanding Systems: Differences Between Read and Spontaneous Speech","abstract":"A persistent problem for keyword-driven speech recognition systems is that users often embed the to-be-recognized words or phrases in longer utterances. 
The recognizer needs to locate the relevant sections of the speech signal and ignore extraneous words. Prosody might provide an extra source of information to help locate target words embedded in other speech. In this paper we examine some prosodic characteristics of 160 such utterances and compare matched read and spontaneous versions. Half of the utterances are from a corpus of spontaneous answers to requests for the name of a city, recorded from calls to Directory Assistance Operators. The other half are the same word strings read by volunteers attempting to model the real dialogue. Results show a consistent pattern across both sets of data: embedded city names almost always bear nuclear pitch accents and are in their own intonational phrases. However, the distributions of tonal make-up of these prosodic features differ markedly in read versus spontaneous speech, implying that if algorithms that exploit these prosodic regularities are trained on read speech, then the probabilities are likely to be incorrect models of real user speech.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Sheri Walzman learned prosodic transcription and labored long doing careful labelling. Lisa Russell developed the automated recording facility, helped find suitable volunteers, and imposed organization and order on the data collection effort. Without the help of these two people this work would never have seen the light of day. Any abuses of their work nevertheless remain our own responsibility.","year":1992,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"wu-wang-2019-ji","url":"https:\/\/aclanthology.org\/2019.rocling-1.7","title":"\u57fa\u65bcBERT\u6a21\u578b\u4e4b\u591a\u570b\u8a9e\u8a00\u6a5f\u5668\u95b1\u8b80\u7406\u89e3\u7814\u7a76(Multilingual Machine Reading Comprehension based on BERT Model)","abstract":"In recent years, the Internet provides more and more information for people in daily life. Due to the limitation of information retrieval techniques, information retrieved might not be related and helpful for users. Two ","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"sagawa-etal-1994-parser","url":"https:\/\/aclanthology.org\/C94-1098","title":"A Parser Coping With Self-Repaired Japanese Utterances and Large Corpus-Based Evaluation","abstract":"Self-repair (Levelt 1988) is a repair of an utterance by the speaker him\/herself. A human speaker makes self-repairs very frequently in spontaneous speech. (Blackmer and Mitton 1991) reported that self-repairs are made once every 4.8 seconds in dialogues taken from radio talk shows.\nSelf-repair is one kind of \"permissible ill-formedness\", that is, a human listener can feel ill-formedness in it but he\/she is able to recognize its intended meaning. 
Thus the partner does not need to interrupt the dialogue.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"xie-etal-2021-importance","url":"https:\/\/aclanthology.org\/2021.acl-long.445","title":"Importance-based Neuron Allocation for Multilingual Neural Machine Translation","abstract":"Multilingual neural machine translation with a single model has drawn much attention due to its capability to deal with multiple languages. However, the current multilingual translation paradigm often makes the model tend to preserve the general knowledge, but ignore the language-specific knowledge. Some previous works try to solve this problem by adding various kinds of language-specific modules to the model, but they suffer from the parameter explosion problem and require specialized manual design. To solve these problems, we propose to divide the model neurons into general and language-specific parts based on their importance across languages. The general part is responsible for preserving the general knowledge and participating in the translation of all the languages, while the language-specific part is responsible for preserving the language-specific knowledge and participating in the translation of some specific languages. Experimental results on several language pairs, covering IWSLT and Europarl corpus datasets, demonstrate the effectiveness and universality of the proposed method.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank all the anonymous reviewers for their insightful and valuable comments. This work was supported by National Key R&D Program of China (NO. 2017YFE0192900).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"furuse-1994-transfer","url":"https:\/\/aclanthology.org\/1994.amta-1.32","title":"Transfer-Driven Machine Translation","abstract":"Transfer-Driven Machine Translation (TDMT) [1, 2] is a translation technique developed as a research project at ATR Interpreting Telecommunications Research Laboratories. In TDMT, translation is performed mainly by a transfer module which applies transfer knowledge to an input sentence. Other modules, such as lexical processing, analysis, contextual processing and generation, cooperate with the transfer module to improve translation performance. This transfer-centered mechanism can achieve efficient and robust translation by making the most of the example-based framework, which calculates a semantic distance between linguistic expressions. A TDMT prototype system is written in LISP and is demonstrated on a SUN workstation. In our TDMT demonstration, the following items are presented.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"popowich-1985-saumer","url":"https:\/\/aclanthology.org\/E85-1007","title":"SAUMER: Sentence Analysis Using Metarules","abstract":"The SAUMER system uses specifications of natural language grammars, which consist of rules and metarules, 
to provide a semantic interpretation of an input sentence. The SAUMER Specification Language (SSL) is a programming language which combines some of the features of generalised phrase structure grammars (Gazdar, 1981), like the correspondence between syntactic and semantic rules, with definite clause grammars (DCGs) (Pereira and Warren, 1980) to create an executable grammar specification. SSL rules are similar to DCG rules except that they contain a semantic component and may also be left recursive. Metarules are used to generate new rules from existing rules before any parsing is attempted. An implementation is tested which can provide semantic interpretations for sentences containing topicalisation, relative clauses, passivisation, and questions. It should also be noted that, due to the separability of the semantic component from the grammar rule, a different semantic notation could easily be introduced as long as the appropriate semantic processing routines were replaced. The use of SAUMER with an \"AI-adapted\" version of Montague's Intensional Logic is being examined by Fawcett (1984).","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"I would like to thank Nick Cercone for reading an earlier version of this paper and providing some useful suggestions. The comments of the referees were also helpful. Facilities for this research were provided by the Laboratory for Computer and Communications Research. ","year":1985,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"klie-etal-2021-human","url":"https:\/\/aclanthology.org\/2021.dash-1.6","title":"Human-In-The-Loop Entity Linking for Low Resource Domains","abstract":"Entity linking (EL) is concerned with disambiguating entity mentions in a text against a knowledge base (KB). To quickly annotate texts with EL in low-resource domains and noisy text, we present a novel Human-In-The-Loop EL approach. We show that it greatly outperforms a strong baseline in simulation. In a user study, annotation time is reduced by 35% compared to annotating without interactive support; users report that they strongly prefer our new approach. An open-source and ready-to-use implementation based on the text annotation platform INCEpTION is made available.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"ozdowska-2008-cross","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/207_paper.pdf","title":"Cross-Corpus Evaluation of Word Alignment","abstract":"We present the procedures we implemented to carry out system oriented evaluation of a syntax-based word aligner, ALIBI. We take the approach of regarding cross-corpus evaluation as part of system oriented evaluation assuming that corpus type may impact alignment performance. We test our system on three English-French parallel corpora. The evaluation procedures include the creation of a reference set with multiple annotations of the same data for each corpus, the assessment of inter-annotator agreement rates and an analysis of the reference sets. 
We show that alignment performance varies across corpora according to the multiple references produced and further motivate our choice of preserving all reference annotations without resolving disagreements between annotators.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Thanks to Science Foundation Ireland (http:\/\/www.sfi.ie) Principal Investigator Award 05\/IN\/1732 for part-funding this research.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"afantenos-etal-2010-learning","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/582_Paper.pdf","title":"Learning Recursive Segments for Discourse Parsing","abstract":"Automatically detecting discourse segments is an important preliminary step towards full discourse parsing. Previous research on discourse segmentation has relied on the assumption that elementary discourse units (EDUs) in a document always form a linear sequence (i.e., they can never be nested). Unfortunately, this assumption turns out to be too strong, for some theories of discourse like SDRT allow for nested discourse units. In this paper, we present a simple approach to discourse segmentation that is able to produce nested EDUs. Our approach builds on standard multi-class classification techniques combined with a simple repairing heuristic that enforces global coherence. Our system was developed and evaluated on the first round of annotations provided by the French Annodis project (an ongoing effort to create a discourse bank for French). Cross-validated on only 47 documents (1,445 EDUs), our system achieves encouraging performance results with an F-score of 73% for finding EDUs.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"yu-jiang-2015-hassle","url":"https:\/\/aclanthology.org\/P15-2028","title":"A Hassle-Free Unsupervised Domain Adaptation Method Using Instance Similarity Features","abstract":"We present a simple yet effective unsupervised domain adaptation method that can be generally applied for different NLP tasks. Our method uses unlabeled target domain instances to induce a set of instance similarity features. These features are then combined with the original features to represent labeled source domain instances. Using three NLP tasks, we show that our method consistently outperforms a few baselines, including SCL, an existing general unsupervised domain adaptation method widely used in NLP. 
More importantly, our method is very easy to implement and incurs much less computational cost than SCL.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the reviewers for their valuable comments.","year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"wu-etal-2018-word","url":"https:\/\/aclanthology.org\/D18-1482","title":"Word Mover's Embedding: From Word2Vec to Document Embedding","abstract":"While the celebrated Word2Vec technique yields semantically rich representations for individual words, there has been relatively less success in extending to generate unsupervised sentences or documents embeddings. Recent work has demonstrated that a distance measure between documents called Word Mover's Distance (WMD) that aligns semantically similar words, yields unprecedented KNN classification accuracy. However, WMD is expensive to compute, and it is hard to extend its use beyond a KNN classifier. In this paper, we propose the Word Mover's Embedding (WME), a novel approach to building an unsupervised document (sentence) embedding from pre-trained word embeddings. In our experiments on 9 benchmark text classification datasets and 22 textual similarity tasks, the proposed technique consistently matches or outperforms state-of-the-art techniques, with significantly higher accuracy on problems of short length.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"mensa-etal-2017-ttcs","url":"https:\/\/aclanthology.org\/W17-1912","title":"TTCS$^{\\mathcal{E}}$: a Vectorial Resource for Computing Conceptual Similarity","abstract":"In this paper we introduce the TTCS$^{\\mathcal{E}}$, a linguistic resource that relies on BabelNet, NASARI and ConceptNet, that has now been used to compute the conceptual similarity between concept pairs. The conceptual representation herein provides uniform access to concepts based on BabelNet synset IDs, and consists of a vector-based semantic representation which is compliant with the Conceptual Spaces, a geometric framework for common-sense knowledge representation and reasoning. The TTCS$^{\\mathcal{E}}$ has been evaluated in a preliminary experimentation on a conceptual similarity task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"cui-etal-2017-attention","url":"https:\/\/aclanthology.org\/P17-1055","title":"Attention-over-Attention Neural Networks for Reading Comprehension","abstract":"Cloze-style reading comprehension is a representative problem in mining relationship between document and query. In this paper, we present a simple but novel model called attention-over-attention reader for better solving cloze-style reading comprehension task. The proposed model aims to place another attention mechanism over the document-level attention and induces \"attended attention\" for final answer predictions. One advantage of our model is that it is simpler than related works while giving excellent performance. 
In addition to the primary model, we also propose an N-best re-ranking strategy to double check the validity of the candidates and further improve the performance. Experimental results show that the proposed methods significantly outperform various state-of-the-art systems by a large margin in public datasets, such as CNN and Children's Book Test.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank all three anonymous reviewers for their thorough reviewing and providing thoughtful comments to improve our paper. This work was supported by the National 863 Leading Technology Research Project via grant 2015AA015409.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"baur-etal-2016-shared","url":"https:\/\/aclanthology.org\/L16-1036","title":"A Shared Task for Spoken CALL?","abstract":"We argue that the field of spoken CALL needs a shared task in order to facilitate comparisons between different groups and methodologies, and describe a concrete example of such a task, based on data collected from a speech-enabled online tool which has been used to help young Swiss German teens practise skills in English conversation. Items are prompt-response pairs, where the prompt is a piece of German text and the response is a recorded English audio file. The task is to label pairs as \"accept\" or \"reject\", accepting responses which are grammatically and linguistically correct to match a set of hidden gold standard answers as closely as possible. Initial resources are provided so that a scratch system can be constructed with a minimal investment of effort, and in particular without necessarily using a speech recogniser. Training data for the task will be released in June 2016, and test data in January 2017.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Work at Geneva University was supported by the Swiss National Science Foundation (SNF) under grant 105219 153278\/1. We would like to thank Nuance for making their software available to us for research purposes, and Cathy Chua for helpful suggestions concerning the metric.","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"digalakis-etal-1990-fast","url":"https:\/\/aclanthology.org\/H90-1037","title":"Fast Search Algorithms for Connected Phone Recognition Using the Stochastic Segment Model","abstract":"In this paper we present methods for reducing the computation time of joint segmentation and recognition of phones using the Stochastic Segment Model (SSM). Our approach to the problem is twofold: first, we present a fast segment classification method that reduces computation by a factor of 2 to 4, depending on the confidence of choosing the most probable model. Second, we propose a Split and Merge segmentation algorithm as an alternative to the typical Dynamic Programming solution of the segmentation and recognition problem, with computation savings increasing proportionally with model complexity. 
Even though our current recognizer uses context-independent phone models, the results that we report on the TIMIT database for speaker independent joint segmentation and recognition are comparable to those of systems that use context information.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was jointly supported by NSF and DARPA under NSF grant # IRI-8902124.","year":1990,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"yang-etal-2019-read","url":"https:\/\/aclanthology.org\/D19-1512","title":"Read, Attend and Comment: A Deep Architecture for Automatic News Comment Generation","abstract":"Automatic news comment generation is a new testbed for techniques of natural language generation. In this paper, we propose a \"read-attend-comment\" procedure for news comment generation and formalize the procedure with a reading network and a generation network. The reading network comprehends a news article and distills some important points from it, then the generation network creates a comment by attending to the extracted discrete points and the news title. We optimize the model in an end-to-end manner by maximizing a variational lower bound of the true objective using the back-propagation algorithm. Experimental results on two datasets indicate that our model can significantly outperform existing methods in terms of both automatic evaluation and human judgment.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported in part by the National Natural Science Foundation of China (Grant Nos. U1636211, 61672081, 61370126), and the National Key R&D Program of China (No. 2016QY04W0802).","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"ke-etal-2019-araml","url":"https:\/\/aclanthology.org\/D19-1436","title":"ARAML: A Stable Adversarial Training Framework for Text Generation","abstract":"Most of the existing generative adversarial networks (GAN) for text generation suffer from the instability of reinforcement learning training algorithms such as policy gradient, leading to unstable performance. To tackle this problem, we propose a novel framework called Adversarial Reward Augmented Maximum Likelihood (ARAML). During adversarial training, the discriminator assigns rewards to samples which are acquired from a stationary distribution near the data rather than the generator's distribution. The generator is optimized with maximum likelihood estimation augmented by the discriminator's rewards instead of policy gradient. Experiments show that our model can outperform state-of-the-art text GANs with a more stable training process.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the National Science Foundation of China (Grant No. 61936010\/61876096) and the National Key R&D Program of China (Grant No. 2018YFC0830200). 
We would like to thank THUNUS NExT Joint-Lab for the support.","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"nariyama-2006-pragmatic","url":"https:\/\/aclanthology.org\/W06-3501","title":"Pragmatic information extraction from subject ellipsis in informal English","abstract":"Subject ellipsis is one of the characteristics of informal English. The investigation of subject ellipsis in corpora thus reveals an abundance of pragmatic and extralinguistic information associated with subject ellipsis that enhances natural language understanding. In essence, the presence of subject ellipsis conveys an 'informal' conversation involving 1) an informal 'Topic' as well as familiar\/close 'Participants', 2) specific 'Connotations' that are different from the corresponding full sentences: interruptive (ending discourse coherence), polite, intimate, friendly, and less determinate implicatures. This paper also construes linguistic environments that trigger the use of subject ellipsis and resolve subject ellipsis.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"muller-etal-2022-shot","url":"https:\/\/aclanthology.org\/2022.acl-long.584","title":"Few-Shot Learning with Siamese Networks and Label Tuning","abstract":"We study the problem of building text classifiers with little or no training data, commonly known as zero and few-shot text classification. In recent years, an approach based on neural textual entailment models has been found to give strong results on a diverse range of tasks. In this work, we show that with proper pre-training, Siamese Networks that embed texts and labels offer a competitive alternative. These models allow for a large reduction in inference cost: constant in the number of labels rather than linear. Furthermore, we introduce label tuning, a simple and computationally efficient approach that allows to adapt the models in a few-shot setup by only changing the label embeddings. While giving lower performance than model fine-tuning, this approach has the architectural advantage that a single encoder can be shared by many different tasks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank Francisco Rangel and the entire Symanto Research Team for early discussions, feedback and suggestions. We would also like to thank the anonymous Reviewers. 
The authors gratefully acknowledge the support of the Pro 2 Haters -Proactive Profiling of Hate Speech Spreaders (CDTi IDI-20210776), XAI-DisInfodemics: eXplainable AI for disinformation and conspiracy detection during infodemics (MICIN PLEC2021-007681), and DETEMP -Early Detection of Depression Detection in Social Media (IVACE IMINOD\/2021\/72) R&D grants.","year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"pirinen-2011-modularisation","url":"https:\/\/aclanthology.org\/W11-4644","title":"Modularisation of Finnish Finite-State Language Description -- Towards Wide Collaboration in Open Source Development of a Morphological Analyser","abstract":"In this paper we present an open source implementation for Finnish morphological parser. We shortly evaluate it against contemporary criticism towards monolithic and unmaintainable finite-state language description. We use it to demonstrate way of writing finite-state language description that is used for varying set of projects, that typically need morphological analyser, such as POS tagging, morphological analysis, hyphenation, spell checking and correction, rule-based machine translation and syntactic analysis. The language description is done using available open source methods for building finitestate descriptions coupled with autotoolsstyle build system, which is de facto standard in open source projects.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Donald Killian for pointing us towards the ongoing discussion about shortcomings of finite-state morphologies and the HFST research group, and our colleagues for fruitful discussions.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"durgar-el-kahlout-oflazer-2006-initial","url":"https:\/\/aclanthology.org\/W06-3102","title":"Initial Explorations in English to Turkish Statistical Machine Translation","abstract":"This paper presents some very preliminary results for and problems in developing a statistical machine translation system from English to Turkish. Starting with a baseline word model trained from about 20K aligned sentences, we explore various ways of exploiting morphological structure to improve upon the baseline system. As Turkish is a language with complex agglutinative word structures, we experiment with morphologically segmented and disambiguated versions of the parallel texts in order to also uncover relations between morphemes and function words in one language with morphemes and functions words in the other, in addition to relations between open class content words. Morphological segmentation on the Turkish side also conflates the statistics from allomorphs so that sparseness can be alleviated to a certain extent. We find that this approach coupled with a simple grouping of most frequent morphemes and function words on both sides improve the BLEU score from the baseline of 0.0752 to 0.0913 with the small training data. 
We close with a discussion on why one should not expect distortion parameters to model word-local morpheme ordering and that a new approach to handling complex morphotactics is needed.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by T\u00dcB\u0130TAK (Turkish Scientific and Technological Research Foundation) project 105E020 \"Building a Statistical Machine Translation for Turkish and English\".","year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"nerima-etal-2003-creating","url":"https:\/\/aclanthology.org\/E03-1022","title":"Creating a multilingual collocations dictionary from large text corpora","abstract":"This paper describes a system of terminological extraction capable of handling multi-word expressions, using a powerful syntactic parser. The system includes a concordancing tool enabling the user to display the context of the collocation, i.e. the sentence or the whole document where the collocation occurs. Since the corpora are multilingual, the system also offers an alignment mechanism for the corresponding translated documents.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by Geneva International Academic Network (GIAN), research project \"Linguistic Analysis and Collocation Extraction\", approved in 2001. Thanks to Olivier Pasteur for the invaluable help in this research.","year":2003,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"flati-etal-2014-two","url":"https:\/\/aclanthology.org\/P14-1089","title":"Two Is Bigger (and Better) Than One: the Wikipedia Bitaxonomy Project","abstract":"We present WiBi, an approach to the automatic creation of a bitaxonomy for Wikipedia, that is, an integrated taxonomy of Wikipedia pages and categories. We leverage the information available in either one of the taxonomies to reinforce the creation of the other taxonomy. Our experiments show higher quality and coverage than state-of-the-art resources like DBpedia, YAGO, MENTA, WikiNet and WikiTaxonomy. WiBi is available at http:\/\/wibitaxonomy.org.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors gratefully acknowledge the support of the ERC Starting Grant MultiJEDI No. 259234. We thank Luca Telesca for his implementation of WikiTaxonomy and Jim McManus for his comments on the manuscript.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"liu-etal-2019-knowledge","url":"https:\/\/aclanthology.org\/D19-1187","title":"Knowledge Aware Conversation Generation with Explainable Reasoning over Augmented Graphs","abstract":"Two types of knowledge, triples from knowledge graphs and texts from documents, have been studied for knowledge aware open-domain conversation generation, in which graph paths can narrow down vertex candidates for knowledge selection decision, and texts can provide rich information for response generation. Fusion of a knowledge graph and texts might yield mutually reinforcing advantages, but there is less study on that. 
To address this challenge, we propose a knowledge aware chatting machine with three components: an augmented knowledge graph with both triples and texts, a knowledge selector, and a knowledge aware response generator. For knowledge selection on the graph, we formulate it as a problem of multi-hop graph reasoning to effectively capture conversation flow, which is more explainable and flexible in comparison with previous work. To fully leverage long text information that differentiates our graph from others, we improve a state-of-the-art reasoning algorithm with machine reading comprehension technology. We demonstrate the effectiveness of our system on two datasets in comparison with state-of-the-art models.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank the reviewers for their insightful comments. This work was supported by the Natural Science Foundation of China (No.61533018).","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"sun-etal-2020-helpfulness","url":"https:\/\/aclanthology.org\/2020.coling-main.121","title":"On the Helpfulness of Document Context to Sentence Simplification","abstract":"Most of the research on text simplification is limited to sentence level nowadays. In this paper, we are the first to investigate the helpfulness of document context on sentence simplification and apply it to the sequence-to-sequence model. We firstly construct a sentence simplification dataset in which the contexts for the original sentence are provided by Wikipedia corpus. The new dataset contains approximately 116K sentence pairs with context. We then propose a new model that makes full use of the context information. Our model uses neural networks to learn the different effects of the preceding sentences and the following sentences on the current sentence and applies them to the improved transformer model. Evaluated on the newly constructed dataset, our model achieves 36.52 on SARI value, which outperforms the best performing model in the baselines by 2.46 (7.22%), indicating that context indeed helps improve sentence simplification. In the ablation experiment, we show that using either the preceding sentences or the following sentences as context can significantly improve simplification.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by National Natural Science Foundation of China (61772036), Beijing Academy of Artificial Intelligence (BAAI) and Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We appreciate the anonymous reviewers for their helpful comments. Xiaojun Wan is the corresponding author.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"detrez-ranta-2012-smart","url":"https:\/\/aclanthology.org\/E12-1066","title":"Smart Paradigms and the Predictability and Complexity of Inflectional Morphology","abstract":"Morphological lexica are often implemented on top of morphological paradigms, corresponding to different ways of building the full inflection table of a word. Computationally precise lexica may use hundreds of paradigms, and it can be hard for a lexicographer to choose among them. 
To automate this task, this paper introduces the notion of a smart paradigm. It is a metaparadigm, which inspects the base form and tries to infer which low-level paradigm applies. If the result is uncertain, more forms are given for discrimination. The number of forms needed on average is a measure of predictability of an inflection system. The overall complexity of the system also has to take into account the code size of the paradigms definition itself. This paper evaluates the smart paradigms implemented in the open-source GF Resource Grammar Library. Predictability and complexity are estimated for four different languages: English, French, Swedish, and Finnish. The main result is that predictability does not decrease when the complexity of morphology grows, which means that smart paradigms provide an efficient tool for the manual construction and\/or automatic bootstrapping of lexica.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We are grateful to the anonymous referees for valuable remarks and questions. The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7\/2007-2013) under grant agreement no FP7-ICT-247914 (the MOLTO project).","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"rupp-etal-2008-language","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2008\/pdf\/556_paper.pdf","title":"Language Resources and Chemical Informatics","abstract":"Chemistry research papers are a primary source of information about chemistry, as in any scientific field. The presentation of the data is, predominantly, unstructured information, and so not immediately susceptible to processes developed within chemical informatics for carrying out chemistry research by information processing techniques. At one level, extracting the relevant information from research papers is a text mining task, requiring both extensive language resources and specialised knowledge of the subject domain. However, the papers also encode information about the way the research is conducted and the structure of the field itself. Applying language technology to research papers in chemistry can facilitate eScience on several different levels. The SciBorg project sets out to provide an extensive, analysed corpus of published chemistry research. This relies on the cooperation of several journal publishers to provide papers in an appropriate form. The work is carried out as a collaboration involving the","label_nlp4sg":1,"task":[],"method":[],"goal1":"Industry, Innovation and Infrastructure","goal2":null,"goal3":null,"acknowledgments":"We are very grateful to the Royal Society of Chemistry, Nature Publishing Group and the International Union of Crystallography for supplying papers. 
This work was funded by EPSRC (EP\/C010035\/1) with additional support from Boeing.","year":2008,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":1,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"mikheev-2002-periods","url":"https:\/\/aclanthology.org\/J02-3002","title":"Periods, Capitalized Words, etc.","abstract":"In this article we present an approach for tackling three important aspects of text normalization: sentence boundary disambiguation, disambiguation of capitalized words in positions where capitalization is expected, and identification of abbreviations. As opposed to the two dominant techniques of computing statistics or writing specialized grammars, our document-centered approach works by considering suggestive local contexts and repetitions of individual words within a document. This approach proved to be robust to domain shifts and new lexica and produced performance on the level with the highest reported results. When incorporated into a part-of-speech tagger, it helped reduce the error rate significantly on capitalized words and sentence boundaries. We also investigated the portability to other languages and obtained encouraging results.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work reported in this article was supported in part by grant GR\/L21952 (Text Tokenization Tool) from the Engineering and Physical Sciences Research Council, U.K., and also it benefited from the ongoing efforts in building domain-independent text-processing software at Infogistics Ltd. I am also grateful to one anonymous reviewer who put a lot of effort into making this article as it is now.","year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"zhang-etal-2021-de","url":"https:\/\/aclanthology.org\/2021.acl-long.371","title":"De-biasing Distantly Supervised Named Entity Recognition via Causal Intervention","abstract":"Distant supervision tackles the data bottleneck in NER by automatically generating training instances via dictionary matching. Unfortunately, the learning of DS-NER is severely dictionary-biased, which suffers from spurious correlations and therefore undermines the effectiveness and the robustness of the learned models. In this paper, we fundamentally explain the dictionary bias via a Structural Causal Model (SCM), categorize the bias into intra-dictionary and inter-dictionary biases, and identify their causes. Based on the SCM, we learn de-biased DS-NER via causal interventions. For intra-dictionary bias, we conduct backdoor adjustment to remove the spurious correlations introduced by the dictionary confounder. For inter-dictionary bias, we propose a causal invariance regularizer which will make DS-NER models more robust to the perturbation of dictionaries. Experiments on four datasets and three DS-NER models show that our method can significantly improve the performance of DS-NER.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by the National Natural Science Foundation of China under Grants no.U1936207, Beijing Academy of Artificial Intelligence (BAAI2019QN0502), scientific research projects of the State Language Commission (YW135-78), and in part by the Youth Innovation Promotion Association CAS(2018141). 
Moreover, we thank all reviewers for their valuable comments and suggestions.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"das-etal-2021-emotion","url":"https:\/\/aclanthology.org\/2021.naacl-srw.19","title":"Emotion Classification in a Resource Constrained Language Using Transformer-based Approach","abstract":"Although research on emotion classification has significantly progressed in high-resource languages, it is still in its infancy for resource-constrained languages like Bengali. However, unavailability of necessary language processing tools and deficiency of benchmark corpora make the emotion classification task in Bengali more challenging and complicated. This work proposes a transformer-based technique to classify the Bengali text into one of the six basic emotions: anger, fear, disgust, sadness, joy, and surprise. A Bengali emotion corpus consisting of 6243 texts is developed for the classification task. Experimentation is carried out using various machine learning (LR, RF, MNB, SVM), deep neural networks (CNN, BiLSTM, CNN+BiLSTM) and transformer (Bangla-BERT, m-BERT, XLM-R) based approaches. Experimental outcomes indicate that XLM-R outdoes all other techniques by achieving the highest weighted f1-score of 69.73% on the test data. The dataset is publicly available at https:\/\/github.com\/omar-sharif03\/NAACL-SRW-2021.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We sincerely acknowledge the anonymous reviewers and pre-submission mentor for their insightful suggestions, which helped improve the work. This work was supported by the Directorate of Research & Extension, CUET.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"wu-etal-2021-code","url":"https:\/\/aclanthology.org\/2021.findings-acl.93","title":"Code Summarization with Structure-induced Transformer","abstract":"Code summarization (CS) is becoming a promising area in recent language understanding, which aims to generate sensible human language automatically for programming language in the format of source code, serving the convenience of programmers during development. It is well known that programming languages are highly structured. Thus previous works attempt to apply structure-based traversal (SBT) or non-sequential models like Tree-LSTM and graph neural network (GNN) to learn structural program semantics. However, it is surprising that incorporating SBT into an advanced encoder like Transformer instead of LSTM has shown no performance gain, which leaves GNN as the only remaining means of modeling such necessary structural clues in source code. To remove this inconvenience, we propose the structure-induced Transformer, which encodes sequential code inputs with multi-view structural clues in terms of a newly-proposed structure-induced self-attention mechanism. 
Extensive experiments show that our proposed structure-induced Transformer helps achieve new state-of-the-art results on benchmarks.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"zhang-fung-2007-speech","url":"https:\/\/aclanthology.org\/N07-2054","title":"Speech Summarization Without Lexical Features for Mandarin Broadcast News","abstract":"We present the first known empirical study on speech summarization without lexical features for Mandarin broadcast news. We evaluate acoustic, lexical and structural features as predictors of summary sentences. We find that the summarizer yields good performance at the average F-measure of 0.5646 even by using the combination of acoustic and structural features alone, which are independent of lexical features. In addition, we show that structural features are superior to lexical features and our summarizer performs surprisingly well at the average F-measure of 0.3914 by using only acoustic features. These findings enable us to summarize speech without placing a stringent demand on speech recognition accuracy.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"chen-di-eugenio-2013-multimodality","url":"https:\/\/aclanthology.org\/W13-4031","title":"Multimodality and Dialogue Act Classification in the RoboHelper Project","abstract":"We describe the annotation of a multimodal corpus that includes pointing gestures and haptic actions (force exchanges). Haptic actions are rarely analyzed as full-fledged components of dialogue, but our data shows haptic actions are used to advance the state of the interaction. We report our experiments on recognizing Dialogue Acts in both offline and online modes. Our results show that multimodal features and the dialogue game aid in DA classification.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by award IIS 0905593 from the National Science Foundation. Thanks to the other members of the RoboHelper project, for their many contributions, especially to the data collection effort.","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"nakano-etal-2011-two","url":"https:\/\/aclanthology.org\/W11-2004","title":"A Two-Stage Domain Selection Framework for Extensible Multi-Domain Spoken Dialogue Systems","abstract":"This paper describes a general and effective domain selection framework for multi-domain spoken dialogue systems that employ distributed domain experts. The framework consists of two processes: deciding if the current domain continues and estimating the probabilities for selecting other domains. If the current domain does not continue, the domain with the highest activation probability is selected. Since those processes for each domain expert can be designed independently from other experts and can use a large variety of information, the framework achieves both extensibility and robustness against speech recognition errors. 
The results of an experiment using a corpus of dialogues between humans and a multi-domain dialogue system demonstrate the viability of the proposed framework.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to thank Hiroshi Tsujino, Yuji Hasegawa, and Hiromi Narimatsu for their support for this research.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"jung-shim-2020-dual","url":"https:\/\/aclanthology.org\/2020.coling-main.564","title":"Dual Supervision Framework for Relation Extraction with Distant Supervision and Human Annotation","abstract":"Relation extraction (RE) has been extensively studied due to its importance in real-world applications such as knowledge base construction and question answering. Most of the existing works train the models on either distantly supervised data or human-annotated data. To take advantage of the high accuracy of human annotation and the cheap cost of distant supervision, we propose the dual supervision framework which effectively utilizes both types of data. However, simply combining the two types of data to train a RE model may decrease the prediction accuracy since distant supervision has labeling bias. We employ two separate prediction networks HA-Net and DS-Net to predict the labels by human annotation and distant supervision, respectively, to prevent the degradation of accuracy by the incorrect labeling of distant supervision. Furthermore, we propose an additional loss term called disagreement penalty to enable HA-Net to learn from distantly supervised labels. In addition, we exploit additional networks to adaptively assess the labeling bias by considering contextual information. Our performance study on sentence-level and document-level REs confirms the effectiveness of the dual supervision framework.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by Next-Generation Information Computing Development Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Science, ICT (No. NRF-2017M3C4A7063570) and was also supported by Institute of Information & communications Technology Planning & Evaluation(IITP) grant funded by the Korea government(MSIT) (No. 2020-0-00857, Development of cloud robot intelligence augmentation, sharing and framework technology to integrate and enhance the intelligence of multiple robots). This research was results of a study on the \"HPC Support\" Project, supported by the Ministry of Science and ICT and NIPA.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"zhen-etal-2021-chinese","url":"https:\/\/aclanthology.org\/2021.emnlp-main.796","title":"Chinese Opinion Role Labeling with Corpus Translation: A Pivot Study","abstract":"Opinion Role Labeling (ORL), aiming to identify the key roles of opinion, has received increasing interest. Unlike most of the previous works focusing on the English language, in this paper, we present the first work of Chinese ORL. We construct a Chinese dataset by manually translating and projecting annotations from a standard English MPQA dataset. 
Then, we investigate the effectiveness of cross-lingual transfer methods, including model transfer and corpus translation. We exploit multilingual BERT with Contextual Parameter Generator and Adapter methods to examine the potentials of unsupervised cross-lingual learning, and our experiments and analyses for both bilingual and multilingual transfers establish a foundation for the future research of this task.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank all reviewers for their helpful comments. This work was supported by National Natural Science Foundation of China under grants 62076173 and 61672211.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"su-chang-1988-semantic","url":"https:\/\/aclanthology.org\/C88-2133","title":"Semantic and Syntactic Aspects of Score Function","abstract":"In a Machine Translation System (MTS), the number of possible analyses for a given sentence is largely due to the ambiguous characteristics of the source language.\nIn this paper, a mechanism, called \"Score Function\", is proposed for measuring the \"quality\" of the ambiguous syntax trees such that the one that best fits interpretation by humans is selected.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to express our deepest appreciation to Wen-t%~eh Li and Hsue-Hueh Hsu for their work on the simulations, to the whole linguistic group at BTC R&D center for their work on the database, and Mei-Hui Su for her editing. Special thanks are given to Behavior Tech. Computer Co. for their full financial support of this project.","year":1988,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"cresti-etal-2004-c","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2004\/pdf\/357.pdf","title":"The C-ORAL-ROM CORPUS. A Multilingual Resource of Spontaneous Speech for Romance Languages","abstract":"The C-ORAL-ROM project has delivered a multilingual corpus of spontaneous speech for the main romance languages (Italian, French, Portuguese and Spanish). The collection aims to represent the variety of speech acts performed in everyday language and to enable the description of prosodic and syntactic structures in the four romance languages. Sampling criteria are defined in a corpus design scheme. C-ORAL-ROM adopts two different sampling strategies, one for the formal and one for the informal part: While a set of typical domains of application is selected to document the formal use of language, the informal part documents speech variation using parameters referring to the event's structure (dialogue vs. monologue) and the sociological domain of use (family-private vs public). The four romance corpora are tagged with respect to terminal and non terminal prosodic breaks. Terminal breaks are assumed to be the more relevant cues for the identification of relevant linguistic domains in spontaneous speech (utterances). Relations with other concurrent criteria are discussed. 
The multimedia storage of the C-ORAL-ROM corpus is based on this principle; each textual string ending with a terminal break is aligned, through the Win Pitch speech software, to its acoustic counterpart, generating the database of all utterances.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"dubremetz-nivre-2014-extraction","url":"https:\/\/aclanthology.org\/W14-0812","title":"Extraction of Nominal Multiword Expressions in French","abstract":"Multiword expressions (MWEs) can be extracted automatically from large corpora using association measures, and tools like mwetoolkit allow researchers to generate training data for MWE extraction given a tagged corpus and a lexicon. We use mwetoolkit on a sample of the French Europarl corpus together with the French lexicon Dela, and use Weka to train classifiers for MWE extraction on the generated training data. A manual evaluation shows that the classifiers achieve 60-75% precision and that about half of the MWEs found are novel and not listed in the lexicon. We also investigate the impact of the patterns used to generate the training data and find that this can affect the trade-off between precision and novelty.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"snider-diab-2006-unsupervised","url":"https:\/\/aclanthology.org\/N06-2039","title":"Unsupervised Induction of Modern Standard Arabic Verb Classes","abstract":"We exploit the resources in the Arabic Treebank (ATB) for the novel task of automatically creating lexical semantic verb classes for Modern Standard Arabic (MSA). Verbs are clustered into groups that share semantic elements of meaning as they exhibit similar syntactic behavior. The results of the clustering experiments are compared with a gold standard set of classes, which is approximated by using the noisy English translations provided in the ATB to create Levin-like classes for MSA. The quality of the clusters is found to be sensitive to the inclusion of information about lexical heads of the constituents in the syntactic frames, as well as parameters of the clustering algorithm. The best set of parameters yields an F\u03b2=1 score of 0.501, compared to a random baseline with an F\u03b2=1 score of 0.37.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2006,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"boitet-1989-motivations","url":"https:\/\/aclanthology.org\/1989.mtsummit-1.30","title":"Motivations, aims and architecture of the LIDIA project","abstract":"At the first Machine Translation Summit in Hakone, 2 years ago, I had been asked to present the research directions envisaged at GETA (Groupe d'Etude pour la Traduction Automatique). At that time, we were just emerging from a 3-year effort of technological transfer (CALLIOPE), and considering many directions for future work. 
Very soon afterwards came the time to choose between all open possibilities.\nBesides 3 main research themes (\"static\" grammars, lexical databases and software problems linked with multilinguality), we have recently embarked on the LIDIA project to crystallize the efforts of the team. It may be interesting here to explain briefly the motivations, the aims, and the overall architecture of this project.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1989,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"chia-etal-2022-come","url":"https:\/\/aclanthology.org\/2022.ecnlp-1.22","title":"``Does it come in black?'' CLIP-like models are zero-shot recommenders","abstract":"Product discovery is a crucial component for online shopping. However, item-to-item recommendations today do not allow users to explore changes along selected dimensions: given a query item, can a model suggest something similar but in a different color? We consider item recommendations of a comparative nature (e.g. \"something darker\") and show how CLIP-based models can support this use case in a zero-shot manner. Leveraging a large model built for fashion, we introduce GradREC and its industry potential, and offer a first rounded assessment of its strength and weaknesses. * GradRECS started as a (failed) experiment by JT; PC actually made it work, and he is the lead researcher on the project. FB, CG and DC all contributed to the paper, providing support for modelling, industry context and domain knowledge. PC and JT are the corresponding authors.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"wang-hirschberg-1991-predicting","url":"https:\/\/aclanthology.org\/H91-1074","title":"Predicting Intonational Boundaries Automatically from Text: The ATIS Domain","abstract":"Relating the intonational characteristics of an utterance to other features inferable from its text is important both for speech recognition and for speech synthesis. This work investigates techniques for predicting the location of intonational phrase boundaries in natural speech, through analyzing utterances from the DARPA Air Travel Information Service database. For statistical modeling, we employ Classification and Regression Tree (CART) techniques. We achieve success rates of just over 90%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1991,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"ilichev-etal-2021-multiple","url":"https:\/\/aclanthology.org\/2021.ranlp-1.68","title":"Multiple Teacher Distillation for Robust and Greener Models","abstract":"Language models are nowadays at the center of natural language processing progress. These models are mostly of significant size. There are successful attempts to reduce them, but at least some of these attempts rely on randomness. We propose a novel distillation procedure leveraging multiple teachers, which alleviates random seed dependency and makes the models more robust.
We show that this procedure applied to the TinyBERT and DistilBERT models improves their worst-case results by up to 2% while keeping almost the same best-case ones. The latter fact remains true under a constraint on computational time, which is important to lessen the carbon footprint. In addition, we present the results of an application of the proposed procedure to a computer vision model, ResNet, which shows that the statement remains true in this totally different domain.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Responsible Consumption and Production","goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":1,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"lee-etal-2020-discrepancy","url":"https:\/\/aclanthology.org\/2020.spnlp-1.10","title":"On the Discrepancy between Density Estimation and Sequence Generation","abstract":"Many sequence-to-sequence generation tasks, including machine translation and text-to-speech, can be posed as estimating the density of the output y given the input x: p(y|x). Given this interpretation, it is natural to evaluate sequence-to-sequence models using conditional log-likelihood on a test set. However, the goal of sequence-to-sequence generation (or structured prediction) is to find the best output \u0177 given an input x, and each task has its own downstream metric R that scores a model output by comparing against a set of references y*: R(\u0177, y*|x). While we hope that a model that excels in density estimation also performs well on the downstream metric, the exact correlation has not been studied for sequence generation tasks. In this paper, by comparing several density estimators on five machine translation tasks, we find that the correlation between rankings of models based on log-likelihood and BLEU varies significantly depending on the range of the model families being compared. First, log-likelihood is highly correlated with BLEU when we consider models within the same family (e.g. autoregressive models, or latent variable models with the same parameterization of the prior). However, we observe no correlation between rankings of models across different families: (1) among non-autoregressive latent variable models, a flexible prior distribution is better at density estimation but gives worse generation quality than a simple prior, and (2) autoregressive models offer the best translation performance overall, while latent variable models with a normalizing flow prior give the highest held-out log-likelihood across all datasets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank our colleagues at the Google Translate and Brain teams, particularly Durk Kingma, Yu Zhang, Yuan Cao and Julia Kreutzer for their feedback on the draft. JL thanks Chunting Zhou, Manoj Kumar and William Chan for helpful discussions. KC is supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: from pattern recognition to AI), Samsung Research (Improving Deep Learning using Latent Structure) and NSF Award 1922658 NRT-HDR: FUTURE Foundations, Translation, and Responsibility for Data Science.
KC thanks CIFAR, eBay, Naver and NVIDIA for their support.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"huang-xiang-2010-feature","url":"https:\/\/aclanthology.org\/C10-1056","title":"Feature-Rich Discriminative Phrase Rescoring for SMT","abstract":"This paper proposes a new approach to phrase rescoring for statistical machine translation (SMT). A set of novel features capturing the translingual equivalence between a source and a target phrase pair are introduced. These features are combined with a linear regression model and a neural network to predict the quality score of the phrase translation pair. These phrase scores are used to discriminatively rescore the baseline MT system's phrase library: boosting good phrase translations while pruning bad ones. This approach not only significantly improves machine translation quality, but also reduces the model size by a considerable margin.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"yano-etal-2010-shedding","url":"https:\/\/aclanthology.org\/W10-0723","title":"Shedding (a Thousand Points of) Light on Biased Language","abstract":"This paper considers the linguistic indicators of bias in political text. We used Amazon Mechanical Turk judgments about sentences from American political blogs, asking annotators to indicate whether a sentence showed bias, and if so, in which political direction and through which word tokens. We also asked annotators questions about their own political views. We conducted a preliminary analysis of the data, exploring how different groups perceive bias in different blogs, and showing some lexical indicators strongly associated with perceived bias.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"The authors acknowledge research support from HP Labs, help with data from Jacob Eisenstein, and helpful comments from the reviewers, Olivia Buzek, Michael Heilman, and Brendan O'Connor.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} +{"ID":"barnes-etal-2019-lexicon","url":"https:\/\/aclanthology.org\/W19-6119","title":"Lexicon information in neural sentiment analysis: a multi-task learning approach","abstract":"This paper explores the use of multi-task learning (MTL) for incorporating external knowledge in neural models. Specifically, we show how MTL can enable a BiLSTM sentiment classifier to incorporate information from sentiment lexicons. Our MTL setup is shown to improve model performance (compared to a single-task setup) on both English and Norwegian sentence-level sentiment datasets.
The paper also introduces a new sentiment lexicon for Norwegian.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been carried out as part of the SANT project (Sentiment Analysis for Norwegian Text), funded by the Research Council of Norway (grant number 270908).","year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"walker-etal-1997-paradise","url":"https:\/\/aclanthology.org\/P97-1035","title":"PARADISE: A Framework for Evaluating Spoken Dialogue Agents","abstract":"This paper presents PARADISE (PARAdigm for Dialogue System Evaluation), a general framework for evaluating spoken dialogue agents. The framework decouples task requirements from an agent's dialogue behaviors, supports comparisons among dialogue strategies, enables the calculation of performance over subdialogues and whole dialogues, specifies the relative contribution of various factors to performance, and makes it possible to compare agents performing different tasks by normalizing for task complexity.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We would like to thank James Allen, Jennifer Chu-Carroll, Morena Danieli, Wieland Eckert, Giuseppe Di Fabbrizio, Don Hindle, Julia Hirschberg, Shri Narayanan, Jay Wilpon, Steve Whittaker and three anonymous reviews for helpful discussion and comments on earlier versions of this paper.","year":1997,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"hill-korhonen-2014-concreteness","url":"https:\/\/aclanthology.org\/P14-2118","title":"Concreteness and Subjectivity as Dimensions of Lexical Meaning","abstract":"We quantify the lexical subjectivity of adjectives using a corpus-based method, and show for the first time that it correlates with noun concreteness in large corpora. These cognitive dimensions together influence how word meanings combine, and we exploit this fact to achieve performance improvements on the semantic classification of adjective-noun pairs.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors are supported by St John's College, Cambridge and The Royal Society.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"dutta-etal-2020-uds","url":"https:\/\/aclanthology.org\/2020.wmt-1.129","title":"UdS-DFKI@WMT20: Unsupervised MT and Very Low Resource Supervised MT for German-Upper Sorbian","abstract":"This paper describes the UdS-DFKI submission to the shared task for unsupervised machine translation (MT) and very low-resource supervised MT between German (de) and Upper Sorbian (hsb) at the Fifth Conference of Machine Translation (WMT20). We submit systems for both the supervised and unsupervised tracks. 
Apart from various experimental approaches like bitext mining, model pretraining, and iterative back-translation, we employ a factored machine translation approach on a small BPE vocabulary.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors thank the German Research Center for Artificial Intelligence (DFKI GmbH) for pro-","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"zhang-etal-2012-automatically","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/244_Paper.pdf","title":"Automatically Extracting Procedural Knowledge from Instructional Texts using Natural Language Processing","abstract":"Procedural knowledge is the knowledge required to perform certain tasks, and forms an important part of expertise. A major source of procedural knowledge is natural language instructions. While these readable instructions have been useful learning resources for humans, they are not interpretable by machines. Automatically acquiring procedural knowledge in machine interpretable formats from instructions has become an increasingly popular research topic due to its potential applications in process automation. However, it has been insufficiently addressed. This paper presents an approach and an implemented system to assist users to automatically acquire procedural knowledge in structured forms from instructions. We introduce a generic semantic representation of procedures for analysing instructions, using which natural language techniques are applied to automatically extract structured procedures from instructions. The method is evaluated in three domains to justify the generality of the proposed semantic representation as well as the effectiveness of the implemented automatic system.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"rosa-etal-2017-slavic","url":"https:\/\/aclanthology.org\/W17-1226","title":"Slavic Forest, Norwegian Wood","abstract":"D We once had a corp, or should we say, C it once had D us D They showed us its tags, isn't it great, C unified D tags Dmi They asked us to parse and they told us to use G everything Dmi So we looked around and we noticed there was near Em nothing AA7 We took other langs, bitext aligned: words one-to-one We played for two weeks, and then they said, here is the test The parser kept training till morning, just until deadline So we had to wait and hope what we get would be just fine And, when we awoke, the results were done, we saw we'd won So, we wrote this paper, isn't it good, Norwegian wood.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work was supported by the grant 15-10472S of the Czech Science Foundation, SVV grant of Charles University, and by the EU project H2020-ICT-2014-1-644402.
This work has been using language resources and tools developed, stored and distributed by the LINDAT\/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2015071).","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"gregory-altun-2004-using","url":"https:\/\/aclanthology.org\/P04-1086","title":"Using Conditional Random Fields to Predict Pitch Accents in Conversational Speech","abstract":"The detection of prosodic characteristics is an important aspect of both speech synthesis and speech recognition. Correct placement of pitch accents aids in more natural sounding speech, while automatic detection of accents can contribute to better word-level recognition and better textual understanding. In this paper we investigate probabilistic, contextual, and phonological factors that influence pitch accent placement in natural, conversational speech in a sequence labeling setting. We introduce Conditional Random Fields (CRFs) to the pitch accent prediction task in order to incorporate these factors efficiently in a sequence model. We demonstrate the usefulness and the incremental effect of these factors in a sequence model by performing experiments on hand-labeled data from the Switchboard Corpus. Our model outperforms the baseline and previous models of pitch accent prediction on the Switchboard Corpus.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was partially funded by CAREER award #IIS 9733067 IGERT. We would also like to thank Mark Johnson for the idea of this project, Dan Jurafsky, Alan Bell, Cynthia Girand, and Jason Brenier for their helpful comments and help with the database.","year":2004,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"arnold-etal-2017-counterfactual","url":"https:\/\/aclanthology.org\/I17-2009","title":"Counterfactual Language Model Adaptation for Suggesting Phrases","abstract":"Mobile devices use language models to suggest words and phrases for use in text entry. Traditional language models are based on contextual word frequency in a static corpus of text. However, certain types of phrases, when offered to writers as suggestions, may be systematically chosen more often than their frequency would predict. In this paper, we propose the task of generating suggestions that writers accept, a task related to, but distinct from, making accurate predictions. Although this task is fundamentally interactive, we propose a counterfactual setting that permits offline training and evaluation. We find that even a simple language model can capture text characteristics that improve acceptability.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Kai-Wei Chang was supported in part by National Science Foundation Grant IIS-1657193. Part of the work was done while Kai-Wei Chang and Kenneth C.
Arnold visited Microsoft Research, Cambridge.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"sutcliffe-kurohashi-2000-parallel","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2000\/pdf\/248.pdf","title":"A Parallel English-Japanese Query Collection for the Evaluation of On-Line Help Systems","abstract":"An experiment concerning the creation of parallel evaluation data for information retrieval is presented. A set of English queries was gathered for the domain of word processing using Lotus Ami Pro. A set of Japanese queries was then created from these. The answers to the queries were elicited from eight respondents comprising four native speakers of each language. We first describe how the queries were created and the answers elicited. We then present analyses of the responses in each language. The results show a lower level of agreement between respondents than was expected. We discuss a refinement of the elicitation process which is designed to address this problem as well as measuring the integrity of individual respondents.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2000,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"bernard-danlos-2016-modelling","url":"https:\/\/aclanthology.org\/W16-3304","title":"Modelling Discourse in STAG: Subordinate Conjunctions and Attributing Phrases","abstract":"We propose a new model in STAG syntax and semantics for subordinate conjunctions (SubConjs) and attributing phrases: attitude\/reporting verbs (AVs; believe, say) and attributing prepositional phrases (APPs; according to). This model is discourse-oriented, and is based on the observation that SubConjs and AVs are not homogeneous categories. Indeed, previous work has shown that SubConjs can be divided into two classes according to their syntactic and semantic properties. Similarly, AVs have two different uses in discourse: evidential and intentional. While evidential AVs and APPs have strong semantic similarities, they do not appear in the same contexts when SubConjs are at play. Our proposition aims at representing these distinctions and capturing these various discourse-related interactions.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"chakraborty-etal-2011-semantic","url":"https:\/\/aclanthology.org\/W11-0803","title":"Semantic Clustering: an Attempt to Identify Multiword Expressions in Bengali","abstract":"One of the key issues in both natural language understanding and generation is the appropriate processing of Multiword Expressions (MWEs). MWE can be defined as a semantic issue of a phrase where the meaning of the phrase may not be obtained from its constituents in a straightforward manner. This paper presents an approach to identifying bigram noun-noun MWEs from a medium-size Bengali corpus by clustering the semantically related nouns and incorporating a vector space model for similarity measurement. Additional inclusion of the English WordNet::Similarity module also improves the results considerably.
The present approach also contributes to locating clusters of synonymous nouns present in a document. Experimental results, analyzed in terms of the Precision, Recall and F-score values, support a satisfactory conclusion.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work reported in this paper is supported by a grant from the \"Indian Language to Indian Language Machine Translation (IL-ILMT) System Phase II\", funded by Department of Information and Technology (DIT), Govt. of India.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"nakatani-1991-resolving","url":"https:\/\/aclanthology.org\/P91-1053","title":"Resolving a Pragmatic Prepositional Phrase Attachment Ambiguity","abstract":"To resolve or not to resolve, that is the structural ambiguity dilemma. The traditional wisdom is to disambiguate only when it matters in terms of the meaning of the utterance, and to do so using the computationally least costly information. NLP work on PP-attachment has followed this wisdom, and much effort has been focused on formulating structural and lexical strategies for resolving noun-phrase and verb-phrase (NP-PP vs. VP-PP) attachment ambiguity (e.g. [8, 11]). In one study, statistical analysis of the distribution of lexical items in a very large text yielded 78% correct parses while two humans achieved just 85% [5]. The close performance of machine and human led the authors to pose two issues that will be addressed in this paper: is the predictive power of distributional data due to \"a complementation relation, a modification relation, or something else\", and what characterizes the attachments that escape prediction?","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The author thanks Barbara Grosz and Julia Hirschberg, who both advised this research, for valuable comments and guidance; and acknowledges current support from a National Science Foundation Graduate Fellowship. This paper stems from research carried out at Harvard University and at AT&T Bell Laboratories.","year":1991,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"jabrayilzade-tekir-2020-lgpsolver","url":"https:\/\/aclanthology.org\/2020.findings-emnlp.100","title":"LGPSolver - Solving Logic Grid Puzzles Automatically","abstract":"Logic grid puzzle (LGP) is a type of word problem where the task is to solve a problem in logic. Constraints for the problem are given in the form of textual clues. Once these clues are transformed into formal logic, a deductive reasoning process provides the solution. Solving logic grid puzzles in a fully automatic manner has been a challenge since a precise understanding of clues is necessary to develop the corresponding formal logic representation. To meet this challenge, we propose a solution that uses a DistilBERT-based classifier to classify a clue into one of the predefined predicate types for logic grid puzzles. Another novelty of the proposed solution is the recognition of comparison structures in clues.
By collecting comparative adjectives from existing dictionaries and utilizing a semantic framework to catch comparative quantifiers, the semantics of clues concerning comparison structures are better understood, ensuring conversion to a correct logic representation. Our approach solves logic grid puzzles in a fully automated manner with 100% accuracy on the given puzzle datasets and outperforms state-of-the-art solutions by a large margin.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Tugkan Tuglular for his helpful suggestions on an earlier version of this paper. We also thank anonymous reviewers for their valuable comments.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"cao-etal-2020-balanced","url":"https:\/\/aclanthology.org\/2020.coling-main.432","title":"Balanced Joint Adversarial Training for Robust Intent Detection and Slot Filling","abstract":"Joint intent detection and slot filling has recently achieved tremendous success in advancing the performance of utterance understanding. However, many joint models still suffer from the robustness problem, especially on noisy inputs or rare\/unseen events. To address this issue, we propose a Joint Adversarial Training (JAT) model to improve the robustness of joint intent detection and slot filling, which consists of two parts: (1) automatically generating joint adversarial examples to attack the joint model, and (2) training the model to defend against the joint adversarial examples so as to robustify the model on small perturbations. As the generated joint adversarial examples have different impacts on the intent detection and slot filling loss, we further propose a Balanced Joint Adversarial Training (BJAT) model that applies a balance factor as a regularization term to the final loss function, which yields a stable training procedure. Extensive experiments and analyses on the lightweight models show that our proposed methods achieve significantly higher scores and substantially improve the robustness of both intent detection and slot filling. In addition, the combination of our BJAT with BERT-large achieves state-of-the-art results on two datasets.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work is supported by the National Key R&D Program of China (2019YFB1406302), National Natural Science Foundation of China (No. 61502033, 61472034, 61772071, 61272361 and 61672098) and the Fundamental Research Funds for the Central Universities.","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"tannier-moriceau-2010-fidji","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2010\/pdf\/68_Paper.pdf","title":"FIDJI: Web Question-Answering at Quaero 2009","abstract":"This paper presents the participation of the FIDJI system in the Web Question-Answering evaluation campaign organized by Quaero in 2009. FIDJI is an open-domain question-answering system which combines syntactic information with traditional QA techniques such as named entity recognition and term weighting in order to validate answers through multiple documents. It was originally designed to process \"clean\" document collections.
Overall results are significantly lower than in traditional campaigns, but results (for the French evaluation) are quite good compared to other state-of-the-art systems. They show that a syntax-based strategy, applied on uncleaned Web data, can still obtain good results. Moreover, we obtain much higher scores on \"complex\" questions, i.e. 'how' and 'why' questions, which are more representative of real user needs. These results show that questioning the Web with advanced linguistic techniques can be done without heavy pre-processing and with results that come near to the best systems that use strong resources and large structured indexes.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work has been partially financed by OSEO under the Quaero program.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"wu-hsieh-2010-pycwn","url":"https:\/\/aclanthology.org\/C10-3002","title":"PyCWN: a Python Module for Chinese Wordnet","abstract":"This presentation introduces a Python module (PyCWN) for accessing and processing Chinese lexical resources. In particular, our focus is put on the Chinese Wordnet (CWN) that has been developed and released by the CWN group at Academia Sinica. PyCWN provides the access to Chinese Wordnet (sense and relation data) under the Python environment. The presentation further demonstrates how this module applies to a variety of lexical processing tasks as well as the potential for multilingual lexical processing.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"nakamura-2007-two","url":"https:\/\/aclanthology.org\/Y07-1035","title":"Two Types of Complex Predicate Formation: Japanese Passive and Potential Verbs","abstract":"This paper deals with the complex verb formation of passive and potential predicates and syntactic structures projected by these verbs. Though both predicates are formed with the suffix -rare, which has been assumed to originate from the same stem, they show significantly different syntactic behaviors. We propose two kinds of concatenation of base verbs and auxiliaries; passive verbs are lexically formed with the most restrictive mode of combination, while potential verbs are formed syntactically via more flexible combinatory operations of function composition. The difference in the mode of complex verb formation has significant consequences for their syntactic structures and semantic interpretations, including different combination with the honorific morphemes and subjectivization of arguments\/adjuncts of base verbs.
We also consider the case alternation phenomena and their implications for scope construals found in potential sentences, which can be accounted for in a unified manner in terms of the optional application of function composition.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"sigurd-gawronska-1994-modals","url":"https:\/\/aclanthology.org\/C94-1018","title":"Modals as a Problem for MT","abstract":"The paper demonstrates the problem of translating modal verbs and phrases and shows how some of these problems can be overcome by choosing semantic representations which look like representations of passive verbs. These semantic representations suit alternative ways of expressing modality by e.g. passive constructions, adverbs and impersonal constructions in the target language. Various restructuring rules for English, Swedish and Russian are presented.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1994,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"beckley-2015-bekli","url":"https:\/\/aclanthology.org\/W15-4312","title":"Bekli:A Simple Approach to Twitter Text Normalization.","abstract":"Every day, Twitter users generate vast quantities of potentially useful information in the form of written language. Due to Twitter's frequently informal tone, text normalization can be a crucial element for exploiting that information. This paper outlines our approach to text normalization used in the WNUT shared task. We show that a very simple solution, powered by a modestly sized, partially curated wordlist, combined with a modest reranking scheme, can deliver respectable results.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2015,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"chalkidis-etal-2021-paragraph","url":"https:\/\/aclanthology.org\/2021.naacl-main.22","title":"Paragraph-level Rationale Extraction through Regularization: A case study on European Court of Human Rights Cases","abstract":"Interpretability or explainability is an emerging research field in NLP. From a user-centric point of view, the goal is to build models that provide proper justification for their decisions, similar to those of humans, by requiring the models to satisfy additional constraints. To this end, we introduce a new application on legal text where, contrary to mainstream literature targeting word-level rationales, we conceive rationales as selected paragraphs in multi-paragraph structured court cases. We also release a new dataset comprising European Court of Human Rights cases, including annotations for paragraph-level rationales. We use this dataset to study the effect of already proposed rationale constraints, i.e., sparsity, continuity, and comprehensiveness, formulated as regularizers. Our findings indicate that some of these constraints are not beneficial in paragraph-level rationale extraction, while others need re-formulation to better handle the multi-label nature of the task we consider.
We also introduce a new constraint, singularity, which further improves the quality of rationales, even compared with noisy rationale supervision. Experimental results indicate that the newly introduced task is very challenging and there is a large scope for further research.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"We would like to thank the anonymous reviewers (esp. reviewer #2) for their constructive detailed comments. Nikolaos Aletras is supported by EPSRC grant EP\/V055712\/1, part of the European Commission CHIST-ERA programme, call 2019 XAI: Explainable Machine Learning-based Artificial Intelligence.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} +{"ID":"benko-2016-two","url":"https:\/\/aclanthology.org\/L16-1672","title":"Two Years of Aranea: Increasing Counts and Tuning the Pipeline","abstract":"The Aranea Project is targeted at the creation of a family of Gigaword web-corpora for a dozen languages that could be used for teaching language- and linguistics-related subjects at Slovak universities, as well as for research purposes in various areas of linguistics. All corpora are being built according to a standard methodology and using the same set of tools for processing and annotation, which, together with their standard size, makes them also a valuable resource for translators and contrastive studies. All our corpora are freely available either via a web interface or in a source form in an annotated vertical format.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research has been, in part, funded by the VEGA Grant Agency (Grant Number 2\/0015\/14).","year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"cattoni-etal-2002-adam","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2002\/pdf\/237.pdf","title":"ADAM: The SI-TAL Corpus of Annotated Dialogues","abstract":"In this paper we describe the methodological assumptions, general architectural framework and annotation and encoding practices underlying the ADAM Corpus, which has been developed as part of the Italian national project SI-TAL. Each of the 450 dialogues is represented by an orthographic transcription and is annotated at five levels of linguistic information, namely prosody, POS tagging, syntax, semantics, and pragmatics. A coherent, unitary approach to design and application of annotation schemes was pursued across all annotation levels. Particular attention was paid to developing the schemes in order to be consistent with criteria of robustness, wide coverage and compliance with existing standards. The evaluation of the annotation revealed a high degree of both inter-annotator agreement and annotation accuracy, with very promising results concerning the usability of the annotation schemes proposed and the accuracy of the annotation applied to the corpus.
The ADAM Corpus also represents an interesting experiment at the architectural design level, as the way in which the annotation is organized and structured, as well as represented in a given physical format, aims at maximizing further reusability of the annotated material in terms of wide circulability of the corpus across different annotation practices and research purposes.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2002,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"stubbs-2011-mae","url":"https:\/\/aclanthology.org\/W11-0416","title":"MAE and MAI: Lightweight Annotation and Adjudication Tools","abstract":"MAE and MAI are lightweight annotation and adjudication tools for corpus creation. DTDs are used to define the annotation tags and attributes, including extent tags, link tags, and non-consuming tags. Both programs are written in Java and use a stand-alone SQLite database for storage and retrieval of annotation data. Output is in stand-off XML.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Funding for this project development was provided by NIH grant NIHR21LM009633-02, PI: James Pustejovsky. Many thanks to the annotators who helped me identify bugs in the software, particularly Cornelia Parkes, Cheryl Keenan, BJ Harshfield, and all the students in the Brandeis University Spring 2011 Computer Science 216 class.","year":2011,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"li-etal-2012-separately","url":"https:\/\/aclanthology.org\/C12-1103","title":"A Separately Passive-Aggressive Training Algorithm for Joint POS Tagging and Dependency Parsing","abstract":"Recent studies show that parsing accuracy can be largely improved by the joint optimization of part-of-speech (POS) tagging and dependency parsing. However, the POS tagging task does not benefit much from the joint framework. We argue that the fundamental reason is that the POS features are overwhelmed by the syntactic features during the joint optimization, and the joint models only prefer POS tags that are favourable solely from the parsing viewpoint. To solve this issue, we propose a separately passive-aggressive learning algorithm (SPA), which is designed to separately update the POS feature weights and the syntactic feature weights under the joint optimization framework. The proposed SPA is able not only to take advantage of previous joint optimization strategies to significantly improve the parsing accuracy, but also to overcome their shortcomings and significantly boost the tagging accuracy by effectively solving the syntax-insensitive POS ambiguity issues. Experiments on the Chinese Penn Treebank 5.1 (CTB5) and the English Penn Treebank (PTB) demonstrate the effectiveness of our proposed methodology and empirically verify our observations as discussed above.
We achieve the best tagging and parsing accuracies on both datasets, 94.60% in tagging accuracy and 81.67% in parsing accuracy on CTB5, and 97.62% and 93.52% on PTB.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank Meishan Zhang for suggesting the easier way to incorporate the POS features during joint decoding, and the anonymous reviewers for their valuable comments which led to better understanding of the proposed SPA. This work was supported by National Natural Science Foundation of China (NSFC) via grant 61133012, the National \"863\" Major Projects via grant 2011AA01A207, and the National \"863\" Leading Technology Research Project via grant 2012AA011102.","year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"dyvik-etal-2016-norgrambank","url":"https:\/\/aclanthology.org\/L16-1565","title":"NorGramBank: A `Deep' Treebank for Norwegian","abstract":"We present NorGramBank, a treebank for Norwegian with highly detailed LFG analyses. It is one of many treebanks made available through the INESS treebanking infrastructure. NorGramBank was constructed as a parsebank, i.e. by automatically parsing a corpus, using the wide coverage grammar NorGram. One part consisting of 350,000 words has been manually disambiguated using computer-generated discriminants. A larger part of 50 M words has been stochastically disambiguated. The treebank is dynamic: by global reparsing at certain intervals it is kept compatible with the latest versions of the grammar and the lexicon, which are continually further developed in interaction with the annotators. A powerful query language, INESS Search, has been developed for search across formalisms in the INESS treebanks, including LFG c- and f-structures. Evaluation shows that the grammar provides about 85% of randomly selected sentences with good analyses. Agreement among the annotators responsible for manual disambiguation is satisfactory, but also suggests desirable simplifications of the grammar.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2016,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"sap-etal-2020-commonsense","url":"https:\/\/aclanthology.org\/2020.acl-tutorials.7","title":"Commonsense Reasoning for Natural Language Processing","abstract":"Commonsense knowledge, such as knowing that \"bumping into people annoys them\" or \"rain makes the road slippery\", helps humans navigate everyday situations seamlessly (Apperly, 2010). Yet, endowing machines with such human-like commonsense reasoning capabilities has remained an elusive goal of artificial intelligence research for decades (Gunning, 2018).\nCommonsense knowledge and reasoning have received renewed attention from the natural language processing (NLP) community in recent years, yielding multiple exploratory research directions into automated commonsense understanding. Recent efforts to acquire and represent common knowledge resulted in large knowledge graphs, acquired through extractive methods (Speer et al., 2017) or crowdsourcing (Sap et al., 2019a).
Simultaneously, a large body of work in integrating reasoning capabilities into downstream tasks has emerged, allowing the development of smarter dialogue and question answering agents.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"murata-etal-2001-using","url":"https:\/\/aclanthology.org\/W01-1415","title":"Using a Support-Vector Machine for Japanese-to-English Translation of Tense, Aspect, and Modality","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2001,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"crookston-1990-e","url":"https:\/\/aclanthology.org\/C90-2012","title":"The E-Framework: Emerging Problems","abstract":"Bech & Nygaard (1988) have described a formalism for NLP, the E-Framework (EFW). Two kinds of problem are emerging. Formally, there are problems with a complete formalisation of certain details of the EFW, but these will not be examined in this paper. Substantively, the question arises as to what mileage there is in this formalism for the MT problem. Possibly this question arises about any new NLP formalism, but Raw et al (1988) describe the EFW in an MT context. The EFW arose in reaction to the CAT formalism for MT (Arnold & des Tombe (1987), Arnold et al (1986)). This was a sequential stratificational formalism in which each level of representation was policed by its own grammar. The essentials of this process can be diagrammed: [diagram: Grammar_i generates Repn_i, Grammar_j generates Repn_j, and a t-grammar maps Repn_i to Repn_j] *This research has been carried out within the British Group of the EUROTRA project, jointly funded by the Commission of the European Communities and the United Kingdom's Department of Trade and Industry. I am grateful for suggestions and comments from Doug Arnold, Lee Humphreys, Louisa Sadler, Andrew Way, and a COLING reviewer.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1990,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"shlain-etal-2020-syntactic","url":"https:\/\/aclanthology.org\/2020.acl-demos.3","title":"Syntactic Search by Example","abstract":"We present a system that allows a user to search a large linguistically annotated corpus using syntactic patterns over dependency graphs. In contrast to previous attempts to this effect, we introduce a lightweight query language that does not require the user to know the details of the underlying syntactic representations, and instead to query the corpus by providing an example sentence coupled with simple markup. Search is performed at an interactive speed due to an efficient linguistic graph-indexing and retrieval engine. This allows for rapid exploration, development and refinement of syntax-based queries. We demonstrate the system using queries over two corpora: the English wikipedia, and a collection of English pubmed abstracts.
A demo of the wikipedia system is available at: https:\/\/allenai.github.io\/spike\/.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"We thank the team at LUM.ai and the University of Arizona, in particular Mihai Surdeanu, Marco Valenzuela-Esc\u00e1rcega, Gus Hahn-Powell and Dane Bell, for fruitful discussion and their work on the Odinson system. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT).","year":2020,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"honovich-etal-2022-true","url":"https:\/\/aclanthology.org\/2022.dialdoc-1.19","title":"TRUE: Re-evaluating Factual Consistency Evaluation","abstract":"Grounded text generation systems often generate text that contains factual inconsistencies, hindering their real-world applicability. Automatic factual consistency evaluation may help alleviate this limitation by accelerating evaluation cycles, filtering inconsistent outputs and augmenting training data. While attracting increasing attention, such evaluation metrics are usually developed and evaluated in silo for a single task or dataset, slowing their adoption. Moreover, previous meta-evaluation protocols focused on system-level correlations with human annotations, which leave the example-level accuracy of such metrics unclear. In this work, we introduce TRUE: a comprehensive study of factual consistency metrics on a standardized collection of existing texts from diverse tasks, manually annotated for factual consistency. Our standardization enables an example-level meta-evaluation protocol that is more actionable and interpretable than previously reported correlations, yielding clearer quality measures. Across diverse state-of-the-art metrics and 11 datasets we find that large-scale NLI and question generation-and-answering-based approaches achieve strong and complementary results. We recommend those methods as a starting point for model and metric developers, and hope TRUE will foster progress towards even better methods. (* Work done during an internship at Google Research. Our code will be made publicly available.) Example (Summarization; Wang et al., 2020): Input: Phyllis schlafly, a leading figure in the us conservative movement, has died at her home in missouri, aged 92... Summary: Us conservative activist phyllis schlafly has died at the age of 87.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} +{"ID":"lefever-hoste-2010-semeval","url":"https:\/\/aclanthology.org\/S10-1003","title":"SemEval-2010 Task 3: Cross-Lingual Word Sense Disambiguation","abstract":"The goal of this task is to evaluate the feasibility of multilingual WSD on a newly developed multilingual lexical sample data set. Participants were asked to automatically determine the contextually appropriate translation of a given English noun in five languages, viz. Dutch, German, Italian, Spanish and French.
This paper reports on the sixteen submissions from the five different participating teams.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"liu-sarkar-2007-experimental","url":"https:\/\/aclanthology.org\/D07-1062","title":"Experimental Evaluation of LTAG-Based Features for Semantic Role Labeling","abstract":"This paper proposes the use of Lexicalized Tree-Adjoining Grammar (LTAG) formalism as an important additional source of features for the Semantic Role Labeling (SRL) task. Using a set of one-vs-all Support Vector Machines (SVMs), we evaluate these LTAG-based features. Our experiments show that LTAG-based features can improve SRL accuracy significantly. When compared with the best known set of features that are used in state of the art SRL systems we obtain an improvement in F-score from 82.34% to 85.25%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This research was partially supported by NSERC, Canada (RGPIN: 264905). We would like to thank Aravind Joshi, Libin Shen, and the anonymous reviewers for their comments.","year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"jalali-farahani-ghassem-sani-2021-bert","url":"https:\/\/aclanthology.org\/2021.ranlp-1.73","title":"BERT-PersNER: A New Model for Persian Named Entity Recognition","abstract":"Named entity recognition (NER) is one of the major tasks in natural language processing. A named entity is often a word or expression that bears a valuable piece of information, which can be effectively employed by some major NLP tasks such as machine translation, question answering, and text summarization. In this paper, we introduce a new model called BERT-PersNER (BERT based Persian Named Entity Recognizer), in which we have applied transfer learning and active learning approaches to NER in Persian, which is regarded as a low-resource language. Like many others, we have used Conditional Random Field for tag decoding in our proposed architecture. BERT-PersNER has outperformed two available studies in Persian NER, in most cases of our experiments using the supervised learning approach on two Persian datasets called Arman and Peyma. Besides, as the very first effort to try active learning in the Persian NER, using only 30% of Arman and 20% of Peyma, we respectively achieved 92.15% and 92.41% of the performance of the mentioned supervised learning experiments.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"zaretskaya-2019-optimising","url":"https:\/\/aclanthology.org\/W19-8718","title":"Optimising the Machine Translation Post-editing Workflow","abstract":"Like most large LSPs today, TransPerfect offers a variety of services based on machine translation (MT), including raw MT for casual low-cost translation, and different levels of MT post-editing (MTPE).
The volume of translations performed with MTPE in the company has been growing since 2016 and continues to grow to this date (Figure 1; the numbers on the Y axis have been omitted as commercially sensitive information), which means tens of millions of words post-edited each month. In order to implement MT at such a large scale, the process has to be as easy as possible for the users (Project Managers and translators), with minimal or no additional steps in the workflow.\nIn our case, MT is integrated in our translation management system, which makes it very easy to make the switch from a purely human translation workflow to the post-editing workflow (Figure 2). In this article we will share the methods we used to optimise the workflows when implementing MT, covering both the technical aspects and the processes involved.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"luzzati-etal-2014-human","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2014\/pdf\/771_Paper.pdf","title":"Human annotation of ASR error regions: Is ``gravity'' a sharable concept for human annotators?","abstract":"This paper is concerned with human assessments of the severity of errors in ASR outputs. We did not design any guidelines so that each annotator involved in the study could consider the \"seriousness\" of an ASR error using their own scientific background. Eight human annotators were involved in an annotation task on three distinct corpora, one of the corpora being annotated twice, hiding this annotation in duplicate to the annotators. None of the computed results (inter-annotator agreement, edit distance, majority annotation) allow any strong correlation between the considered criteria and the level of seriousness to be shown, which underlines the difficulty for a human to determine whether an ASR error is serious or not.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was supported by the French National Agency for Research as part of the project VERA (adVanced ERrors Analysis for speech recognition) under grants ANR-2012-BS02-006-04. We thank Dr Paul Del\u00e9glise, Dr Yannick Est\u00e8ve and Dr Olivier Galibert for their help in this work and their useful comments.","year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"das-kannan-2014-discovering","url":"https:\/\/aclanthology.org\/C14-1082","title":"Discovering Topical Aspects in Microblogs","abstract":"We address the problem of discovering topical phrases or \"aspects\" from microblogging sites like Twitter, that correspond to key talking points or buzz around a particular topic or entity of interest. Inferring such topical aspects enables various applications such as trend detection and opinion mining for business analytics. However, mining high-volume microblog streams for aspects poses unique challenges due to the inherent noise, redundancy and ambiguity in users' social posts. We address these challenges by using a probabilistic model that incorporates various global and local indicators such as \"uniqueness\", \"diversity\" and \"burstiness\" of phrases, to infer relevant aspects.
Our model is learned using an EM algorithm that uses automatically generated noisy labels, without requiring manual effort or domain knowledge. We present results on three months of Twitter data across different types of entities to validate our approach.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2014,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} +{"ID":"dekhili-etal-2019-augmenting","url":"https:\/\/aclanthology.org\/W19-3644","title":"Augmenting Named Entity Recognition with Commonsense Knowledge","abstract":null,"label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"kebriaei-etal-2019-emad","url":"https:\/\/aclanthology.org\/S19-2107","title":"Emad at SemEval-2019 Task 6: Offensive Language Identification using Traditional Machine Learning and Deep Learning approaches","abstract":"In this paper, the methods used and the results obtained by our team, entitled Emad, on the OffensEval 2019 shared task organized at SemEval 2019 are presented. The OffensEval shared task includes three sub-tasks, namely Offensive language identification, Automatic categorization of offense types, and Offense target identification. We participated in subtask A and tried various methods, including traditional machine learning methods, deep learning methods, and a combination of the first two sets of methods. We also proposed a data augmentation method using word embeddings to improve the performance of our methods. The results show that the augmentation approach outperforms the other methods in terms of macro-F1.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2019,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} +{"ID":"van-noord-etal-1989-approach","url":"https:\/\/aclanthology.org\/E89-1040","title":"An Approach to Sentence-Level Anaphora in Machine Translation","abstract":"Theoretical research in the area of machine translation usually involves the search for and creation of an appropriate formalism. An important issue in this respect is the way in which the compositionality of translation is to be defined. In this paper, we will introduce the anaphoric component of the MiMo formalism. It makes the definition and translation of anaphoric relations possible, relations which are usually problematic for systems that adhere to strict compositionality. In MiMo, the translation of anaphoric relations is compositional. The anaphoric component is used to define linguistic phenomena such as wh-movement, the passive and the binding of reflexives and pronouns monolingually. The actual working of the component will be shown in this paper by means of a detailed discussion of wh-movement.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work we report here had its beginnings in work within the Eurotra framework. MiMo, however, is not \"the\" official Eurotra system.
It differs in many critical respects from, e.g., Bech & Nygaard (1988). MiMo is the result of the joint effort of Essex, Utrecht and Dominique Petitpierre from ISSCO, Geneve. The research reported in this paper was supported by the European Community, the DTI (Department of Trade and Industry) and the NBBI (Nederlands Bureau voor Bibliotheekwezen en Informatieverzorging).","year":1989,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"oshima-2017-remarks","url":"https:\/\/aclanthology.org\/Y17-1025","title":"Remarks on epistemically biased questions","abstract":"Some varieties of polar interrogatives (polar questions) convey an epistemic bias toward a positive or negative answer. This work takes up three paradigmatic kinds of biased polar interrogatives: (i) positively-biased negative polar interrogatives, (ii) negatively-biased negative polar interrogatives, and (iii) rising tag-interrogatives, and aims to supplement existing descriptions of what they convey besides asking a question. The novel claims are: (i) a positively-biased negative polar interrogative conveys that the speaker assumes that the core proposition is likely to be something that is or should be activated in the hearer's mind, (ii) the bias induced by a negatively-biased negative polar interrogative makes reference to the speaker's assumptions about the hearer's beliefs, and (iii) the biases associated with the three constructions differ in strength, with that of the rising tag-interrogative being the strongest.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"Many thanks to David Beaver, John Beavers, Michael Everdell, Daniel Lassiter, Maribel Romero, Yasutada Sudo, and Stephen Wechsler for helpful comments and discussions. This work was supported by JSPS KAKENHI Grant Number 15K02476.","year":2017,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"poswiata-perelkiewicz-2022-opi","url":"https:\/\/aclanthology.org\/2022.ltedi-1.40","title":"OPI@LT-EDI-ACL2022: Detecting Signs of Depression from Social Media Text using RoBERTa Pre-trained Language Models","abstract":"This paper presents our winning solution for the Shared Task on Detecting Signs of Depression from Social Media Text at LT-EDI-ACL2022. The task was to create a system that, given social media posts in English, should detect the level of depression as 'not depressed', 'moderately depressed' or 'severely depressed'. We based our solution on transformer-based language models. We fine-tuned selected models: BERT, RoBERTa, and XLNet, of which the best results were obtained for RoBERTa large. Then, using the prepared corpus, we trained our own language model called DepRoBERTa (RoBERTa for Depression Detection). Fine-tuning of this model improved the results. The third solution was to use ensemble averaging, which turned out to be the best solution. It achieved a macro-averaged F1-score of 0.583.
The source code of the prepared solution is available at https:\/\/github.com\/rafalposwiata\/depressiondetection-lt-edi-2022.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2022,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"wilks-etal-2010-demonstration","url":"https:\/\/aclanthology.org\/P10-4013","title":"Demonstration of a Prototype for a Conversational Companion for Reminiscing about Images","abstract":"This paper describes an initial prototype demonstrator of a Companion, designed as a platform for novel approaches to the following: 1) The use of Information Extraction (IE) techniques to extract the content of incoming dialogue utterances after an Automatic Speech Recognition (ASR) phase, 2) The conversion of the input to Resource Descriptor Format (RDF) to allow the generation of new facts from existing ones, under the control of a Dialogue Manager (DM), that also has access to stored knowledge and to open knowledge accessed in real time from the web, all in RDF form, 3) A DM implemented as a stack and network virtual machine that models mixed initiative in dialogue control, and 4) A tuned dialogue act detector based on corpus evidence. The prototype platform was evaluated, and we describe this briefly; it is also designed to support more extensive forms of emotion detection carried by both speech and lexical content, as well as extended forms of machine learning.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"This work was funded by the Companions project (2006-2009) sponsored by the European Commission as part of the Information Society Technologies (IST) programme under EC grant number IST-FP6-034434.","year":2010,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"tan-etal-2013-learning","url":"https:\/\/aclanthology.org\/P13-2016","title":"Learning to Order Natural Language Texts","abstract":"Ordering texts is an important task for many NLP applications. Most previous works on summary sentence ordering rely on the contextual information (e.g. adjacent sentences) of each sentence in the source document. In this paper, we investigate a more challenging task of ordering a set of unordered sentences without any contextual information. We introduce a set of features to characterize the order and coherence of natural language texts, and use the learning-to-rank technique to determine the order of any two sentences. We also propose to use a genetic algorithm to determine the total order of all sentences.
Evaluation results on a news corpus show the effectiveness of our proposed method.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The work was supported by NSFC (61170166), Beijing Nova Program (2008B03) and National High-Tech R&D Program (2012AA011101).","year":2013,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"debbarma-etal-2012-morphological","url":"https:\/\/aclanthology.org\/W12-5004","title":"Morphological Analyzer for Kokborok","abstract":"Morphological analysis is concerned with retrieving the syntactic and morphological properties or the meaning of a morphologically complex word. Morphological analysis retrieves the grammatical features and properties of an inflected word. This paper introduces the design and implementation of a Morphological Analyzer for Kokborok, a resource-constrained and less computerized Indian language. A database-driven affix-stripping algorithm has been used to design the Morphological Analyzer. It analyzes Kokborok word forms and produces various kinds of grammatical information associated with the words. The Morphological Analyzer for Kokborok has been tested on 56,732 Kokborok words, achieving an accuracy of 80% on a manual check.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"sasaki-etal-2008-event","url":"https:\/\/aclanthology.org\/C08-1096","title":"Event Frame Extraction Based on a Gene Regulation Corpus","abstract":"This paper describes the supervised acquisition of semantic event frames based on a corpus of biomedical abstracts, in which the biological process of E. coli gene regulation has been linguistically annotated by a group of biologists in the EC research project \"BOOTStrep\". Gene regulation is one of the rapidly advancing areas for which information extraction could boost research. Event frames are an essential linguistic resource for extraction of information from biological literature. This paper presents a specification for linguistic-level annotation of gene regulation events, followed by novel methods of automatic event frame extraction from text. The event frame extraction performance has been evaluated with 10-fold cross-validation. The experimental results show that a precision of nearly 50% and a recall of around 20% are achieved. Since the goal of this paper is event frame extraction, rather than event instance extraction, the issue of low recall could be solved by applying the methods to a larger-scale corpus.
1 Introduction This paper describes the automatic extraction of linguistic event frames based on a corpus of MEDLINE abstracts that has been annotated with gene regulation events by a group of do","label_nlp4sg":1,"task":[],"method":[],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":null,"year":2008,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"elming-habash-2007-combination","url":"https:\/\/aclanthology.org\/N07-2007","title":"Combination of Statistical Word Alignments Based on Multiple Preprocessing Schemes","abstract":"We present an approach to using multiple preprocessing schemes to improve statistical word alignments. We show a relative reduction of alignment error rate of about 38%.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2007,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"fuchs-acriche-2022-product","url":"https:\/\/aclanthology.org\/2022.ecnlp-1.12","title":"Product Titles-to-Attributes As a Text-to-Text Task","abstract":"Online marketplaces use attribute-value pairs, such as brand, size, size type, color, etc., to help define important and relevant facts about a listing. These help buyers to curate their search results using attribute filtering and overall create a richer experience. Despite their critical importance for listings' discoverability, getting sellers to input tens of different attribute-value pairs per listing is costly and often results in missing information. This can later translate to the unnecessary removal of relevant listings from the search results when buyers are filtering by attribute values. In this paper, we demonstrate using a Text-to-Text hierarchical multilabel ranking model framework to predict the most relevant attributes per listing, along with their expected values, using historic user behavioral data. This solution helps sellers by allowing them to focus on verifying information on attributes that are likely to be used by buyers, and thus increase the expected recall for their listings. Specifically for eBay's case, we show that using this model can improve the relevancy of the attribute extraction process by 33.2% compared to the current highly optimized production system. Apart from the empirical contribution, the highly generalized nature of the framework presented in this paper makes it relevant for many high-volume search-driven websites.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2022,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"rosario-hearst-2005-multi","url":"https:\/\/aclanthology.org\/H05-1092","title":"Multi-way Relation Classification: Application to Protein-Protein Interactions","abstract":"We address the problem of multi-way relation classification, applied to identification of the interactions between proteins in bioscience text. A major impediment to such work is the acquisition of appropriately labeled training data; for our experiments we have identified a database that serves as a proxy for training data.
We use two graphical models and a neural net for the classification of the interactions, achieving an accuracy of 64% for a 10-way distinction between relation types. We also provide evidence that the exploitation of the sentences surrounding a citation to a paper can yield higher accuracy than the use of other sentences.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Good Health and Well-Being","goal2":null,"goal3":null,"acknowledgments":"We thank Janice Hamer for her help in labeling examples and other biological insights. This research was supported by a grant from NSF DBI-0317510 and a gift from Genentech.","year":2005,"sdg1":0,"sdg2":0,"sdg3":1,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"hu-etal-2021-one","url":"https:\/\/aclanthology.org\/2021.eacl-main.296","title":"One-class Text Classification with Multi-modal Deep Support Vector Data Description","abstract":"This work presents multi-modal deep SVDD (mSVDD) for one-class text classification. By extending the uni-modal SVDD to a multi-modal one, we build mSVDD with multiple hyperspheres that enable us to build a much better description of target one-class data. Additionally, the end-to-end architecture of mSVDD can jointly handle neural feature learning and one-class text learning. We also introduce a mechanism for incorporating negative supervision in the absence of real negative data, which can be beneficial to the mSVDD model. We conduct experiments on the Reuters and 20 Newsgroups datasets, and the experimental results demonstrate that mSVDD outperforms uni-modal SVDD and that mSVDD can achieve further improvements when negative supervision is incorporated.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":"The authors would like to gratefully acknowledge the anonymous reviewers for their helpful comments and suggestions. Chenlong Hu acknowledges the support of the China Scholarship Council (CSC).","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"hardalov-etal-2021-cross","url":"https:\/\/aclanthology.org\/2021.emnlp-main.710","title":"Cross-Domain Label-Adaptive Stance Detection","abstract":"Stance detection concerns the classification of a writer's viewpoint towards a target. There are different task variants, e.g., stance of a tweet vs. a full article, or stance with respect to a claim vs. an (implicit) topic. Moreover, task definitions vary, which includes the label inventory, the data collection, and the annotation protocol. All these aspects hinder cross-domain studies, as they require changes to standard domain adaptation approaches. In this paper, we perform an in-depth analysis of 16 stance detection datasets, and we explore the possibility for cross-domain learning from them. Moreover, we propose an end-to-end unsupervised framework for out-of-domain prediction of unseen, user-defined labels. In particular, we combine domain adaptation techniques such as mixture of experts and domain-adversarial training with label embeddings, and we demonstrate sizable performance gains over strong baselines, both (i) in-domain, i.e., for seen targets, and (ii) out-of-domain, i.e., for unseen targets.
Finally, we perform an exhaustive analysis of the cross-domain results, and we highlight the important factors influencing the model performance.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":"We thank the anonymous reviewers for their helpful questions and comments, which have helped us improve the quality of the paper. We would also like to thank Guillaume Bouchard for the useful feedback. Finally, we thank the authors of the stance datasets for open-sourcing and providing us with their data. poledb We used the domains Healthcare, Guns, Gay Rights and God for training, Abortion for development, and Creation for testing. rumor We used the airfrance rumour for our test set, and we split the remaining data in a 9:1 ratio for training and development, respectively. wtwt We used the DIS_FOXA operation for testing, AET_HUM for development, and the rest for training. To standardize the targets, we rewrote them as sentences, i.e., company X acquires company Y. scd We used a split with Marijuana for development, Obama for testing, and the rest for training. semeval2016t6 We split it to increase the size of the development set. snopes We adjusted the splits for compatibility with the stance setup. We further extracted and converted the rumours and their evidence into target-context pairs.","year":2021,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} +{"ID":"cvrcek-etal-2012-legal","url":"http:\/\/www.lrec-conf.org\/proceedings\/lrec2012\/pdf\/775_Paper.pdf","title":"Legal electronic dictionary for Czech","abstract":"In this paper, the results of the Czech Legal Electronic dictionary (PES) project are presented. During the 4-year project, a large legal terminological dictionary of Czech was created in the form of an electronic lexical database enriched with a hierarchical ontology of legal terms. It contains approx. 10,000 entries (legal terms) together with their ontological relations and hypertext references. In the second part of the project, a web interface based on the DEBII platform was designed and implemented that allows users to browse and search the database effectively. At the same time, the Czech Dictionary of Legal Terms will be generated from the database and later printed as a book. Inter-annotator agreement in the manual selection of legal terms was high, at approx. 95%.","label_nlp4sg":1,"task":[],"method":[],"goal1":"Peace, Justice and Strong Institutions","goal2":null,"goal3":null,"acknowledgments":null,"year":2012,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":1,"sdg17":0} +{"ID":"miller-1999-lexical","url":"https:\/\/aclanthology.org\/P99-1003","title":"The Lexical Component of Natural Language Processing","abstract":"Computational linguistics is generally considered to be the branch of engineering that uses computers to do useful things with linguistic signals, but it can also be viewed as an extended test of computational theories of human cognition; it is this latter perspective that psychologists find most interesting. Language provides a critical test for the hypothesis that physical symbol systems are adequate to perform all human cognitive functions. As yet, no adequate system for natural language processing has approached human levels of performance.
Of the various problems that natural language processing has revealed, polysemy is probably the most frustrating. People deal with polysemy so easily that potential ambiguities are overlooked, whereas computers must work hard to do far less well. A linguistic approach generally involves a parser, a lexicon, and some ad hoc rules for using linguistic context to identify the context-appropriate sense. A statistical approach generally involves the use of word co-occurrence statistics to create a semantic hyperspace where each word, regardless of its polysemy, is represented as a single vector. Each approach has strengths and limitations; some combination is often proposed. Various possibilities will be discussed in terms of their psychological plausibility.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":1999,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0} +{"ID":"battu-etal-2018-predicting","url":"https:\/\/aclanthology.org\/Y18-1007","title":"Predicting the Genre and Rating of a Movie Based on its Synopsis","abstract":"Movies are one of the most prominent means of entertainment. The widespread use of the Internet in recent times has led to large volumes of data related to movies being generated and shared online. People often prefer to express their views online in English as compared to other local languages. This leaves us with very little data in languages apart from English to work on. To overcome this, we created the Multi-Language Movie Review Dataset (MLMRD). The dataset consists of the genre, rating, and synopsis of a movie across multiple languages, namely Hindi, Telugu, Tamil, Malayalam, Korean, French, and Japanese. The genre of a movie can be identified by its synopsis. Though the rating of a movie may depend on multiple factors such as the performance of the actors, the screenplay, and the direction, in most cases the synopsis plays a crucial role in the movie rating. In this work, we provide various model architectures that can be used to predict the genre and the rating of a movie, based on its synopsis, across the various languages present in our dataset.","label_nlp4sg":0,"task":[],"method":[],"goal1":null,"goal2":null,"goal3":null,"acknowledgments":null,"year":2018,"sdg1":0,"sdg2":0,"sdg3":0,"sdg4":0,"sdg5":0,"sdg6":0,"sdg7":0,"sdg8":0,"sdg9":0,"sdg10":0,"sdg11":0,"sdg12":0,"sdg13":0,"sdg14":0,"sdg15":0,"sdg16":0,"sdg17":0}