Dataset schema (one row per field; statistics as reported by the dataset viewer):

  Column           Type            Statistics
  ID               stringlengths   11 to 54 characters
  url              stringlengths   33 to 64 characters
  title            stringlengths   11 to 184 characters
  abstract         stringlengths   17 to 3.87k characters
  label_nlp4sg     bool            2 classes
  task             sequence
  method           sequence
  goal1            stringclasses   9 values
  goal2            stringclasses   9 values
  goal3            stringclasses   1 value
  acknowledgments  stringlengths   28 to 1.28k characters
  year             stringlengths   4 to 4 characters
  sdg1             bool            1 class
  sdg2             bool            1 class
  sdg3             bool            2 classes
  sdg4             bool            2 classes
  sdg5             bool            2 classes
  sdg6             bool            1 class
  sdg7             bool            1 class
  sdg8             bool            2 classes
  sdg9             bool            2 classes
  sdg10            bool            2 classes
  sdg11            bool            2 classes
  sdg12            bool            1 class
  sdg13            bool            2 classes
  sdg14            bool            1 class
  sdg15            bool            1 class
  sdg16            bool            2 classes
  sdg17            bool            2 classes
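The records that follow list their field values in the column order given above (ID, url, title, abstract, label_nlp4sg, task, method, goal1, goal2, goal3, acknowledgments, year, sdg1 through sdg17). As a quick orientation, here is a minimal sketch of how a dump with this schema could be loaded and filtered using the Hugging Face datasets library. The file name nlp4sg_papers.jsonl is a hypothetical placeholder, not something named in this dump; the field names are taken from the schema above.

```python
# Minimal sketch, assuming the records have been exported to a local JSON Lines
# file named "nlp4sg_papers.jsonl" (hypothetical name) with the fields listed
# in the schema above.
from datasets import load_dataset

ds = load_dataset("json", data_files="nlp4sg_papers.jsonl", split="train")

# Keep only papers flagged as NLP-for-social-good.
sg_papers = ds.filter(lambda example: example["label_nlp4sg"])

# The 17 boolean columns mark UN Sustainable Development Goals.
SDG_COLUMNS = [f"sdg{i}" for i in range(1, 18)]

def active_sdgs(example):
    """Return the names of the SDG columns set to True for one record."""
    return [col for col in SDG_COLUMNS if example[col]]

for paper in sg_papers:
    print(paper["ID"], paper["year"], paper["goal1"], active_sdgs(paper))
```

On the excerpt shown here, the loop would report, for example, prudhommeaux-etal-2017-vector with sdg3 set and levy-etal-2014-ontology with both sdg3 and sdg9 set, matching their goal1/goal2 labels.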
wang-etal-2021-enhanced
https://aclanthology.org/2021.iwpt-1.20
Enhanced Universal Dependency Parsing with Automated Concatenation of Embeddings
This paper describes the system used in the SHANGHAITECH team's submission to the IWPT 2021 Shared Task. Our system is a graph-based parser with the technique of Automated Concatenation of Embeddings (ACE). Because recent work found that better word representations can be obtained by concatenating different types of embeddings, we use ACE to automatically find a better concatenation of embeddings for the task of enhanced universal dependencies. According to official results averaged over 17 languages, our system ranks 2nd among 9 teams.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
agirre-etal-2009-use
https://aclanthology.org/2009.eamt-1.9
Use of Rich Linguistic Information to Translate Prepositions and Grammar Cases to Basque
This paper presents three successful techniques to translate prepositions heading verbal complements by means of rich linguistic information, in the context of a rule-based Machine Translation system for an agglutinative language with scarce resources. This information comes in the form of lexicalized syntactic dependency triples, verb subcategorization and manually coded selection rules based on lexical, syntactic and semantic information. The first two resources have been automatically extracted from monolingual corpora. The results obtained using a new evaluation methodology show that all proposed techniques improve precision over the baselines, including a translation dictionary compiled from an aligned corpus, and a state-of-the-art statistical Machine Translation system. The results also show that the linguistic information in all three techniques is complementary, and that a combination of them obtains the best F-score results overall.
false
[]
[]
null
null
null
This research was supported in part by the Spanish Ministry of Education and Science (OpenMT: Open Source Machine Translation using hybrid methods, TIN2006-15307-C03-01; RICOTERM-3, HUM2007-65966.CO2-02) and the Regional Branch of the Basque Government (AnHITZ 2006: Language Technologies for Multilingual Interaction in Intelligent Environments, IE06-185). Gorka Labaka is supported by a PhD grant from the Basque Government (grant code, BFI05.326). Consumer corpus has been kindly supplied by Asier Alcázar from the University of Missouri-Columbia and by Eroski Fundazioa.
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lehtola-etal-1985-language
https://aclanthology.org/E85-1015.pdf
Language-Based Environment for Natural Language Parsing
(Abstract not recoverable from the source PDF: the extracted text contains only figure residue, namely the labels "The left constituent stack" and "The right constituent stack" and the caption sentence "The syntax of these declarations can be seen in Figure 3.")
false
[]
[]
null
null
null
null
1985
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
frej-etal-2020-wikir
https://aclanthology.org/2020.lrec-1.237.pdf
WIKIR: A Python Toolkit for Building a Large-scale Wikipedia-based English Information Retrieval Dataset
Over the past years, deep learning methods allowed for new state-of-the-art results in ad-hoc information retrieval. However such methods usually require large amounts of annotated data to be effective. Since most standard ad-hoc information retrieval datasets publicly available for academic research (e.g. Robust04, ClueWeb09) have at most 250 annotated queries, the recent deep learning models for information retrieval perform poorly on these datasets. These models (e.g. DUET, Conv-KNRM) are trained and evaluated on data collected from commercial search engines not publicly available for academic research which is a problem for reproducibility and the advancement of research. In this paper, we propose WIKIR: an open-source toolkit to automatically build large-scale English information retrieval datasets based on Wikipedia. WIKIR is publicly available on GitHub. We also provide wikIR78k and wikIRS78k: two large-scale publicly available datasets that both contain 78,628 queries and 3,060,191 (query, relevant documents) pairs.
false
[]
[]
null
null
null
The authors would like to thank Maximin Coavoux, Emmanuelle Esperança-Rodier, Lorraine Goeuriot, William N. Havard, Quentin Legros, Fabien Ringeval, and Loïc Vial for their thoughtful comments and efforts towards improving our manuscript.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
alkhairy-etal-2020-finite
https://aclanthology.org/2020.lrec-1.473.pdf
Finite State Machine Pattern-Root Arabic Morphological Generator, Analyzer and Diacritizer
We describe and evaluate the Finite-State Arabic Morphologizer (FSAM), a concatenative (prefix-stem-suffix) and templatic (root-pattern) morphologizer that generates and analyzes undiacritized Modern Standard Arabic (MSA) words, and diacritizes them. Our bidirectional unified-architecture finite state machine (FSM) is based on morphotactic MSA grammatical rules. The FSM models the root-pattern structure related to semantics and syntax, making it readily scalable unlike stem-tabulations in prevailing systems. We evaluate the coverage and accuracy of our model, with coverage being percentage of words in Tashkeela (a large corpus) that can be analyzed. Accuracy is computed against a gold standard, comprising words and properties, created from the intersection of UD PADT treebank and Tashkeela. Coverage of analysis (extraction of root and properties from word) is 82%. Accuracy results are: root computed from a word (92%), word generation from a root (100%), non-root properties of a word (97%), and diacritization (84%). FSAM's non-root results match or surpass MADAMIRA's, and root result comparisons are not made because of the concatenative nature of publicly available morphologizers.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
prudhommeaux-etal-2017-vector
https://aclanthology.org/P17-2006.pdf
Vector space models for evaluating semantic fluency in autism
A common test administered during neurological examination is the semantic fluency test, in which the patient must list as many examples of a given semantic category as possible under timed conditions. Poor performance is associated with neurological conditions characterized by impairments in executive function, such as dementia, schizophrenia, and autism spectrum disorder (ASD). Methods for analyzing semantic fluency responses at the level of detail necessary to uncover these differences have typically relied on subjective manual annotation. In this paper, we explore automated approaches for scoring semantic fluency responses that leverage ontological resources and distributional semantic models to characterize the semantic fluency responses produced by young children with and without ASD. Using these methods, we find significant differences in the semantic fluency responses of children with ASD, demonstrating the utility of using objective methods for clinical language analysis.
true
[]
[]
Good Health and Well-Being
null
null
This work was supported in part by NIH grants R01DC013996, R01DC012033, and R01DC007129. Any opinions, findings, conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the NIH.
2017
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
shwartz-etal-2015-learning
https://aclanthology.org/K15-1018.pdf
Learning to Exploit Structured Resources for Lexical Inference
Massive knowledge resources, such as Wikidata, can provide valuable information for lexical inference, especially for proper-names. Prior resource-based approaches typically select the subset of each resource's relations which are relevant for a particular given task. The selection process is done manually, limiting these approaches to smaller resources such as WordNet, which lacks coverage of proper-names and recent terminology. This paper presents a supervised framework for automatically selecting an optimized subset of resource relations for a given target inference task. Our approach enables the use of large-scale knowledge resources, thus providing a rich source of high-precision inferences over proper-names.
false
[]
[]
null
null
null
This work was supported by an Intel ICRI-CI grant, the Google Research Award Program and the German Research Foundation via the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1).
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hokamp-etal-2019-evaluating
https://aclanthology.org/W19-5319.pdf
Evaluating the Supervised and Zero-shot Performance of Multi-lingual Translation Models
We study several methods for full or partial sharing of the decoder parameters of multilingual NMT models. Using only the WMT 2019 shared task parallel datasets for training, we evaluate both fully supervised and zero-shot translation performance in 110 unique translation directions. We use additional test sets and re-purpose evaluation methods recently used for unsupervised MT in order to evaluate zero-shot translation performance for language pairs where no gold-standard parallel data is available. To our knowledge, this is the largest evaluation of multilingual translation yet conducted in terms of the total size of the training data we use, and in terms of the number of zero-shot translation pairs we evaluate. We conduct an in-depth evaluation of the translation performance of different models, highlighting the trade-offs between methods of sharing decoder parameters. We find that models which have task-specific decoder parameters outperform models where decoder parameters are fully shared across all tasks.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
reed-etal-2008-linguistic
http://www.lrec-conf.org/proceedings/lrec2008/pdf/755_paper.pdf
The Linguistic Data Consortium Member Survey: Purpose, Execution and Results
The Linguistic Data Consortium (LDC) seeks to provide its members with quality linguistic resources and services. In order to pursue these ideals and to remain current, LDC monitors the needs and sentiments of its communities. One mechanism LDC uses to generate feedback on consortium and resource issues is the LDC Member Survey. The survey allows LDC Members and nonmembers to provide LDC with valuable insight into their own unique circumstances, their current and future data needs and their views on LDC's role in meeting them. When the 2006 Survey was found to be a useful tool for communicating with the Consortium membership, a 2007 Survey was organized and administered. As a result of the surveys, LDC has confirmed that it has made a positive impact on the community and has identified ways to improve the quality of service and the diversity of monthly offerings. Many respondents recommended ways to improve LDC's functions, ordering mechanism and webpage. Some of these comments have inspired changes to LDC's operation and strategy.
false
[]
[]
null
null
null
null
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
headden-iii-etal-2006-learning
https://aclanthology.org/W06-1636.pdf
Learning Phrasal Categories
In this work we learn clusters of contextual annotations for non-terminals in the Penn Treebank. Perhaps the best way to think about this problem is to contrast our work with that of Klein and Manning (2003). That research used tree transformations to create various grammars with different contextual annotations on the non-terminals. These grammars were then used in conjunction with a CKY parser. The authors explored the space of different annotation combinations by hand. Here we try to automate the process: to learn the "right" combination automatically. Our results are not quite as good as those carefully created by hand, but they are close (84.8 vs 85.7).
false
[]
[]
null
null
null
The research presented here was funded in part by DARPA GALE contract HR 0011-06-20001.
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ahmed-butt-2011-discovering
https://aclanthology.org/W11-0132.pdf
Discovering Semantic Classes for Urdu N-V Complex Predicates
This paper reports on an exploratory investigation as to whether classes of Urdu N-V complex predicates can be identified on the basis of syntactic patterns and lexical choices associated with the N-V complex predicates. Working with data from a POS annotated corpus, we show that choices with respect to the number of arguments, case marking on subjects and which light verbs are felicitous with which nouns depend heavily on the semantics of the noun in the N-V complex predicate. This initial work represents an important step towards identifying semantic criteria relevant for complex predicate formation. Identifying the semantic criteria and being able to systematically code them in turn represents a first step towards building up a lexical resource for nouns as part of developing natural language processing tools for the under-resourced South Asian language Urdu.
false
[]
[]
null
null
null
null
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
saint-dizier-2008-challenges
https://aclanthology.org/Y08-1006.pdf
Some Challenges of Advanced Question-Answering: an Experiment with How-to Questions
This paper is a contribution to text semantics processing and its application to advanced question-answering where a significant portion of a well-formed text is required as a response. We focus on procedural texts of various domains, and show how titles, instructions, instructional compounds and arguments can be extracted.
false
[]
[]
null
null
null
null
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yang-etal-2019-convolutional
https://aclanthology.org/N19-1407.pdf
Convolutional Self-Attention Networks
Self-attention networks (SANs) have drawn increasing interest due to their high parallelization in computation and flexibility in modeling dependencies. SANs can be further enhanced with multi-head attention by allowing the model to attend to information from different representation subspaces. In this work, we propose novel convolutional self-attention networks, which offer SANs the abilities to 1) strengthen dependencies among neighboring elements, and 2) model the interaction between features extracted by multiple attention heads. Experimental results of machine translation on different language pairs and model settings show that our approach outperforms both the strong Transformer baseline and other existing models on enhancing the locality of SANs. Compared with prior studies, the proposed model is parameter-free in that it introduces no additional parameters.
false
[]
[]
null
null
null
The work was partly supported by the National Natural Science Foundation of China (Grant No. 61672555), the Joint Project of Macao Science and Technology Development Fund and National Natural Science Foundation of China (Grant No. 045/2017/AFJ) and the Multiyear Research Grant from the University of Macau (Grant No. MYRG2017-00087-FST). We thank the anonymous reviewers for their insightful comments.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kim-lee-2003-clause
https://aclanthology.org/U03-1005.pdf
S-clause segmentation for efficient syntactic analysis using decision trees
In dependency parsing of long sentences with fewer subjects than predicates, it is difficult to recognize which predicate governs which subject. To handle such syntactic ambiguity between subjects and predicates, this paper proposes an "S-clause" segmentation method, where an S(ubject)-clause is defined as a group of words containing several predicates and their common subject. We propose an automatic S-clause segmentation method using decision trees. The S-clause information was shown to be very effective in analyzing long sentences, with an improved performance of 5 percent.
false
[]
[]
null
null
null
This work was supported by the Korea Science and Engineering Foundation (KOSEF) through the Advanced Information Technology Research Center(AITrc) and by the Brain Korea 21 Project in 2003.
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
coster-kauchak-2011-simple
https://aclanthology.org/P11-2117.pdf
Simple English Wikipedia: A New Text Simplification Task
In this paper we examine the task of sentence simplification which aims to reduce the reading complexity of a sentence by incorporating more accessible vocabulary and sentence structure. We introduce a new data set that pairs English Wikipedia with Simple English Wikipedia and is orders of magnitude larger than any previously examined for sentence simplification. The data contains the full range of simplification operations including rewording, reordering, insertion and deletion. We provide an analysis of this corpus as well as preliminary results using a phrase-based translation approach for simplification.
false
[]
[]
null
null
null
null
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
demollin-etal-2020-argumentation
https://aclanthology.org/2020.nl4xai-1.10.pdf
Argumentation Theoretical Frameworks for Explainable Artificial Intelligence
This paper discusses four major argumentation theoretical frameworks with respect to their use in support of explainable artificial intelligence (XAI). We consider these frameworks as useful tools for both system-centred and user-centred XAI. The former is concerned with the generation of explanations for decisions taken by AI systems, while the latter is concerned with the way explanations are given to users and received by them.
false
[]
[]
null
null
null
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 860621.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
biesialska-etal-2020-enhancing
https://aclanthology.org/2020.acl-srw.36.pdf
Enhancing Word Embeddings with Knowledge Extracted from Lexical Resources
In this work, we present an effective method for semantic specialization of word vector representations. To this end, we use traditional word embeddings and apply specialization methods to better capture semantic relations between words. In our approach, we leverage external knowledge from rich lexical resources such as BabelNet. We also show that our proposed post-specialization method based on an adversarial neural network with the Wasserstein distance allows us to gain improvements over state-of-the-art methods on two tasks: word similarity and dialog state tracking.
false
[]
[]
null
null
null
We thank the anonymous reviewers for their insightful comments. This work is supported in part by the Spanish Ministerio de Economía y Competitividad, the European Regional Development Fund through the postdoctoral senior grant Ramón y Cajal and by the Agencia Estatal de Investigación through the projects EUR2019-103819 and PCIN-2017-079.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mckeown-paris-1987-functional
https://aclanthology.org/P87-1014.pdf
Functional Unification Grammar Revisited
In this paper, we show that one benefit of FUG, the ability to state global constraints on choice separately from syntactic rules, is difficult in generation systems based on augmented context free grammars (e.g., Definite Clause Grammars). They require that such constraints be expressed locally as part of syntactic rules and therefore be duplicated in the grammar. Finally, we discuss a reimplementation of FUG that achieves similar levels of efficiency to Rubinoff's adaptation of MUMBLE, a deterministic language generator.
false
[]
[]
null
null
null
The research reported in this paper was partially supported by DARPA grant N00039-84-C-0165, by ONR grant N00014-82-K-0256 and by NSF grant IST-84-51438. We would like to thank Bill Mann for making a portion of NIGEL's grammar available to us for comparisons.
1987
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
orav-etal-2018-estonian
https://aclanthology.org/2018.gwc-1.42.pdf
Estonian Wordnet: Current State and Future Prospects
This paper presents Estonian Wordnet (EstWN) with its latest developments. We focus on the time period 2011-2017 because during this time the EstWN project was supported by the National Programme for Estonian Language Technology (NPELT). We describe what the goals were at the beginning of 2011 and what the accomplishments are today. This paper serves as a summarizing report about the progress of EstWN during this programme. While building EstWN we have concentrated on the fact that EstWN, as a valuable Estonian resource, should also be compatible with a common multilingual framework.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhang-denero-2014-observational
https://aclanthology.org/P14-2132.pdf
Observational Initialization of Type-Supervised Taggers
Recent work has sparked new interest in type-supervised part-of-speech tagging, a data setting in which no labeled sentences are available, but the set of allowed tags is known for each word type. This paper describes observational initialization, a novel technique for initializing EM when training a type-supervised HMM tagger. Our initializer allocates probability mass to unambiguous transitions in an unlabeled corpus, generating token-level observations from type-level supervision. Experimentally, observational initialization gives state-of-the-art type-supervised tagging accuracy, providing an error reduction of 56% over uniform initialization on the Penn English Treebank. * Research conducted during an internship at Google.
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
liu-etal-2012-expected
https://aclanthology.org/C12-2071.pdf
Expected Error Minimization with Ultraconservative Update for SMT
Minimum error rate training is a popular method for parameter tuning in statistical machine translation (SMT). However, the optimization objective function may change drastically at each optimization step, which may induce MERT instability. We propose an alternative tuning method based on an ultraconservative update, in which the combination of an expected task loss and the distance from the parameters in the previous round are minimized with a variant of gradient descent. Experiments on test datasets of both Chinese-to-English and Spanish-to-English translation show that our method can achieve improvements over MERT under the Moses system.
false
[]
[]
null
null
null
We would like to thank Muyun Yang and Hongfei Jiang for many valuable discussions and thank three anonymous reviewers for many valuable comments and helpful suggestions. This work was supported by the National Natural Science Foundation of China (61173073, 61100093, 61073130, 61272384), the Key Project of the National High Technology Research and Development Program of China (2011AA01A207), and the Fundamental Research Funds for the Central Universities (HIT.NSRIF.2013065).
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
guggilla-etal-2016-cnn
https://aclanthology.org/C16-1258.pdf
CNN- and LSTM-based Claim Classification in Online User Comments
When processing arguments in online user interactive discourse, it is often necessary to determine their bases of support. In this paper, we describe a supervised approach, based on deep neural networks, for classifying the claims made in online arguments. We conduct experiments using convolutional neural networks (CNNs) and long short-term memory networks (LSTMs) on two claim data sets compiled from online user comments. Using different types of distributional word embeddings, but without incorporating any rich, expensive set of features, we achieve a significant improvement over the state of the art for one data set (which categorizes arguments as factual vs. emotional), and performance comparable to the state of the art on the other data set (which categorizes propositions according to their verifiability). Our approach has the advantages of using a generalized, simple, and effective methodology that works for claim categorization on different data sets and tasks.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
This work was funded through the research training group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES, GRK 1994/1) and through the German Research Foundation (DFG).
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
meng-etal-2021-mixture
https://aclanthology.org/2021.emnlp-main.383.pdf
Mixture-of-Partitions: Infusing Large Biomedical Knowledge Graphs into BERT
Infusing factual knowledge into pretrained models is fundamental for many knowledge-intensive tasks. In this paper, we propose Mixture-of-Partitions (MoP), an infusion approach that can handle a very large knowledge graph (KG) by partitioning it into smaller subgraphs and infusing their specific knowledge into various BERT models using lightweight adapters. To leverage the overall factual knowledge for a target task, these sub-graph adapters are further fine-tuned along with the underlying BERT through a mixture layer. We evaluate our MoP with three biomedical BERTs (SciBERT, BioBERT, PubmedBERT) on six downstream tasks (inc. NLI, QA, Classification), and the results show that our MoP consistently enhances the underlying BERTs in task performance, and achieves new SOTA performances on five evaluated datasets.
true
[]
[]
Good Health and Well-Being
null
null
Nigel Collier and Zaiqiao Meng kindly acknowledge grant-in-aid funding from ESRC (grant number ES/T012277/1).
2021
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
du-ji-2019-empirical
https://aclanthology.org/D19-1619.pdf
An Empirical Comparison on Imitation Learning and Reinforcement Learning for Paraphrase Generation
Generating paraphrases from given sentences involves decoding words step by step from a large vocabulary. To learn a decoder, supervised learning which maximizes the likelihood of tokens always suffers from the exposure bias. Although both reinforcement learning (RL) and imitation learning (IL) have been widely used to alleviate the bias, the lack of direct comparison leads to only a partial image of their benefits. In this work, we present an empirical study on how RL and IL can help boost the performance of generating paraphrases, with the pointer-generator as a base model. Experiments on the benchmark datasets show that (1) imitation learning is consistently better than reinforcement learning; and (2) the pointer-generator models with imitation learning outperform the state-of-the-art methods by a large margin.
false
[]
[]
null
null
null
The authors thank three anonymous reviewers for their useful comments and the UVa NLP group for helpful discussion. This research was supported in part by a gift from Tencent AI Lab Rhino-Bird Gift Fund.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
le-hong-etal-2009-finite
https://aclanthology.org/W09-3409.pdf
Finite-State Description of Vietnamese Reduplication
We present for the first time a computational model for the reduplication of the Vietnamese language. Reduplication is a popular phenomenon of Vietnamese in which reduplicative words are created by the combination of multiple syllables whose phonics are similar. We first give a systematical study of Vietnamese reduplicative words, bringing into focus clear principles for the formation of a large class of bi-syllabic reduplicative words. We then make use of optimal finite-state devices, in particular minimal sequential string-to string transducers to build a computational model for very efficient recognition and production of those words. Finally, several nice applications of this computational model are discussed.
false
[]
[]
null
null
null
We gratefully acknowledge helpful comments and valuable suggestions from three anonymous reviewers for improving the paper.
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jacobs-etal-1991-lexico
https://aclanthology.org/H91-1066.pdf
Lexico-Semantic Pattern Matching as a Companion to Parsing in Text Understanding
Ordinarily, one thinks of the problem of natural language understanding as one of making a single, left-to-right pass through an input, producing a progressively refined and detailed interpretation. In text interpretation, however, the constraints of strict left-to-right processing are an encumbrance. Multi-pass methods, especially by interpreting words using corpus data and associating units of text with possible interpretations, can be more accurate and faster than single-pass methods of data extraction. Quality improves because corpus-based data and global context help to control false interpretations; speed improves because processing focuses on relevant sections. The most useful forms of pre-processing for text interpretation use fairly superficial analysis that complements the style of ordinary parsing but uses much of the same knowledge base. Lexico-semantic pattern matching, with rules that combine lexical analysis with ordering and semantic categories, is a good method for this form of analysis. This type of pre-processing is efficient, takes advantage of corpus data, prevents many garden paths and fruitless parses, and helps the parser cope with the complexity and flexibility of real text.
false
[]
[]
null
null
null
null
1991
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sarkar-haffari-2006-tutorial
https://aclanthology.org/N06-5005.pdf
Tutorial on Inductive Semi-supervised Learning Methods: with Applicability to Natural Language Processing
Supervised machine learning methods which learn from labelled (or annotated) data are now widely used in many different areas of Computational Linguistics and Natural Language Processing. There are widespread data annotation endeavours but they face problems: there are a large number of languages and annotation is expensive, while at the same time raw text data is plentiful. Semi-supervised learning methods aim to close this gap. The last 6-7 years have seen a surge of interest in semi-supervised methods in the machine learning and NLP communities focused on the one hand on analysing the situations in which unlabelled data can be useful, and on the other hand, providing feasible learning algorithms. This recent research has resulted in a wide variety of interesting methods which are different with respect to the assumptions they make about the learning task. In this tutorial, we survey recent semi-supervised learning methods, discuss assumptions behind various approaches, and show how some of these methods have been applied to NLP tasks.
false
[]
[]
null
null
null
null
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chang-etal-2015-ct
https://aclanthology.org/W15-3125.pdf
CT-SPA: Text sentiment polarity prediction model using semi-automatically expanded sentiment lexicon
In this study, an automatic classification method based on the sentiment polarity of text is proposed. This method uses two sentiment dictionaries from different sources: the Chinese sentiment dictionary CSWN that integrates Chinese WordNet with SentiWordNet, and the sentiment dictionary obtained from a training corpus labeled with sentiment polarities. In this study, the sentiment polarity of text is analyzed using these two dictionaries, a mixed-rule approach, and a statistics-based prediction model. The proposed method is used to analyze a test corpus provided by the Topic-Based Chinese Message Polarity Classification task of SIGHAN-8, and the F1-measure value obtained is 0.62.
false
[]
[]
null
null
null
null
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hammond-2021-data
https://aclanthology.org/2021.sigmorphon-1.14.pdf
Data augmentation for low-resource grapheme-to-phoneme mapping
In this paper we explore a very simple neural approach to mapping orthography to phonetic transcription in a low-resource context. The basic idea is to start from a baseline system and focus all efforts on data augmentation. We will see that some techniques work, but others do not.
false
[]
[]
null
null
null
Thanks to Diane Ohala for useful discussion. Thanks to several anonymous reviewers for very helpful feedback. All errors are my own.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
power-scott-2005-automatic
https://aclanthology.org/I05-5010.pdf
Automatic generation of large-scale paraphrases
Research on paraphrase has mostly focussed on lexical or syntactic variation within individual sentences. Our concern is with larger-scale paraphrases, from multiple sentences or paragraphs to entire documents. In this paper we address the problem of generating paraphrases of large chunks of texts. We ground our discussion through a worked example of extending an existing NLG system to accept as input a source text, and to generate a range of fluent semantically-equivalent alternatives, varying not only at the lexical and syntactic levels, but also in document structure and layout.
false
[]
[]
null
null
null
null
2005
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
britz-etal-2017-efficient
https://aclanthology.org/D17-1040.pdf
Efficient Attention using a Fixed-Size Memory Representation
The standard content-based attention mechanism typically used in sequence-to-sequence models is computationally expensive as it requires the comparison of large encoder and decoder states at each time step. In this work, we propose an alternative attention mechanism based on a fixed size memory representation that is more efficient. Our technique predicts a compact set of K attention contexts during encoding and lets the decoder compute an efficient lookup that does not need to consult the memory. We show that our approach performs on-par with the standard attention mechanism while yielding inference speedups of 20% for real-world translation tasks and more for tasks with longer sequences. By visualizing attention scores we demonstrate that our models learn distinct, meaningful alignments.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nguyen-etal-2016-empirical
https://aclanthology.org/U16-1017.pdf
An empirical study for Vietnamese dependency parsing
This paper presents an empirical comparison of different dependency parsers for Vietnamese, which has some unusual characteristics such as copula drop and verb serialization. Experimental results show that the neural network-based parsers perform significantly better than the traditional parsers. We report the highest parsing scores published to date for Vietnamese with the labeled attachment score (LAS) at 73.53% and the unlabeled attachment score (UAS) at 80.66%.
false
[]
[]
null
null
null
The first author is supported by an International Postgraduate Research Scholarship and a NICTA NRPA Top-Up Scholarship.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lin-2004-rouge
https://aclanthology.org/W04-1013.pdf
ROUGE: A Package for Automatic Evaluation of Summaries
ROUGE stands for Recall-Oriented Understudy for Gisting Evaluation. It includes measures to automatically determine the quality of a summary by comparing it to other (ideal) summaries created by humans. The measures count the number of overlapping units such as n-gram, word sequences, and word pairs between the computer-generated summary to be evaluated and the ideal summaries created by humans. This paper introduces four different ROUGE measures: ROUGE-N, ROUGE-L, ROUGE-W, and ROUGE-S included in the ROUGE summarization evaluation package and their evaluations. Three of them have been used in the Document Understanding Conference (DUC) 2004, a large-scale summarization evaluation sponsored by NIST.
false
[]
[]
null
null
null
The author would like to thank the anonymous reviewers for their constructive comments, Paul Over at NIST, U.S.A, and ROUGE users around the world for testing and providing useful feedback on earlier versions of the ROUGE evaluation package, and the DARPA TIDES project for supporting this research.
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
felice-briscoe-2015-towards
https://aclanthology.org/N15-1060.pdf
Towards a standard evaluation method for grammatical error detection and correction
We present a novel evaluation method for grammatical error correction that addresses problems with previous approaches and scores systems in terms of improvement on the original text. Our method evaluates corrections at the token level using a globally optimal alignment between the source, a system hypothesis, and a reference. Unlike the M2 Scorer, our method provides scores for both detection and correction and is sensitive to different types of edit operations.
false
[]
[]
null
null
null
We would like to thank Øistein Andersen and Zheng Yuan for their constructive feedback, as well as the anonymous reviewers for their comments and suggestions. We are also grateful to Cambridge English Language Assessment for supporting this research via the ALTA Institute.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
klimek-etal-2016-creating
https://aclanthology.org/L16-1143.pdf
Creating Linked Data Morphological Language Resources with MMoOn - The Hebrew Morpheme Inventory
The development of standard models for describing general lexical resources has led to the emergence of numerous lexical datasets of various languages in the Semantic Web. However, there are no models that describe the domain of morphology in a similar manner. As a result, there are hardly any language resources of morphemic data available in RDF to date. This paper presents the creation of the Hebrew Morpheme Inventory from a manually compiled tabular dataset comprising around 52,000 entries. It is an ongoing effort of representing the lexemes, word-forms and morphological patterns together with their underlying relations based on the newly created Multilingual Morpheme Ontology (MMoOn). It will be shown how segmented Hebrew language data can be granularly described in a Linked Data format, thus serving as an exemplary case for creating morpheme inventories of any inflectional language with MMoOn. The resulting dataset is described a) according to the structure of the underlying data format, b) with respect to the Hebrew language characteristic of building word-forms directly from roots, c) by exemplifying how inflectional information is realized and d) with regard to its enrichment with external links to sense resources.
false
[]
[]
null
null
null
This paper's research activities were partly supported and funded by grants from the FREME FP7 European project
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lu-roth-2012-automatic
https://aclanthology.org/P12-1088.pdf
Automatic Event Extraction with Structured Preference Modeling
This paper presents a novel sequence labeling model based on the latent-variable semi-Markov conditional random fields for jointly extracting argument roles of events from texts. The model takes in coarse mention and type information and predicts argument roles for a given event template. This paper addresses the event extraction problem in a primarily unsupervised setting, where no labeled training instances are available. Our key contribution is a novel learning framework called structured preference modeling (PM), that allows arbitrary preference to be assigned to certain structures during the learning procedure. We establish and discuss connections between this framework and other existing works. We show empirically that the structured preferences are crucial to the success of our task. Our model, trained without annotated data and with a small number of structured preferences, yields performance competitive to some baseline supervised approaches.
false
[]
[]
null
null
null
We would like to thank Yee Seng Chan, Mark Sammons, and Quang Xuan Do for their help with the mention identification and typing system used in this paper. We gratefully acknowledge the support of the Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0181. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of DARPA, AFRL, or the US government.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
rama-wichmann-2018-towards
https://aclanthology.org/C18-1134.pdf
Towards identifying the optimal datasize for lexically-based Bayesian inference of linguistic phylogenies
Bayesian linguistic phylogenies are standardly based on cognate matrices for words referring to a fixed set of meanings, typically around 100-200. To this day there has not been any empirical investigation into which datasize is optimal. Here we determine, across a set of language families, the optimal number of meanings required for the best performance in Bayesian phylogenetic inference. We rank meanings by stability, infer phylogenetic trees using first the most stable meaning, then the two most stable meanings, and so on, computing the quartet distance of the resulting tree to the tree proposed by language family experts at each step of datasize increase. When a gold standard tree is not available we propose to instead compute the quartet distance between the tree based on the n most stable meanings and the one based on the n + 1 most stable meanings, increasing n from 1 to N − 1, where N is the total number of meanings. The assumption here is that the value of n for which the quartet distance begins to stabilize is also the value at which the quality of the tree ceases to improve. We show that this assumption is borne out. The results of the two methods vary across families, and the optimal number of meanings appears to correlate with the number of languages under consideration.
false
[]
[]
null
null
null
The first author is supported by BIGMED project (a Norwegian Research Council LightHouse grant, see bigmed.no). The second author is supported by a subsidy of the Russian Government to support the Programme of Competitive Development of Kazan Federal University. The experiments were performed when both authors took part in the ERC Advanced Grant 324246 EVOLAEMP project led by Gerhard Jäger. All these sources of support are gratefully acknowledged.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
agirre-soroa-2009-personalizing
https://aclanthology.org/E09-1005.pdf
Personalizing PageRank for Word Sense Disambiguation
In this paper we propose a new graph-based method that uses the knowledge in a LKB (based on WordNet) in order to perform unsupervised Word Sense Disambiguation. Our algorithm uses the full graph of the LKB efficiently, performing better than previous approaches in English all-words datasets. We also show that the algorithm can be easily ported to other languages with good results, with the only requirement of having a wordnet. In addition, we make an analysis of the performance of the algorithm, showing that it is efficient and that it could be tuned to be faster.
false
[]
[]
null
null
null
This work has been partially funded by the EU Commission (project KYOTO ICT-2007-211423) and Spanish Research Department (project KNOW TIN2006-15049-C03-01).
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
oseki-etal-2019-inverting
https://aclanthology.org/W19-4220.pdf
Inverting and Modeling Morphological Inflection
Previous "wug" tests (Berko, 1958) on Japanese verbal inflection have demonstrated that Japanese speakers, both adults and children, cannot inflect novel present tense forms to "correct" past tense forms predicted by rules of existent verbs (
false
[]
[]
null
null
null
We would like to thank Takane Ito, Ryo Otoguro, Yoko Sugioka, and SIGMORPHON anonymous reviewers for valuable suggestions. This work was supported by JSPS KAKENHI Grant Number JP18H05589.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
aksenova-deshmukh-2018-formal
https://aclanthology.org/W18-0307.pdf
Formal Restrictions On Multiple Tiers
In this paper, we use harmony systems with multiple feature spreadings as a litmus test for the possible configurations of items involved in certain dependence. The subregular language classes, and the class of tier-based strictly local (TSL) languages in particular, have shown themselves as a good fit for different aspects of natural language. It is also known that there are some patterns that cannot be captured by a single TSL grammar. However, no proposed limitations exist on tier alphabets of several cooperating TSL grammars. While theoretically possible relations among tier alphabets of several TSL grammars are containment, disjunction and intersection, the latter one appears to be unattested. Apart from presenting the typological overview, we discuss formal reasons that might explain such distribution.
false
[]
[]
null
null
null
We thank the anonymous referees for their useful comments and suggestions. We are very grateful to our friends and colleagues at Stony Brook University, especially to Thomas Graf, Lori Repetti, Jeffrey Heinz, and Aniello De Santo for their unlimited knowledge and constant help. Also big thanks to Gary Mar, Jonathan Rawski, Sedigheh Moradi, and Yaobin Liu for valuable comments on the paper. All mistakes, of course, are our own.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bollmann-etal-2014-cora
https://aclanthology.org/W14-0612.pdf
CorA: A web-based annotation tool for historical and other non-standard language data
We present CorA, a web-based annotation tool for manual annotation of historical and other non-standard language data. It allows for editing the primary data and modifying token boundaries during the annotation process. Further, it supports immediate retraining of taggers on newly annotated data.
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hsie-etal-2003-interleaving
https://aclanthology.org/O03-3002.pdf
Interleaving Text and Punctuations for Bilingual Sub-sentential Alignment
We present a new approach to aligning bilingual English and Chinese text at the sub-sentential level by interleaving alphabetic text and punctuation matches. With sub-sentential alignment, we expect to improve the effectiveness of alignment at word, chunk and phrase levels and provide finer grained and more reusable translation memory.
false
[]
[]
null
null
null
We acknowledge the support for this study through grants from Ministry of Education, Taiwan (MOE EX-91-E-FA06-4-4). Thanks are also due to Jim Chang for preparing the training data and evaluating the experimental results.
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
phi-matsumoto-2016-integrating
https://aclanthology.org/Y16-2015.pdf
Integrating Word Embedding Offsets into the Espresso System for Part-Whole Relation Extraction
Part-whole relation, or meronymy plays an important role in many domains. Among approaches to addressing the part-whole relation extraction task, the Espresso bootstrapping algorithm has proved to be effective by significantly improving recall while keeping high precision. In this paper, we first investigate the effect of using fine-grained subtypes and careful seed selection step on the performance of extracting part-whole relation. Our multitask learning and careful seed selection were major factors for achieving higher precision. Then, we improve the Espresso bootstrapping algorithm for part-whole relation extraction task by integrating word embedding approach into its iterations. The key idea of our approach is utilizing an additional ranker component, namely Similarity Ranker in the Instances Extraction phase of the Espresso system. This ranker component uses embedding offset information between instance pairs of part-whole relation. The experiments show that our proposed system achieved a precision of 84.9% for harvesting instances of the part-whole relation, and outperformed the original Espresso system.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
molinero-etal-2009-building
https://aclanthology.org/W09-4619.pdf
Building a morphological and syntactic lexicon by merging various linguistic resources
This paper shows how large-coverage morphological and syntactic NLP lexicons can be developed by interpreting, converting to a common format and merging existing lexical resources. Applied on Spanish, this allowed us to build a morphological and syntactic lexicon, the Leffe. It relies on the Alexina framework, originally developed together with the French lexicon Lefff. We describe how the input resources-two morphological and two syntactic lexicons-were converted into Alexina lexicons and merged. A preliminary evaluation shows that merging different sources of lexical information is indeed a good approach to improve the development speed, the coverage and the precision of linguistic resources.
false
[]
[]
null
null
null
" 2006-2009).We would like also to thank group Gramática delEspañol from USC, and especially to Guillermo Rojo, M. a Paula Santalla and Susana Sotelo, for granting us access to their lexicon.
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dligach-palmer-2008-novel
https://aclanthology.org/P08-2008.pdf
Novel Semantic Features for Verb Sense Disambiguation
We propose a novel method for extracting semantic information about a verb's arguments and apply it to Verb Sense Disambiguation (VSD). We contrast this method with two popular approaches to retrieving this information and show that it improves the performance of our VSD system and outperforms the other two approaches.
false
[]
[]
null
null
null
We gratefully acknowledge the support of the National Science Foundation Grant NSF-0715078, Consistent Criteria for Word Sense Disambiguation, and the GALE program of the Defense Advanced Research Projects Agency, Contract No. HR0011-06-C-0022, a subcontract from the BBN-AGILE Team. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. We also thank our colleagues Rodney Nielsen and Philipp Wetzler for parsing English Gigaword with MaltParser.
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jin-etal-2021-cogie
https://aclanthology.org/2021.acl-demo.11.pdf
CogIE: An Information Extraction Toolkit for Bridging Texts and CogNet
CogNet is a knowledge base that integrates three types of knowledge: linguistic knowledge, world knowledge and commonsense knowledge. In this paper, we propose an information extraction toolkit, called CogIE, which is a bridge connecting raw texts and CogNet. CogIE has three features: versatile, knowledge-grounded and extensible. First, CogIE is a versatile toolkit with a rich set of functional modules, including named entity recognition, entity typing, entity linking, relation extraction, event extraction and frame-semantic parsing. Second, as a knowledge-grounded toolkit, CogIE can ground the extracted facts to CogNet and leverage different types of knowledge to enrich extracted results. Third, for extensibility, owing to the design of three-tier architecture, CogIE is not only a plug-and-play toolkit for developers but also an extensible programming framework for researchers. We release an open-access online system to visually extract information from texts. Source code, datasets and pre-trained models are publicly available at GitHub, with a short instruction video.
false
[]
[]
null
null
null
This work is supported by the National Key Research and Development Program of China (No. 2020AAA0106400), the National Natural Science Foundation of China (No.61806201).
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
cao-zukerman-2012-experimental
https://aclanthology.org/U12-1008.pdf
Experimental Evaluation of a Lexicon- and Corpus-based Ensemble for Multi-way Sentiment Analysis
We describe a probabilistic approach that combines information obtained from a lexicon with information obtained from a Naïve Bayes (NB) classifier for multi-way sentiment analysis. Our approach also employs grammatical structures to perform adjustments for negations, modifiers and sentence connectives. The performance of this method is compared with that of an NB classifier with feature selection, and MCST, a state-of-the-art system. The results of our evaluation show that the performance of our hybrid approach is at least as good as that of these systems. We also examine the influence of three factors on performance: (1) sentiment-ambiguous sentences, (2) probability of the most probable star rating, and (3) coverage of the lexicon and the NB classifier. Our results indicate that the consideration of these factors supports the identification of regions of improved reliability for sentiment analysis.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yang-etal-2021-journalistic
https://aclanthology.org/2021.emnlp-main.419.pdf
Journalistic Guidelines Aware News Image Captioning
The task of news article image captioning aims to generate descriptive and informative captions for news article images. Unlike conventional image captions that simply describe the content of the image in general terms, news image captions follow journalistic guidelines and rely heavily on named entities to describe the image content, often drawing context from the whole article they are associated with. In this work, we propose a new approach to this task, motivated by caption guidelines that journalists follow. Our approach, Journalistic Guidelines Aware News Image Captioning (JoGANIC), leverages the structure of captions to improve the generation quality and guide our representation design. Experimental results, including detailed ablation studies, on two large-scale publicly available datasets show that JoGANIC substantially outperforms state-of-the-art methods both on caption generation and named entity related metrics.
false
[]
[]
null
null
null
We thank Mahdi Abavisani, Shengli Hu, and Di Lu for the fruitful discussions during the development of the method, and all the reviewers for their detailed questions, clarification requests, and suggestions on the paper.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ide-romary-2003-outline
https://aclanthology.org/W03-1901.pdf
Outline of the International Standard Linguistic Annotation Framework
This paper describes the outline of a linguistic annotation framework under development by ISO TC37 SC WG1-1. This international standard provides an architecture for the creation, annotation, and manipulation of linguistic resources and processing software. The goal is to provide maximum flexibility for encoders and annotators, while at the same time enabling interchange and re-use of annotated linguistic resources. We describe here the outline of the standard for the purposes of enabling annotators to begin to explore how their schemes may map into the framework.
false
[]
[]
null
null
null
null
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
levy-etal-2014-ontology
https://aclanthology.org/W14-6003.pdf
Ontology-based Technical Text Annotation
Powerful tools could help users explore and maintain domain specific documentations, provided that documents have been semantically annotated. For that, the annotations must be sufficiently specialized and rich, relying on some explicit semantic model, usually an ontology, that represents the semantics of the target domain. In this paper, we learn to annotate biomedical scientific publications with respect to a Gene Regulation Ontology. We devise a two-step approach to annotate semantic events and relations. The first step is recast as a text segmentation and labeling problem and solved using machine translation tools and a CRF, the second as multi-class classification. We evaluate the approach on the BioNLP-GRO benchmark, achieving an average 61% F-measure on the event detection by itself and 50% F-measure on biological relation annotation. This suggests that human annotators can be supported in domain specific semantic annotation tasks. Under different experimental settings, we also conclude some interesting observations: (1) For event detection and compared to classical time-consuming sequence labeling approach, the newly proposed machine translation based method performed equally well but with much less computation resource required. (2) A highly domain specific part of the task, namely proteins and transcription factors detection, is best performed by domain aware tools, which can be used separately as an initial step of the pipeline.
true
[]
[]
Industry, Innovation and Infrastructure
Good Health and Well-Being
null
We are thankful to the reviewers for their comments. This work is part of the program Investissements d'Avenir, overseen by the French National Research Agency, ANR-10-LABX-0083, (Labex EFL). We acknowledge financial support by the DFG Research Unit FOR 1513, project B1.
2014
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
huang-etal-2003-unified
https://aclanthology.org/2003.mtsummit-papers.23.pdf
A unified statistical model for generalized translation memory system
We introduced, for Translation Memory System, a statistical framework, which unifies the different phases in a Translation Memory System by letting them constrain each other, and enables Translation Memory System a statistical qualification. Compared to traditional Translation Memory Systems, our model operates at a fine grained sub-sentential level such that it improves the translation coverage. Compared with other approaches that exploit sub-sentential benefits, it unifies the processes of source string segmentation, best example selection, and translation generation by making them constrain each other via the statistical confidence of each step. We realized this framework into a prototype system. Compared with an existing product Translation Memory System, our system exhibits obviously better performance in the "assistant quality metric" and gains improvements in the range of 26.3% to 55.1% in the "translation efficiency metric".
false
[]
[]
null
null
null
null
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mendes-etal-2016-modality
https://aclanthology.org/2016.lilt-14.5.pdf
Modality annotation for Portuguese: from manual annotation to automatic labeling
We investigate modality in Portuguese and we combine a linguistic perspective with an application-oriented perspective on modality. We design an annotation scheme reflecting theoretical linguistic concepts and apply this schema to a small corpus sample to show how the scheme deals with real world language usage. We present two schemas for Portuguese, one for spoken Brazilian Portuguese and one for written European Portuguese. Furthermore, we use the annotated data not only to study the linguistic phenomena of modality, but also to train a practical text mining tool to detect modality in text automatically. The modality tagger uses a machine learning classifier trained on automatically extracted features from a syntactic parser. As we only have a small annotated sample available, the tagger was evaluated on 11 modal verbs that are frequent in our corpus and that denote more than one modal meaning. Finally, we discuss several valuable insights into the complexity of the semantic concept of modality that derive from the process of manual annotation of the corpus and from the analysis of the results of the automatic labeling: ambiguity and the semantic and syntactic properties typically associated to one modal meaning in context, and also the interaction of modality with negation and focus. The knowledge gained from the manual annotation task leads us to propose a new unified scheme for modality that applies to the two Portuguese varieties and covers both written and spoken data.
false
[]
[]
null
null
null
This work was partially supported by national funds through FCT -Fundação para a Ciência e Tecnologia, under project Pest-OE/EEI/ LA0021/2013 and project PEst-OE/LIN/UI0214/2013, and through FAPEMIG (PEE-00293-15).
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jing-etal-2018-automatic
https://aclanthology.org/P18-1240.pdf
On the Automatic Generation of Medical Imaging Reports
Medical imaging is widely used in clinical practice for diagnosis and treatment. Report-writing can be error-prone for inexperienced physicians, and time-consuming and tedious for experienced physicians. To address these issues, we study the automatic generation of medical imaging reports. This task presents several challenges. First, a complete report contains multiple heterogeneous forms of information, including findings and tags. Second, abnormal regions in medical images are difficult to identify. Third, the reports are typically long, containing multiple sentences. To cope with these challenges, we (1) build a multi-task learning framework which jointly performs the prediction of tags and the generation of paragraphs, (2) propose a co-attention mechanism to localize regions containing abnormalities and generate narrations for them, (3) develop a hierarchical LSTM model to generate long paragraphs. We demonstrate the effectiveness of the proposed methods on two publicly available datasets.
true
[]
[]
Good Health and Well-Being
null
null
null
2018
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mcdonald-1993-interplay
https://aclanthology.org/1993.iwpt-1.15.pdf
The Interplay of Syntactic and Semantic Node Labels in Partial Parsing
Our natural language comprehension system, "Sparser", uses a semantic grammar in conjunction with a domain model that defines the categories and already-known individuals that can be expected in the sublanguages we are studying, the most significant of which to date has been articles from the Wall Street Journal's "Who's News" column. In this paper we describe the systematic use of default syntactic rules in this grammar: an alternative set of labels on constituents that are used to capture generalities in the semantic interpretation of constructions like the verbal auxiliaries or many adverbials. Syntactic rules form the basis of a set of schemas in a Tree Adjoining Grammar that are used as templates from which to create the primary, semantically labeled rules of the grammar as part of defining the categories in the domain models. This design permits the semantic grammar to be developed on a linguistically principled basis since all the rules must conform to syntactically sound patterns.
false
[]
[]
null
null
null
null
1993
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
damani-2013-improving
https://aclanthology.org/W13-3503.pdf
Improving Pointwise Mutual Information (PMI) by Incorporating Significant Co-occurrence
We design a new co-occurrence based word association measure by incorporating the concept of significant co-occurrence in the popular word association measure Pointwise Mutual Information (PMI). By extensive experiments with a large number of publicly available datasets we show that the newly introduced measure performs better than other co-occurrence based measures and despite being resource-light, compares well with the best known resource-heavy distributional similarity and knowledge based word association measures. We investigate the source of this performance improvement and find that of the two types of significant co-occurrence, corpus-level and document-level, the concept of corpus level significance combined with the use of document counts in place of word counts is responsible for all the performance gains observed. The concept of document level significance is not helpful for PMI adaptation.
false
[]
[]
null
null
null
We thank Dipak Chaudhari and Shweta Ghonghe for their help with the implementation.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bhagat-etal-2005-statistical
https://aclanthology.org/W05-1520.pdf
Statistical Shallow Semantic Parsing despite Little Training Data
Natural language understanding is an essential module in any dialogue system. To obtain satisfactory performance levels, a dialogue system needs a semantic parser/natural language understanding system (NLU) that produces accurate and detailed dialogue oriented semantic output. Recently, a number of semantic parsers trained using either the FrameNet (Baker et al., 1998) or the PropBank (Kingsbury et al., 2002) have been reported. Despite their reasonable performances on general tasks, these parsers do not work so well in specific domains. Also, where these general purpose parsers tend to provide case-frame structures, that include the standard core case roles (Agent, Patient, Instrument, etc.), dialogue oriented domains tend to require additional information about addressees, modality, speech acts, etc. Where general-purpose resources such as PropBank and FrameNet provide invaluable training data for general case, it tends to be a problem to obtain enough training data in a specific dialogue oriented domain.
false
[]
[]
null
null
null
null
2005
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ion-etal-2019-racais
https://aclanthology.org/D19-5714.pdf
RACAI's System at PharmaCoNER 2019
This paper describes the Named Entity Recognition system of the Institute for Artificial Intelligence "Mihai Drăgănescu" of the Romanian Academy (RACAI for short). Our best F1 score of 0.84984 was achieved using an ensemble of two systems: a gazetteer-based baseline and a RNN-based NER system, developed specially for PharmaCoNER 2019. We will describe the individual systems and the ensemble algorithm, compare the final system to the current state of the art, as well as discuss our results with respect to the quality of the training data and its annotation strategy. The resulting NER system is language independent, provided that language-dependent resources and preprocessing tools exist, such as tokenizers and POS taggers.
false
[]
[]
null
null
null
The reported research was supported by the EC grant MARCELL (Multilingual Resources for CEF.AT in the Legal Domain), TENtec no. 27798023.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yang-berwick-1996-principle
https://aclanthology.org/Y96-1038.pdf
Principle-based Parsing for Chinese
This paper describes the implementation of Mandarin Chinese in the Pappi system, a principle-based multilingual parser. We show that substantive linguistic coverage for new and linguistically diverse languages such as Chinese can be achieved, conveniently and efficiently, through parameterization and minimal modifications to a core system. In particular, we focus on two problems that have posed hurdles for Chinese linguistic theories. A novel analysis is proposed for the so-called BA-construction, along with a principled computer implementation. For scoping ambiguity, we developed a simple algorithm based on Jim Huang's Isomorphic Principle. The implementation can parse fairly sophisticated sentences in a couple of seconds, with minimal addition (less than 100 lines of Prolog code) to the core parser. This study suggests that principle-based parsing systems are useful tools for theoretical and computational analysis of linguistic problems.
false
[]
[]
null
null
null
null
1996
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kocisky-etal-2014-learning
https://aclanthology.org/P14-2037.pdf
Learning Bilingual Word Representations by Marginalizing Alignments
We present a probabilistic model that simultaneously learns alignments and distributed representations for bilingual data. By marginalizing over word alignments the model captures a larger semantic context than prior work relying on hard alignments. The advantage of this approach is demonstrated in a cross-lingual classification task, where we outperform the prior published state of the art.
false
[]
[]
null
null
null
This work was supported by a Xerox Foundation Award and EPSRC grant number EP/K036580/1. We acknowledge the use of the Oxford ARC.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
fehri-etal-2011-new
https://aclanthology.org/R11-1076.pdf
A New Representation Model for the Automatic Recognition and Translation of Arabic Named Entities with NooJ
Recognition and translation of named entities (NEs) are two current research topics with regard to the proliferation of electronic documents exchanged through the Internet. The need to assimilate these documents through NLP tools has become necessary and interesting. Moreover, the formal or semiformal modeling of these NEs may intervene in both processes of recognition and translation. Indeed, the modeling makes more reliable the constitution of linguistic resources, limits the impact of linguistic specificities and facilitates transformations from one representation to another. In this context, we propose an approach of recognition and translation based on a representation model of Arabic NEs and a set of transducers resolving morphological and syntactical phenomena.
false
[]
[]
null
null
null
null
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bates-1989-summary
https://aclanthology.org/H89-2029.pdf
Summary of Session 7 -- Natural Language (Part 2)
In this session, Ralph Weischedel of BBN reported on work advancing the state of the art in multiple underlying systems, i.e., translating an understood query or command into a program to produce an answer from one or more application systems. This work addresses one of the key bottlenecks to making NL (and speech) systems truly applicable. Systematic translation techniques from logical form of an English input to commands to carry out the request have previously been worked out only for relational databases, but is extended here in both number of underlying systems and their type.
false
[]
[]
null
null
null
null
1989
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tattar-fishel-2017-bleu2vec
https://aclanthology.org/W17-4771.pdf
bleu2vec: the Painfully Familiar Metric on Continuous Vector Space Steroids
In this participation in the WMT'2017 metrics shared task we implement a fuzzy match score for n-gram precisions in the BLEU metric. To do this we learn n-gram embeddings; we describe two ways of extending the WORD2VEC approach to do so. Evaluation results show that the introduced score beats the original BLEU metric on system and segment level.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
agrawal-an-2014-kea
https://aclanthology.org/S14-2065.pdf
Kea: Sentiment Analysis of Phrases Within Short Texts
Sentiment Analysis has become an increasingly important research topic. This paper describes our approach to building a system for the Sentiment Analysis in Twitter task of the SemEval-2014 evaluation. The goal is to classify a phrase within a short piece of text as positive, negative or neutral. In the evaluation, classifiers trained on Twitter data are tested on data from other domains such as SMS, blogs as well as sarcasm. The results indicate that apart from sarcasm, classifiers built for sentiment analysis of phrases from tweets can be generalized to other short text domains quite effectively. However, in cross-domain experiments, SMS data is found to generalize even better than Twitter data.
false
[]
[]
null
null
null
We would like to thank the organizers of this task for their effort and the reviewers for their useful feedback. This research is funded in part by the Centre for Information Visualization and Data Driven Design (CIV/DDD) established by the Ontario Research Fund.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lochbaum-1991-algorithm
https://aclanthology.org/P91-1005.pdf
An Algorithm for Plan Recognition in Collaborative Discourse
A model of plan recognition in discourse must be based on intended recognition, distinguish each agent's beliefs and intentions from the other's, and avoid assumptions about the correctness or completeness of the agents' beliefs. In this paper, we present an algorithm for plan recognition that is based on the Shared-Plan model of collaboration (Grosz and Sidner, 1990; Lochbaum et al., 1990) and that satisfies these constraints.
true
[]
[]
Partnership for the goals
null
null
I would like to thank Cecile Balkanski, Barbara Grosz, Stuart Shieber, and Candy Sidner for many helpful discussions and comments on the research presented in this paper.
1991
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
yuste-2004-corporate
https://aclanthology.org/W04-1401.pdf
Corporate Language Resources in Multilingual Content Creation, Maintenance and Leverage
This paper focuses on how language resources (LR) for translation (hence LR4Trans) feature, and should ideally feature, within a corporate workflow of multilingual content development. The envisaged scenario will be that of a content management system that acknowledges the value of LR4Trans in the organisation as a key component and corporate knowledge resource.
false
[]
[]
null
null
null
My special thanks go to the two blind reviewers of this paper's first draft. I would also like to thank my colleagues at the Institute for Computational Linguistics of the University of Zurich for their interesting questions during a recent presentation.
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ma-li-2006-comparative
https://aclanthology.org/O06-3004.pdf
A Comparative Study of Four Language Identification Systems
In this paper, we compare four typical spoken language identification (LID) systems. We introduce a novel acoustic segment modeling approach for the LID system frontend. It is assumed that the overall sound characteristics of all spoken languages can be covered by a universal collection of acoustic segment models (ASMs) without imposing strict phonetic definitions. The ASM models are used to decode spoken utterances into strings of segment units in parallel phone recognition (PPR) and universal phone recognition (UPR) frontends. We also propose a novel approach to LID system backend design, where the statistics of ASMs and their co-occurrences are used to form ASM-derived feature vectors, in a vector space modeling (VSM) approach, as opposed to the traditional language modeling (LM) approach, in order to discriminate between individual spoken languages. Four LID systems are built to evaluate the effects of two different frontends and two different backends. We evaluate the four systems based on the 1996, 2003 and 2005 NIST Language Recognition Evaluation (LRE) tasks. The results show that the proposed ASM-based VSM framework reduces the LID error rate quite significantly when compared with the widely-used parallel PRLM method. Among the four configurations, the PPR-VSM system demonstrates the best performance across all of the tasks.
false
[]
[]
null
null
null
We have successfully treated LID as a text categorization application with the topic category being the language identity itself. The VSM method can be extended to other spoken document classification tasks as well, for example, multilingual spoken document categorization by topic. We are also interested in exploring other language-specific features, such as syllabic and tonal properties. It is quite straightforward to incorporate specific salient features and examine their benefits. Furthermore, some high-frequency, language-specific words can also be converted into acoustic words and included in an acoustic word vocabulary, in order to increase the indexing power of these words for their corresponding languages.
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
simov-etal-2014-system
http://www.lrec-conf.org/proceedings/lrec2014/pdf/1005_Paper.pdf
A System for Experiments with Dependency Parsers
In this paper we present a system for experimenting with combinations of dependency parsers. The system supports initial training of different parsing models, creation of parsebank(s) with these models, and different strategies for the construction of ensemble models aimed at improving the output of the individual models by voting. The system employs two algorithms for construction of dependency trees from several parses of the same sentence and several ways for ranking of the arcs in the resulting trees. We have performed experiments with state-of-the-art dependency parsers including MaltParser (
false
[]
[]
null
null
null
This research has received partial funding from the EC's FP7 (FP7/2007-2013) under grant agreement number 610516: "QTLeap: Quality Translation by Deep Language Engineering Approaches".
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sun-etal-2018-open
https://aclanthology.org/D18-1455.pdf
Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text
Open Domain Question Answering (QA) is evolving from complex pipelined systems to end-to-end deep neural networks. Specialized neural models have been developed for extracting answers from either text alone or Knowledge Bases (KBs) alone. In this paper we look at a more practical setting, namely QA over the combination of a KB and entity-linked text, which is appropriate when an incomplete KB is available with a large text corpus. Building on recent advances in graph representation learning we propose a novel model, GRAFT-Net, for extracting answers from a question-specific subgraph containing text and KB entities and relations. We construct a suite of benchmark tasks for this problem, varying the difficulty of questions, the amount of training data, and KB completeness. We show that GRAFT-Net is competitive with the state-of-the-art when tested using either KBs or text alone, and vastly outperforms existing methods in the combined setting.
false
[]
[]
null
null
null
Bhuwan Dhingra is supported by NSF under grants CCF-1414030 and IIS-1250956 and by grants from Google. Ruslan Salakhutdinov is supported in part by ONR grant N000141812861, Apple, and Nvidia NVAIL Award.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
fonseca-etal-2016-lexfom
https://aclanthology.org/W16-5320.pdf
Lexfom: a lexical functions ontology model
A lexical function represents a type of relation that exists between lexical units (words or expressions) in any language. For example, the antonymy is a type of relation that is represented by the lexical function Anti: Anti(big) = small. Those relations include both paradigmatic relations, i.e. vertical relations, such as synonymy, antonymy and meronymy and syntagmatic relations, i.e. horizontal relations, such as objective qualification (legitimate demand), subjective qualification (fruitful analysis), positive evaluation (good review) and support verbs (pay a visit, subject to an interrogation). In this paper, we present the Lexical Functions Ontology Model (lexfom) to represent lexical functions and the relation among lexical units. Lexfom is divided in four modules: lexical function representation (lfrep), lexical function family (lffam), lexical function semantic perspective (lfsem) and lexical function relations (lfrel). Moreover, we show how it combines to Lexical Model for Ontologies (lemon), for the transformation of lexical networks into the semantic web formats. So far, we have implemented 100 simple and 500 complex lexical functions, and encoded about 8,000 syntagmatic and 46,000 paradigmatic relations, for the French language.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bjorne-salakoski-2011-generalizing
https://aclanthology.org/W11-1828.pdf
Generalizing Biomedical Event Extraction
We present a system for extracting biomedical events (detailed descriptions of biomolecular interactions) from research articles. This system was developed for the BioNLP'11 Shared Task and extends our BioNLP'09 Shared Task winning Turku Event Extraction System. It uses support vector machines to first detect event-defining words, followed by detection of their relationships. The theme of the BioNLP'11 Shared Task is generalization, extending event extraction to varied biomedical domains. Our current system successfully predicts events for every domain case introduced in the BioNLP'11 Shared Task, being the only system to participate in all eight tasks and all of their subtasks, with best performance in four tasks.
true
[]
[]
Good Health and Well-Being
null
null
We thank the Academy of Finland for funding, CSC -IT Center for Science Ltd for computational resources and Filip Ginter and Sofie Van Landeghem for help with the manuscript.
2011
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lee-etal-2020-massively
https://aclanthology.org/2020.lrec-1.521.pdf
Massively Multilingual Pronunciation Modeling with WikiPron
We introduce WikiPron, an open-source command-line tool for extracting pronunciation data from Wiktionary, a collaborative multilingual online dictionary. We first describe the design and use of WikiPron. We then discuss the challenges faced scaling this tool to create an automatically-generated database of 1.7 million pronunciations from 165 languages. Finally, we validate the pronunciation database by using it to train and evaluate a collection of generic grapheme-to-phoneme models. The software, pronunciation data, and models are all made available under permissive open-source licenses.
false
[]
[]
null
null
null
We thank the countless Wiktionary contributors and editors without whom this work would have been impossible.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
joshi-etal-2020-dr
https://aclanthology.org/2020.findings-emnlp.335.pdf
Dr. Summarize: Global Summarization of Medical Dialogue by Exploiting Local Structures.
Understanding a medical conversation between a patient and a physician poses unique natural language understanding challenge since it combines elements of standard open-ended conversation with very domainspecific elements that require expertise and medical knowledge. Summarization of medical conversations is a particularly important aspect of medical conversation understanding since it addresses a very real need in medical practice: capturing the most important aspects of a medical encounter so that they can be used for medical decision making and subsequent follow ups. In this paper we present a novel approach to medical conversation summarization that leverages the unique and independent local structures created when gathering a patient's medical history. Our approach is a variation of the pointer generator network where we introduce a penalty on the generator distribution, and we explicitly model negations. The model also captures important properties of medical conversations such as medical knowledge coming from standardized medical ontologies better than when those concepts are introduced explicitly. Through evaluation by doctors, we show that our approach is preferred on twice the number of summaries to the baseline pointer generator model and captures most or all of the information in 80% of the conversations making it a realistic alternative to costly manual summarization by medical experts.
true
[]
[]
Good Health and Well-Being
null
null
null
2020
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sanchez-martinez-etal-2020-english
https://aclanthology.org/2020.eamt-1.32.pdf
An English-Swahili parallel corpus and its use for neural machine translation in the news domain
This paper describes our approach to create a neural machine translation system to translate between English and Swahili (both directions) in the news domain, as well as the process we followed to crawl the necessary parallel corpora from the Internet. We report the results of a pilot human evaluation performed by the news media organisations participating in the H2020 EU-funded project GoURMET.
false
[]
[]
null
null
null
Work funded by the European Union's Horizon 2020 research and innovation programme under grant agreement number 825299, project Global Under-Resourced Media Translation (GoURMET). We thank the editors of the SAWA corpus for letting us use it for training. We also thank Wycliffe Muia (BBC) for help with Swahili examples and DW for helping in the manual evaluation.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
farreres-rodriguez-2004-selecting
http://www.lrec-conf.org/proceedings/lrec2004/pdf/324.pdf
Selecting the Correct English Synset for a Spanish Sense
This work tries to enrich the Spanish Wordnet using a Spanish taxonomy as a knowledge source. The Spanish taxonomy is composed by Spanish senses, while Spanish Wordnet is composed by synsets, mostly linked to English WordNet. A set of weighted associations between Spanish words and Wordnet synsets is used for inferring associations between both taxonomies.
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
peitz-etal-2013-rwth
https://aclanthology.org/W13-2224.pdf
The RWTH Aachen Machine Translation System for WMT 2013
This paper describes the statistical machine translation (SMT) systems developed at RWTH Aachen University for the translation task of the ACL 2013 Eighth Workshop on Statistical Machine Translation (WMT 2013). We participated in the evaluation campaign for the French-English and German-English language pairs in both translation directions. Both hierarchical and phrase-based SMT systems are applied. A number of different techniques are evaluated, including hierarchical phrase reordering, translation model interpolation, domain adaptation techniques, weighted phrase extraction, word class language model, continuous space language model and system combination. By application of these methods we achieve considerable improvements over the respective baseline systems.
false
[]
[]
null
null
null
This work was achieved as part of the Quaero Programme, funded by OSEO, French State agency for innovation.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gokce-etal-2020-embedding
https://aclanthology.org/2020.acl-demos.36.pdf
Embedding-based Scientific Literature Discovery in a Text Editor Application
Each claim in a research paper requires all relevant prior knowledge to be discovered, assimilated, and appropriately cited. However, despite the availability of powerful search engines and sophisticated text editing software, discovering relevant papers and integrating the knowledge into a manuscript remain complex tasks associated with high cognitive load. To define comprehensive search queries requires strong motivation from authors, irrespective of their familiarity with the research field. Moreover, switching between independent applications for literature discovery, bibliography management, reading papers, and writing text burdens authors further and interrupts their creative process. Here, we present a web application that combines text editing and literature discovery in an interactive user interface. The application is equipped with a search engine that couples Boolean keyword filtering with nearest neighbor search over text embeddings, providing a discovery experience tuned to an author's manuscript and his interests. Our application aims to take a step towards more enjoyable and effortless academic writing. The demo of the application and a short video tutorial are available online.
true
[]
[]
Industry, Innovation and Infrastructure
null
null
We acknowledge support from the Swiss National Science Foundation (grant 31003A 156976). We also thank the anonymous reviewers for their useful comments.
2020
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
zhang-choi-2021-situatedqa
https://aclanthology.org/2021.emnlp-main.586.pdf
SituatedQA: Incorporating Extra-Linguistic Contexts into QA
Answers to the same question may change depending on the extra-linguistic contexts (when and where the question was asked). To study this challenge, we introduce SITUATEDQA, an open-retrieval QA dataset where systems must produce the correct answer to a question given the temporal or geographical context. To construct SITUATEDQA, we first identify such questions in existing QA datasets. We find that a significant proportion of information seeking questions have context-dependent answers (e.g. roughly 16.5% of NQ-Open). For such context-dependent questions, we then crowdsource alternative contexts and their corresponding answers. Our study shows that existing models struggle with producing answers that are frequently updated or from uncommon locations. We further quantify how existing models, which are trained on data collected in the past, fail to generalize to answering questions asked in the present, even when provided with an updated evidence corpus (a roughly 15 point drop in accuracy). Our analysis suggests that open-retrieval QA benchmarks should incorporate extra-linguistic context to stay relevant globally and in the future. Our data, code, and datasheet are available at https://situatedqa.github.io/.
false
[]
[]
null
null
null
We would like to thank Sewon Min, Raymond Mooney, and members of UT NLP group for comments and discussions. The work is partially funded by Google Faculty Awards.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
boella-etal-2012-nlp
http://www.lrec-conf.org/proceedings/lrec2012/pdf/1035_Paper.pdf
NLP Challenges for Eunomos a Tool to Build and Manage Legal Knowledge
In this paper, we describe how NLP can semi-automate the construction and analysis of knowledge in Eunomos, a legal knowledge management service which enables users to view legislation from various sources and find the right definitions and explanations of legal concepts in a given context. NLP can semi-automate some routine tasks currently performed by knowledge engineers, such as classifying norms, or linking key terms within legislation to ontological concepts. This helps overcome the resource bottleneck problem of creating specialist knowledge management systems. While accuracy is of the utmost importance in the legal domain, and the information should be verified by domain experts as a matter of course, a semi-automated approach can result in considerable efficiency gains.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
wang-etal-2020-automated
https://aclanthology.org/2020.bea-1.18.pdf
Automated Scoring of Clinical Expressive Language Evaluation Tasks
Many clinical assessment instruments used to diagnose language impairments in children include a task in which the subject must formulate a sentence to describe an image using a specific target word. Because producing sentences in this way requires the speaker to integrate syntactic and semantic knowledge in a complex manner, responses are typically evaluated on several different dimensions of appropriateness yielding a single composite score for each response. In this paper, we present a dataset consisting of non-clinically elicited responses for three related sentence formulation tasks, and we propose an approach for automatically evaluating their appropriateness. Using neural machine translation, we generate correct-incorrect sentence pairs to serve as synthetic data in order to increase the amount and diversity of training data for our scoring model. Our scoring model uses transfer learning to facilitate automatic sentence appropriateness evaluation. We further compare custom word embeddings with pre-trained contextualized embeddings serving as features for our scoring model. We find that transfer learning improves scoring accuracy, particularly when using pre-trained contextualized embeddings.
true
[]
[]
Good Health and Well-Being
null
null
We thank Beth Calamé, Julie Bird, Kristin Hinton, Christine Yang, and Emily Fabius for their contributions to data collection and annotation. This work was supported in part by NIH NIDCD awards R01DC012033 and R21DC017000. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the NIH or NIDCD.
2020
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
luque-infante-lopez-2009-upper
https://aclanthology.org/W09-1009.pdf
Upper Bounds for Unsupervised Parsing with Unambiguous Non-Terminally Separated Grammars
Unambiguous Non-Terminally Separated (UNTS) grammars have properties that make them attractive for grammatical inference. However, these properties do not state the maximal performance they can achieve when they are evaluated against a gold treebank that is not produced by an UNTS grammar. In this paper we investigate such an upper bound. We develop a method to find an upper bound for the unlabeled F1 performance that any UNTS grammar can achieve over a given treebank. Our strategy is to characterize all possible versions of the gold treebank that UNTS grammars can produce and to find the one that optimizes a metric we define. We show a way to translate this score into an upper bound for the F1. In particular, we show that the F1 parsing score of any UNTS grammar can not be beyond 82.2% when the gold treebank is the WSJ10 corpus.
false
[]
[]
null
null
null
This work was supported in part by grant PICT 2006-00969, ANPCyT, Argentina. We would like to thank Pablo Rey (UDP, Chile) for his help with ILP, and Demetrio Martín Vilela (UNC, Argentina) for his detailed review.
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
scarton-specia-2014-exploring
https://aclanthology.org/W14-3343.pdf
Exploring Consensus in Machine Translation for Quality Estimation
This paper presents the use of consensus among Machine Translation (MT) systems for the WMT14 Quality Estimation shared task. Consensus is explored here by comparing the MT system output against several alternative machine translations using standard evaluation metrics. Figures extracted from such metrics are used as features to complement baseline prediction models. The hypothesis is that knowing whether the translation of interest is similar or dissimilar to translations from multiple different MT systems can provide useful information regarding the quality of such a translation.
false
[]
[]
null
null
null
This work was supported by the EXPERT (EU Marie Curie ITN No. 317471) project.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
maslennikov-etal-2006-instance
https://aclanthology.org/P06-2074.pdf
ARE: Instance Splitting Strategies for Dependency Relation-Based Information Extraction
Information Extraction (IE) is a fundamental technology for NLP. Previous methods for IE were relying on co-occurrence relations, soft patterns and properties of the target (for example, syntactic role), which result in problems of handling paraphrasing and alignment of instances. Our system ARE (Anchor and Relation) is based on the dependency relation model and tackles these problems by unifying entities according to their dependency relations, which we found to provide more invariant relations between entities in many cases. In order to exploit the complexity and characteristics of relation paths, we further classify the relation paths into the categories of 'easy', 'average' and 'hard', and utilize different extraction strategies based on the characteristics of those categories. Our extraction method leads to improvement in performance by 3% and 6% for MUC4 and MUC6 respectively as compared to the state-of-the-art IE systems.
false
[]
[]
null
null
null
null
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hovy-etal-2013-learning
https://aclanthology.org/N13-1132.pdf
Learning Whom to Trust with MACE
Non-expert annotation services like Amazon's Mechanical Turk (AMT) are cheap and fast ways to evaluate systems and provide categorical annotations for training data. Unfortunately, some annotators choose bad labels in order to maximize their pay. Manual identification is tedious, so we experiment with an item-response model. It learns in an unsupervised fashion to a) identify which annotators are trustworthy and b) predict the correct underlying labels. We match performance of more complex state-of-the-art systems and perform well even under adversarial conditions. We show considerable improvements over standard baselines, both for predicted label accuracy and trustworthiness estimates. The latter can be further improved by introducing a prior on model parameters and using Variational Bayes inference. Additionally, we can achieve even higher accuracy by focusing on the instances our model is most confident in (trading in some recall), and by incorporating annotated control instances. Our system, MACE (Multi-Annotator Competence Estimation), is available for download.
false
[]
[]
null
null
null
The authors would like to thank Chris Callison-Burch, Victoria Fossum, Stephan Gouws, Marc Schulder, Nathan Schneider, and Noah Smith for invaluable discussions, as well as the reviewers for their constructive feedback.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chapin-1982-acl
https://aclanthology.org/P82-1024.pdf
ACL in 1977
As I leaf through my own "ACL (Historical)" file (which, I am frightened to observe, goes back to the Fourth Annual Meeting, in 1966), and focus in particular on 1977, when I was President, it strikes me that pretty much everything significant that happened in the Association that year was the work of other people. Don Walker was completing the mammoth task of transferring all of the ACL's records from the East Coast to the West, paying off our indebtedness to the Center for Applied Linguistics, and in general getting the Association onto the firm financial and organizational footing which it has enjoyed to this day. Dave Hays was seeing to it that the microfiche journal kept on coming, and George Heldorn joined him as Associate Editor that year to begin the move toward hard copy publication.
false
[]
[]
null
null
null
null
1982
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yaghoobzadeh-schutze-2017-multi
https://aclanthology.org/E17-1055.pdf
Multi-level Representations for Fine-Grained Typing of Knowledge Base Entities
Entities are essential elements of natural language. In this paper, we present methods for learning multi-level representations of entities on three complementary levels: character (character patterns in entity names extracted, e.g., by neural networks), word (embeddings of words in entity names) and entity (entity embeddings). We investigate state-of-the-art learning methods on each level and find large differences, e.g., for deep learning models, traditional n-gram features and the subword model of fasttext (Bojanowski et al., 2016) on the character level; for word2vec (Mikolov et al., 2013) on the word level; and for the order-aware model wang2vec (Ling et al., 2015a) on the entity level. We confirm experimentally that each level of representation contributes complementary information and a joint representation of all three levels improves the existing embedding based baseline for fine-grained entity typing by a large margin. Additionally, we show that adding information from entity descriptions further improves multi-level representations of entities.
false
[]
[]
null
null
null
This work was supported by DFG (SCHU 2246/8-2).
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
qian-etal-2010-python
http://www.lrec-conf.org/proceedings/lrec2010/pdf/30_Paper.pdf
A Python Toolkit for Universal Transliteration
We describe ScriptTranscriber, an open source toolkit for extracting transliterations in comparable corpora from languages written in different scripts. The system includes various methods for extracting potential terms of interest from raw text, for providing guesses on the pronunciations of terms, and for comparing two strings as possible transliterations using both phonetic and temporal measures. The system works with any script in the Unicode Basic Multilingual Plane and is easily extended to include new modules. Given comparable corpora, such as newswire text, in a pair of languages that use different scripts, ScriptTranscriber provides an easy way to mine transliterations from the comparable texts. This is particularly useful for underresourced languages, where training data for transliteration may be lacking, and where it is thus hard to train good transliterators. ScriptTranscriber provides an open source package that allows for ready incorporation of more sophisticated modules-e.g. a trained transliteration model for a particular language pair.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
cardoso-2012-rembrandt
http://www.lrec-conf.org/proceedings/lrec2012/pdf/409_Paper.pdf
Rembrandt - a named-entity recognition framework
Rembrandt is a named entity recognition system specially crafted to annotate documents by classifying named entities and ground them into unique identifiers. Rembrandt played an important role within our research over geographic IR, thus evolving into a more capable framework where documents can be annotated, manually curated and indexed. The goal of this paper is to present Rembrandt's simple but powerful annotation framework to the NLP community.
false
[]
[]
null
null
null
This work is supported by FCT for its LASIGE Multi-annual support, GREASE-II project (grant PTDC/EIA/73614/2006) and a PhD scholarship grant SFRH/BD/45480/2008, and by the Portuguese Government, the European Union (FEDER and FSE) through the Linguateca project, under contract ref.POSC/339/1.3/C/NAC, UMIC and FCCN.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
uszkoreit-2012-quality
https://aclanthology.org/F12-4001.pdf
Quality Translation for a Multilingual Continent - Priorities and Chances for European MT Research
Recent progress in translation technology has caused a real boost for research and technology deployment. At the same time, other areas of language technology also experience scientific advances and economic success stories. However, research in machine translation is still less affected by new developments in core areas of language processing than could be expected. One reason for the low level of interaction is certainly that the predominant research paradigm in MT has not started yet to systematically concentrate on high quality translation. Most of the research and nearly all of the application efforts have focused on solutions for informational inbound translation (assimilation MT). This focus has on the one hand enabled translation of information that normally is not translated at all. In this way MT has changed work and life of many people without ever infringing on the existing translation markets. In my talk I will present a new research approach dedicated to the analytical investigation of existing quality barriers. Such a systematic thrust can serve as the basis of scientifically guided combinations of technologies including hybrid approaches to transfer and the integration of advanced methods for syntactic and semantic processing into the translation process. Together with improved techniques for quality estimation, the expected results will drive translation technology into the direction badly needed by the multilingual European society.
true
[]
[]
Industry, Innovation and Infrastructure
null
null
null
2012
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
schuster-etal-2020-stochastic
https://aclanthology.org/2020.pam-1.11.pdf
Stochastic Frames
In the frame hypothesis (Barsalou, 1992; Löbner, 2014), human concepts are equated with frames, which extend feature lists by a functional structure consisting of attributes and values. For example, a bachelor is represented by the attributes GENDER and MARITAL STATUS and their values 'male' and 'unwed'. This paper makes the point that for many applications of concepts in cognition, including for concepts to be associated with lexemes in natural languages, the right structures to assume are not merely frames but stochastic frames in which attributes are associated with (conditional) probability distributions over values. The paper introduces the idea of stochastic frames and three applications of this idea: vagueness, ambiguity, and typicality.
false
[]
[]
null
null
null
This research was funded by the German Research Foundation (DFG) funded project: CRC 991 The Structure of Representations in Language, Cognition, and Science, specifically projects C09, D01 and a Mercator Fellowship awarded to Henk Zeevat. We would like to thank audiences at CoST 2019 at HHU Düsseldorf, the workshop on Records, Frames, and Attribute Spaces held at ZAS in Berlin, March 2018, and the Workshop on Uncertainty in Meaning and Representation in Linguistics and Philosophy held in Jelenia Góra, Poland, February, 2018.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
akram-hussain-2010-word
https://aclanthology.org/W10-3212.pdf
Word Segmentation for Urdu OCR System
This paper presents a technique for word segmentation for the Urdu OCR system. Word segmentation or word tokenization is a preliminary task for Urdu language processing. Several techniques are available for word segmentation in other languages. A methodology is proposed for word segmentation in this paper which determines the boundaries of words given a sequence of ligatures, based on collocation of ligatures and words in the corpus. Using this technique, word identification rate of 96.10% is achieved, using trigram probabilities normalized over the number of ligatures and words in the sequence.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
richardson-kuhn-2014-unixman
http://www.lrec-conf.org/proceedings/lrec2014/pdf/823_Paper.pdf
UnixMan Corpus: A Resource for Language Learning in the Unix Domain
We present a new resource, the UnixMan Corpus, for studying language learning in the domain of Unix utility manuals. The corpus is built by mining Unix (and other Unix related) man pages for parallel example entries, consisting of English textual descriptions with corresponding command examples. The commands provide a grounded and ambiguous semantics for the textual descriptions, making the corpus of interest to work on Semantic Parsing and Grounded Language Learning. In contrast to standard resources for Semantic Parsing, which tend to be restricted to a small number of concepts and relations, the UnixMan Corpus spans a wide variety of utility genres and topics, and consists of hundreds of command and domain entity types. The semi-structured nature of the manuals also makes it easy to exploit other types of relevant information for Grounded Language Learning. We describe the details of the corpus and provide preliminary classification results.
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
huber-hinrichs-2019-including
https://aclanthology.org/2019.gwc-1.4.pdf
Including Swiss Standard German in GermaNet
GermaNet (Henrich and Hinrichs, 2010; Hamp and Feldweg, 1997) is a comprehensive wordnet of Standard German spoken in the Federal Republic of Germany. The GermaNet team aims at modelling the basic vocabulary of the language. German is an official language or a minority language in many countries. It is an official language in Austria, Germany and Switzerland, each with its own codified standard variety (Auer, 2014, p. 21), and also in Belgium, Liechtenstein, and Luxemburg. German is recognized as a minority language in thirteen additional countries, including Brasil, Italy, Poland, and Russia. However, the different standard varieties of German are currently not represented in GermaNet. With this project, we make a start on changing this by including one variety, namely Swiss Standard German, into GermaNet. This shall give a more inclusive perspective on the German language. We will argue that Swiss Standard German words, Helvetisms, are best included into the already existing wordnet GermaNet, rather than creating them as a separate wordnet.
false
[]
[]
null
null
null
We thank Reinhild Barkey, Ç agrı Çöltekin and Christiane Fellbaum for providing insight and expertise from which this project has greatly benefitted. Furthermore, we gratefully acknowledge the financial support of our research by the German Ministry for Education and Research (BMBF) as part of the CLARIN-D research infrastructure grant given to the University of Tübingen.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
huang-2013-social
https://aclanthology.org/W13-4203.pdf
Social Metaphor Detection via Topical Analysis
As massive social media data, e.g., comments, blog articles, or tweets, become available, there is rising interest in automatic metaphor detection from open social text. One of the most well-known approaches is detecting the violation of selectional preference. The idea of selectional preference is that verbs tend to have semantic preferences for their arguments. If we find that, in some text, any arguments of these predicates are not of their preferred semantic classes, the text is very likely to contain a metaphor. However, only a few previous papers have focused on leveraging topical analysis techniques in metaphor detection. Intuitively, both predicates and arguments exhibit strong tendencies towards a few specific topics, and this topical information provides additional evidence to facilitate the identification of selectional preference in text. In this paper, we study how metaphor detection techniques can be influenced by topical analysis techniques, based on our proposed three-step approach. We formally define the problem, propose our approach for metaphor detection, and then conduct experiments on a real-world data set. Although our experimental results show that topics do not have a strong impact on the metaphor detection techniques, we analyze the results and present some insights based on our study.
false
[]
[]
null
null
null
Supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Defense US Army Research Laboratory contract number W911NF-12-C-0020. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoD/ARL, or the U.S. Government. We would also like to thank Zi Yang for his help with the topical analysis experiments, Teruko Mitamura and Eric Nyberg for their instructions, and Yi-Chia Wang and Dong Nguyen for the work of data collection.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bhattacharja-2010-benglish
https://aclanthology.org/Y10-1011.pdf
Benglish Verbs: A Case of Code-mixing in Bengali
In this article, we show how grammar can account for Benglish verbs, a particular type of complex predicate consisting of an English word and a Bengali verb (e.g. /EksiDenT kOra/ 'to have an accident', /in kOra/ 'to get/come/put in' or /kOnfuz kOra/ 'to confuse'). We analyze these verbs in the light of a couple of models (e.g. Kageyama, 1991; Lieber, 1992; Matsumoto, 1996) which claim that complex predicates are necessarily formed in syntax. However, Benglish verbs like /in kOra/ or /kOnfuz kOra/ are problematic for these approaches because it is unclear how the preposition in or the flexional verb confuse can appear as the arguments of the verb /kOra/ 'to do' in an underlying syntactic structure. We claim that all Benglish verbs can be satisfactorily handled in Morphology in the light of Whole Word Morphology (Ford et al., 1997 and Singh, 2006).
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dethlefs-2011-bremen
https://aclanthology.org/W11-2847.pdf
The Bremen System for the GIVE-2.5 Challenge
This paper presents the Bremen system for the GIVE-2.5 challenge. It is based on decision trees learnt from new annotations of the GIVE corpus augmented with manually specified rules. Surface realisation is based on context-free grammars. The paper will address advantages and shortcomings of the approach and discuss how the present system can serve as a baseline for a future evaluation with an improved version using hierarchical reinforcement learning with graphical models.
false
[]
[]
null
null
null
Thanks to the German Research Foundation DFG and the Transregional Collaborative Research Centre SFB/TR8 'Spatial Cognition' for partial support.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
iomdin-etal-2013-linguistic
https://aclanthology.org/W13-3402.pdf
Linguistic Problems Based on Text Corpora
The paper is focused on self-contained linguistic problems based on text corpora. We argue that corpus-based problems differ from traditional linguistic problems because they make it possible to represent language variation. Furthermore, they often require basic statistical thinking from the students. The practical value of using data obtained from text corpora for teaching linguistics through linguistic problems is shown.
false
[]
[]
null
null
null
null
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ninomiya-etal-2009-deterministic
https://aclanthology.org/E09-1069.pdf
Deterministic Shift-Reduce Parsing for Unification-Based Grammars by Using Default Unification
Many parsing techniques, including parameter estimation, assume the use of a packed parse forest for efficient and accurate parsing. However, they have several inherent problems deriving from the restriction of locality in the packed parse forest. Deterministic parsing is one solution that can achieve simple and fast parsing without the mechanisms of the packed parse forest by accurately choosing search paths. We propose (i) deterministic shift-reduce parsing for unification-based grammars, and (ii) best-first shift-reduce parsing with beam thresholding for unification-based grammars. Deterministic parsing cannot simply be applied to unification-based grammar parsing, which often fails because of its hard constraints. It is therefore developed by using default unification, which almost always succeeds in unification by overwriting inconsistent constraints in grammars.
false
[]
[]
null
null
null
null
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kiesel-etal-2021-image
https://aclanthology.org/2021.argmining-1.4.pdf
Image Retrieval for Arguments Using Stance-Aware Query Expansion
Many forms of argumentation employ images as persuasive means, but research in argument mining has so far focused on verbal argumentation. This paper shows how to integrate images into argument mining research, specifically into argument retrieval. By exploiting the sophisticated image representations of keyword-based image search, we propose to use semantic query expansion for both the pro and the con stance to retrieve "argumentative images" for the respective stance. Our results indicate that even simple expansions provide a strong baseline, reaching a precision@10 of 0.49 for images that are (1) on-topic, (2) argumentative, and (3) on-stance. An in-depth analysis reveals a high topic dependence of the retrieval performance and shows the need to further investigate images that provide contextual information.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
santini-etal-2006-implementing
https://aclanthology.org/P06-2090.pdf
Implementing a Characterization of Genre for Automatic Genre Identification of Web Pages
In this paper, we propose an implementable characterization of genre suitable for automatic genre identification of web pages. This characterization is implemented as an inferential model based on a modified version of Bayes' theorem. Such a model can deal with genre hybridism and individualization, two important forces behind genre evolution. Results show that this approach is effective and is worth further research.
false
[]
[]
null
null
null
null
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dorr-etal-2002-duster
https://link.springer.com/chapter/10.1007/3-540-45820-4_4.pdf
DUSTer: a method for unraveling cross-language divergences for statistical word-level alignment
null
false
[]
[]
null
null
null
null
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false