Dataset schema (each record below lists one value per field, in this order):

- ID: string (lengths 11 to 54)
- url: string (lengths 33 to 64)
- title: string (lengths 11 to 184)
- abstract: string (lengths 17 to 3.87k)
- label_nlp4sg: bool (2 classes)
- task: sequence
- method: sequence
- goal1: stringclasses (9 values)
- goal2: stringclasses (9 values)
- goal3: stringclasses (1 value)
- acknowledgments: string (lengths 28 to 1.28k)
- year: string (lengths 4 to 4)
- sdg1 to sdg17: bool (sdg3, sdg4, sdg5, sdg8, sdg9, sdg10, sdg11, sdg13, sdg16, and sdg17 have 2 classes; the remaining SDG columns have 1 class)
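For working with this dump programmatically, the sketch below shows one way to load records matching the schema above and filter for positively labeled papers. This is a minimal sketch under stated assumptions: the file name nlp4sg_papers.jsonl is a hypothetical local JSONL export (one record per line), not a confirmed artifact of this dataset; substitute whatever file or dataset ID actually holds the data.

```python
# Minimal sketch: load records matching the schema above and keep the
# papers labeled as NLP4SG, reporting which SDG flags are set.
# NOTE: "nlp4sg_papers.jsonl" is a hypothetical local export, not a
# confirmed file name from the source.
import json
from typing import Dict, Iterator, List, Tuple

# The 17 boolean SDG columns listed in the schema.
SDG_COLUMNS = [f"sdg{i}" for i in range(1, 18)]

def load_records(path: str = "nlp4sg_papers.jsonl") -> Iterator[Dict]:
    """Yield one dict per paper, assuming one JSON object per line."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

def positive_examples(records: Iterator[Dict]) -> Iterator[Tuple[str, str, List[str]]]:
    """Keep NLP4SG-labeled papers and list their positive SDG columns."""
    for rec in records:
        if rec["label_nlp4sg"]:
            sdgs = [col for col in SDG_COLUMNS if rec.get(col)]
            yield rec["ID"], rec["goal1"], sdgs

if __name__ == "__main__":
    for paper_id, goal, sdgs in positive_examples(load_records()):
        print(paper_id, goal, sdgs)
```

Run against the records shown here, this would print, for example, taghipour-ng-2016-neural with goal "Quality Education" and sdg4 set.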
cases-etal-2019-recursive
https://aclanthology.org/N19-1365
Recursive Routing Networks: Learning to Compose Modules for Language Understanding
We introduce Recursive Routing Networks (RRNs), which are modular, adaptable models that learn effectively in diverse environments. RRNs consist of a set of functions, typically organized into a grid, and a meta-learner decision-making component called the router. The model jointly optimizes the parameters of the functions and the meta-learner's policy for routing inputs through those functions. RRNs can be incorporated into existing architectures in a number of ways; we explore adding them to word representation layers, recurrent network hidden layers, and classifier layers. Our evaluation task is natural language inference (NLI). Using the MULTINLI corpus, we show that an RRN's routing decisions reflect the high-level genre structure of that corpus. To show that RRNs can learn to specialize to more fine-grained semantic distinctions, we introduce a new corpus of NLI examples involving implicative predicates, and show that the model components become fine-tuned to the inferential signatures that are characteristic of these predicates.
false
[]
[]
null
null
null
We thank George Supaniratisai, Arun Chaganty, Kenny Xu and Abi See for valuable discussions, and the anonymous reviewers for their useful suggestions. Clemens Rosenbaum was a recipient of an IBM PhD Fellowship while working on this publication. We acknowledge the Office of the Vice Provost for Undergraduate Education at Stanford for the summer internships for Atticus Geiger, Olivia Li and Sandhini Agarwal. This research is based in part upon work supported by the Stanford Data Science Initiative, by the NSF under Grant No. BCS-1456077, by the NSF Award IIS-1514268, and by the Air Force Research Laboratory and DARPA under agreement number FA8750-18-2-0126. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory and DARPA or the U.S. Government.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
rapaport-shapiro-1984-quasi
https://aclanthology.org/P84-1016
Quasi-Indexical Reference in Propositional Semantic Networks
We discuss how a deductive question-answering system can represent the beliefs or other cognitive states of users, of other (interacting) systems, and of itself.
false
[]
[]
null
null
null
null
1984
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ryu-1996-argument
https://aclanthology.org/Y96-1034
Argument Structure and Unaccusativity in the Constraint-based Lexicon
This paper addresses the issue of Split Intransitivity (SI) and Unaccusative Mismatches (UMs), proposing a constraint-based approach to SI and UMs within a recent framework of Head-driven Phrase Structure Grammar. I argue against the widely accepted dichotomous distinction of intransitive verbs, which has been advanced by the Unaccusative Hypothesis [Perlmutter (1978)]. I then propose a quadripartitive distinction of intransitive verbs on the basis of the distribution of the subject argument in the semantically motivated argument structure, and show that this quadripartitive distinction allows a better understanding of SI and UMs. The main idea of this proposal will be summarized as the Quadripartitive Split Intransitivity Hypothesis (Qsm).
false
[]
[]
null
null
null
null
1996
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
liu-1995-preferred
https://aclanthology.org/Y95-1029
Preferred Clause Structure in Mandarin Spoken and Written Discourse
This paper studies the preferred clause structure in Mandarin. Tao's [1] pioneering work proposed the following "preferred clause structure in conversational Mandarin":
false
[]
[]
null
null
null
null
1995
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
windhouwer-2012-relcat
http://www.lrec-conf.org/proceedings/lrec2012/pdf/954_Paper.pdf
RELcat: a Relation Registry for ISOcat data categories
The ISOcat Data Category Registry contains a basically flat and easily extensible list of data category specifications. To foster reuse and standardization, only very shallow relationships among data categories are stored in the registry. However, to assist crosswalks, possibly based on personal views, between various (application) domains and to overcome a possible proliferation of data categories, more types of ontological relationships need to be specified. RELcat is a first prototype of a Relation Registry, which allows storing arbitrary relationships. These relationships can reflect the personal view of one linguist or a larger community. The basis of the registry is a relation type taxonomy that can easily be extended. This allows, on the one hand, loading existing sets of relations specified in, for example, an OWL (2) ontology or a SKOS taxonomy, and, on the other hand, lets algorithms that query the registry traverse the stored semantic network while remaining ignorant of the original source vocabulary. This paper describes first experiences with RELcat and explains some initial design decisions.
false
[]
[]
null
null
null
Thanks to early adopters Matej Durco (SMC4LRT), Irina Nevskaya (RELISH) and Ineke Schuurman (CLARIN-NL/VL) for driving this first version of RELcat forward.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
strobelt-etal-2021-lmdiff
https://aclanthology.org/2021.emnlp-demo.12
LMdiff: A Visual Diff Tool to Compare Language Models
While different language models are ubiquitous in NLP, it is hard to contrast their outputs and identify which contexts one can handle better than the other. To address this question, we introduce LMdiff, a tool that visually compares probability distributions of two models that differ, e.g., through finetuning, distillation, or simply training with different parameter sizes. LMdiff allows the generation of hypotheses about model behavior by investigating text instances token by token and further assists in choosing these interesting text instances by identifying the most interesting phrases from large corpora. We showcase the applicability of LMdiff for hypothesis generation across multiple case studies. A demo is available at http://lmdiff.net.
false
[]
[]
null
null
null
We thank Ankur Parikh and Ian Tenney for helpful comments on an earlier draft of this paper. This work was supported by the MIT-IBM Watson AI Lab. This work has been developed in part during the BigScience Summer of Language Models 2021.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
taghipour-ng-2016-neural
https://aclanthology.org/D16-1193
A Neural Approach to Automated Essay Scoring
Traditional automated essay scoring systems rely on carefully designed features to evaluate and score essays. The performance of such systems is tightly bound to the quality of the underlying features. However, it is laborious to manually design the most informative features for such a system. In this paper, we develop an approach based on recurrent neural networks to learn the relation between an essay and its assigned score, without any feature engineering. We explore several neural network models for the task of automated essay scoring and perform some analysis to gain insights into the models. The results show that our best system, which is based on long short-term memory networks, outperforms a strong baseline by 5.6% in terms of quadratic weighted Kappa, without requiring any feature engineering.
true
[]
[]
Quality Education
null
null
This research is supported by Singapore Ministry of Education Academic Research Fund Tier 2 grant MOE2013-T2-1-150. We are also grateful to the anonymous reviewers for their helpful comments.
2016
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
papageorgiou-etal-2000-unified
http://www.lrec-conf.org/proceedings/lrec2000/pdf/181.pdf
A Unified POS Tagging Architecture and its Application to Greek
This paper proposes a flexible and unified tagging architecture that could be incorporated into a number of applications like information extraction, cross-language information retrieval, term extraction, or summarization, while providing an essential component for subsequent syntactic processing or lexicographical work. A feature-based multi-tiered approach (FBT tagger) is introduced for part-of-speech tagging. FBT is a variant of the well-known transformation based learning paradigm aiming at improving the quality of tagging highly inflective languages such as Greek. Additionally, a large experiment concerning the Greek language is conducted and results are presented for a variety of text genres, including financial reports, newswires, press releases and technical manuals. Finally, the adopted evaluation methodology is discussed.
false
[]
[]
null
null
null
null
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
reiter-etal-2008-resource
https://aclanthology.org/W08-2231
A Resource-Poor Approach for Linking Ontology Classes to Wikipedia Articles
The applicability of ontologies for natural language processing depends on the ability to link ontological concepts and relations to their realisations in texts. We present a general, resource-poor approach to creating such a linking automatically by extracting Wikipedia articles corresponding to ontology classes. We evaluate our approach in an experiment with the Music Ontology. We consider linking as a promising starting point for subsequent steps of information extraction.
false
[]
[]
null
null
null
Acknowledgements. We kindly thank our annotators for their effort and Rüdiger Wolf for technical support.
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
saleh-etal-2014-study
https://aclanthology.org/C14-1020
A Study of using Syntactic and Semantic Structures for Concept Segmentation and Labeling
This paper presents an empirical study on using syntactic and semantic information for Concept Segmentation and Labeling (CSL), a well-known component in spoken language understanding. Our approach is based on reranking N-best outputs from a state-of-the-art CSL parser. We perform extensive experimentation by comparing different tree-based kernels with a variety of representations of the available linguistic information, including semantic concepts, words, POS tags, shallow and full syntax, and discourse trees. The results show that the structured representation with the semantic concepts yields significant improvement over the base CSL parser, much larger compared to learning with an explicit feature vector representation. We also show that shallow syntax helps improve the results and that discourse relations can be partially beneficial.
false
[]
[]
null
null
null
This research is developed by the Arabic Language Technologies (ALT) group at Qatar Computing Research Institute (QCRI) within the Qatar Foundation in collaboration with MIT. It is part of the Interactive sYstems for Answer Search (Iyas) project.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
malmasi-etal-2015-norwegian
https://aclanthology.org/R15-1053
Norwegian Native Language Identification
We present a study of Native Language Identification (NLI) using data from learners of Norwegian, a language not yet used for this task. NLI is the task of predicting a writer's first language using only their writings in a learned language. We find that three feature types, function words, part-of-speech n-grams and a hybrid part-of-speech/function word mixture n-gram model are useful here. Our system achieves an accuracy of 79% against a baseline of 13% for predicting an author's L1. The same features can distinguish non-native writing with 99% accuracy. We also find that part-of-speech n-gram performance on this data deviates from previous NLI results, possibly due to the use of manually post-corrected tags.
false
[]
[]
null
null
null
We would like to thank Kari Tenfjord and Paul Meurer for providing access to the ASK corpus and their assistance in using the data.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
siblini-etal-2021-towards
https://aclanthology.org/2021.acl-short.130
Towards a more Robust Evaluation for Conversational Question Answering
With the explosion of chatbot applications, Conversational Question Answering (CQA) has generated a lot of interest in recent years. Among proposals, reading comprehension models which take advantage of the conversation history (previous QA) seem to answer better than those which only consider the current question. Nevertheless, we note that the CQA evaluation protocol has a major limitation. In particular, models are allowed, at each turn of the conversation, to access the ground truth answers of the previous turns. Not only does this severely limit their applicability in fully autonomous chatbots, it also leads to unsuspected biases in their behavior. In this paper, we highlight this effect and propose new tools for evaluation and training in order to guard against the noted issues. The new results that we report reinforce the methods of the current state of the art.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dudy-etal-2018-multi
https://aclanthology.org/W18-1210
A Multi-Context Character Prediction Model for a Brain-Computer Interface
Brain-computer interfaces and other augmentative and alternative communication devices introduce language-modeling challenges distinct from other character-entry methods. In particular, the acquired EEG (electroencephalogram) signal is noisier, which, in turn, makes the user intent harder to decipher. In order to adapt to this condition, we propose to maintain an ambiguous history for every time step, and to employ, apart from the character language model, word information to produce a more robust prediction system. We present preliminary results that compare this proposed Online-Context Language Model (OCLM) to current algorithms that are used in this type of setting. Evaluations on both perplexity and predictive accuracy demonstrate promising results when dealing with ambiguous histories in order to provide to the front end a distribution of the next character the user might type.
false
[]
[]
null
null
null
We would like to thank the reviewers of the SCLeM workshop for their insightful comments and feedback. We also would like to thank Brian Roark for his helpful advice, as well as our clinical team in the Institute on Development & Disability at OHSU. Research reported in this paper was supported by the National Institute on Deafness and Other Communication Disorders of the NIH under award number 5R01DC009834-09. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
danieli-etal-2004-evaluation
http://www.lrec-conf.org/proceedings/lrec2004/pdf/371.pdf
Evaluation of Consensus on the Annotation of Prosodic Breaks in the Romance Corpus of Spontaneous Speech ``C-ORAL-ROM''
C-ORAL-ROM, Integrated Reference Corpora For Spoken Romance Languages, is a multilingual corpus of spontaneous speech delivered within the IST Program. Corpora are tagged with respect to terminal and non-terminal prosodic breaks. Terminal breaks are considered the most perceptively relevant cues for determining utterance boundaries in spontaneous speech resources. The paper presents the evaluation of the inter-annotator agreement accomplished by an institution external to the consortium and shows the level of reliability of the tagging delivered and the annotation scheme adopted. The data show, at a cross-linguistic level, a very high kappa coefficient (between 0.77 and 0.92, depending on the language resource). A strong level of agreement specifically for terminal breaks has also been recorded. The data thus show that the annotation of the utterances identified in terms of their prosodic breaks is able to capture relevant perceptual facts, and it appears that the proposed coding scheme can be applied in a highly replicable way.
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
quirk-etal-2015-language
https://aclanthology.org/P15-1085
Language to Code: Learning Semantic Parsers for If-This-Then-That Recipes
Using natural language to write programs is a touchstone problem for computational linguistics. We present an approach that learns to map natural-language descriptions of simple "if-then" rules to executable code. By training and testing on a large corpus of naturally-occurring programs (called "recipes") and their natural language descriptions, we demonstrate the ability to effectively map language to code. We compare a number of semantic parsing approaches on the highly noisy training data collected from ordinary users, and find that loosely synchronous systems perform best.
false
[]
[]
null
null
null
The authors would like to thank William Dolan and the anonymous reviewers for their helpful advice and suggestions.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sakaji-etal-2019-financial
https://aclanthology.org/W19-5507
Financial Text Data Analytics Framework for Business Confidence Indices and Inter-Industry Relations
In this paper, we propose a novel framework for analyzing inter-industry relations using the contact histories of local banks. Contact histories are data recorded when employees communicate with customers. By analyzing contact histories, we can determine business confidence levels in the local region and analyze inter-industry relations using industrial data that is attached to the contact history. However, it is often difficult for bankers to create analysis programs. Therefore, we propose a banker-friendly inter-industry relations analysis framework. In this study, we generated regional business confidence indices and used them to analyze inter-industry relations.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wu-etal-2021-newsbert-distilling
https://aclanthology.org/2021.findings-emnlp.280
NewsBERT: Distilling Pre-trained Language Model for Intelligent News Application
Pre-trained language models (PLMs) like BERT have made great progress in NLP. News articles usually contain rich textual information, and PLMs have the potential to enhance news text modeling for various intelligent news applications like news recommendation and retrieval. However, most existing PLMs are huge, with hundreds of millions of parameters. Many online news applications need to serve millions of users with low latency, which poses a great challenge to incorporating PLMs in these scenarios. Knowledge distillation techniques can compress a large PLM into a much smaller one while keeping good performance. However, existing language models are pre-trained and distilled on general corpora like Wikipedia, which have gaps with the news domain and may be suboptimal for news intelligence. In this paper, we propose NewsBERT, which can distill PLMs for efficient and effective news intelligence. In our approach, we design a teacher-student joint learning and distillation framework to collaboratively learn both teacher and student models, where the student model can learn from the learning experience of the teacher model. In addition, we propose a momentum distillation method that incorporates the gradients of the teacher model into the update of the student model to better transfer the knowledge learned by the teacher model. Thorough experiments on two real-world datasets with three tasks show that NewsBERT can empower various intelligent news applications with much smaller models.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
This work was supported by the National Natural Science Foundation of China under Grant numbers 82090053, 61862002, and Tsinghua-Toyota Research Funds 20213930033.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
bengler-2000-automotive
http://www.lrec-conf.org/proceedings/lrec2000/pdf/312.pdf
Automotive Speech-Recognition - Success Conditions Beyond Recognition Rates
From a car manufacturer's point of view it is very important to integrate evaluation procedures into the MMI development process. When focusing on the usability evaluation of speech-input and speech-output systems, aspects beyond recognition rates must be fulfilled. Two of these conditions will be discussed based upon user studies conducted in 1999: (1) mental workload and distraction, and (2) learnability.
false
[]
[]
null
null
null
null
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lin-2002-web
http://www.lrec-conf.org/proceedings/lrec2002/pdf/85.pdf
The Web as a Resource for Question Answering: Perspectives and Challenges
The vast amounts of information readily available on the World Wide Web can be effectively used for question answering in two fundamentally different ways. In the federated approach, techniques for handling semistructured data are applied to access Web sources as if they were databases, allowing large classes of common questions to be answered uniformly. In the distributed approach, large-scale text-processing techniques are used to extract answers directly from unstructured Web documents. Because the Web is orders of magnitude larger than any human-collected corpus, question answering systems can capitalize on its unparalleled levels of data redundancy. Analysis of real-world user questions reveals that the federated and distributed approaches complement each other nicely, suggesting a hybrid approach in future question answering systems.
false
[]
[]
null
null
null
I'd like to thank Boris Katz, Greg Marton, and Vineet Sinha for their helpful comments on earlier drafts.
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
schulte-im-walde-2006-experiments
https://aclanthology.org/J06-2001
Experiments on the Automatic Induction of German Semantic Verb Classes
This article presents clustering experiments on German verbs: A statistical grammar model for German serves as the source for a distributional verb description at the lexical syntax-semantics interface, and the unsupervised clustering algorithm k-means uses the empirical verb properties to perform an automatic induction of verb classes. Various evaluation measures are applied to compare the clustering results to gold standard German semantic verb classes under different criteria. The primary goals of the experiments are (1) to empirically utilize and investigate the well-established relationship between verb meaning and verb behavior within a cluster analysis and (2) to investigate the required technical parameters of a cluster analysis with respect to this specific linguistic task. The clustering methodology is developed on a small-scale verb set and then applied to a larger-scale verb set including 883 German verbs.
false
[]
[]
null
null
null
The work reported here was performed while the author was a member of the DFG-funded PhD program "Graduiertenkolleg" Sprachliche Repräsentationen und ihre Interpretation at the Institute for Natural Language Processing (IMS), University of Stuttgart, Germany. Many thanks to Helmut Schmid, Stefan Evert, Frank Keller, Scott McDonald, Alissa Melinger, Chris Brew, Hinrich Schütze, Jonas Kuhn, and the two anonymous reviewers for their valuable comments on previous versions of this article.
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
turchi-etal-2014-adaptive
https://aclanthology.org/P14-1067
Adaptive Quality Estimation for Machine Translation
The automatic estimation of machine translation (MT) output quality is a hard task in which the selection of the appropriate algorithm and the most predictive features over reasonably sized training sets plays a crucial role. When moving from controlled lab evaluations to real-life scenarios the task becomes even harder. For current MT quality estimation (QE) systems, additional complexity comes from the difficulty to model user and domain changes. Indeed, the instability of the systems with respect to data coming from different distributions calls for adaptive solutions that react to new operating conditions. To tackle this issue we propose an online framework for adaptive QE that targets reactivity and robustness to user and domain changes. Contrastive experiments in different testing conditions involving user and domain changes demonstrate the effectiveness of our approach.
false
[]
[]
null
null
null
This work has been partially supported by the EC-funded project MateCat (ICT-2011.4.2-287688).
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kongthon-etal-2011-semantic
https://aclanthology.org/W11-3106
A Semantic Based Question Answering System for Thailand Tourism Information
This paper reports our ongoing research work to create a semantic based question answering system for Thailand tourism information. Our proposed system focuses on mapping expressions in Thai natural language into ontology query language (SPARQL).
false
[]
[]
null
null
null
null
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
schatzmann-etal-2007-agenda
https://aclanthology.org/N07-2038
Agenda-Based User Simulation for Bootstrapping a POMDP Dialogue System
This paper investigates the problem of bootstrapping a statistical dialogue manager without access to training data and proposes a new probabilistic agenda-based method for simulating user behaviour. In experiments with a statistical POMDP dialogue system, the simulator was realistic enough to successfully test the prototype system and train a dialogue policy. An extensive study with human subjects showed that the learned policy was highly competitive, with task completion rates above 90%.
false
[]
[]
null
null
null
null
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hazem-hernandez-2019-meta
https://aclanthology.org/R19-1055
Meta-Embedding Sentence Representation for Textual Similarity
Word embedding models are now widely used in most NLP applications. Despite their effectiveness, there is no clear evidence about the choice of the most appropriate model. It often depends on the nature of the task and on the quality and size of the used data sets. This remains true for bottom-up sentence embedding models. However, no straightforward investigation has been conducted so far. In this paper, we propose a systematic study of the impact of the main word embedding models on sentence representation. By contrasting in-domain and pre-trained embedding models, we show under which conditions they can be jointly used for bottom-up sentence embeddings. Finally, we propose the first bottom-up meta-embedding representation at the sentence level for textual similarity. Significant improvements are observed in several tasks including question-to-question similarity, paraphrasing and next utterance ranking.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
carlberger-etal-2001-improving
https://aclanthology.org/W01-1703
Improving Precision in Information Retrieval for Swedish using Stemming
In this paper we present an evaluation of how much stemming improves precision in information retrieval for Swedish texts. To perform this, we built an information retrieval tool with optional stemming and created a tagged corpus in Swedish. We know that stemming in information retrieval for English, Dutch and Slovenian gives better precision the more inflecting the language is, but precision also depends on query length and document length. Our final results were that stemming improved precision and recall by 15 and 18 percent, respectively, for Swedish texts having an average length of 181 words.
false
[]
[]
null
null
null
We would like to thank the search engine team and specifically Jesper Ekhall at Euroseek AB for their support with the integration of our stemming algorithms in their search engine and allowing us to use their search engine in our experiments.
2001
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dredze-crammer-2008-active
https://aclanthology.org/P08-2059
Active Learning with Confidence
Active learning is a machine learning approach to achieving high accuracy with a small amount of labels by letting the learning algorithm choose instances to be labeled. Most previous approaches based on discriminative learning use the margin for choosing instances. We present a method for incorporating confidence into the margin by using a newly introduced online learning algorithm and show empirically that confidence improves active learning.
false
[]
[]
null
null
null
null
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ji-etal-2020-span
https://aclanthology.org/2020.coling-main.8
Span-based Joint Entity and Relation Extraction with Attention-based Span-specific and Contextual Semantic Representations
Span-based joint extraction models have shown their efficiency on entity recognition and relation extraction. These models regard text spans as candidate entities and span tuples as candidate relation tuples. Span semantic representations are shared in both entity recognition and relation extraction, while existing models cannot well capture the semantics of these candidate entities and relations. To address these problems, we introduce a span-based joint extraction framework with attention-based semantic representations. Specifically, attention is utilized to calculate semantic representations, including span-specific and contextual ones. We further investigate the effects of four attention variants in generating contextual semantic representations. Experiments show that our model outperforms previous systems and achieves state-of-the-art results on ACE2005, CoNLL2004 and ADE.
false
[]
[]
null
null
null
The work is supported by the National Key Research and Development Program of China (2018YFB1004502) and the National Natural Science Foundation of China (61532001).
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
poelitz-bartz-2014-enhancing
https://aclanthology.org/W14-0606
Enhancing the possibilities of corpus-based investigations: Word sense disambiguation on query results of large text corpora
Common large digital text corpora do not distinguish between different meanings of word forms, so intense manual effort is required for disambiguation tasks when querying for homonyms or polysemes. To improve this situation, we ran experiments with automatic word sense disambiguation methods operating directly on the output of the corpus query. In this paper, we present experiments with topic models to cluster search result snippets in order to separate occurrences of homonymous or polysemous queried words by their meanings.
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
rondeau-hazen-2018-systematic
https://aclanthology.org/W18-2602
Systematic Error Analysis of the Stanford Question Answering Dataset
We analyzed the outputs of multiple question answering (QA) models applied to the Stanford Question Answering Dataset (SQuAD) to identify the core challenges for QA systems on this data set. Through an iterative process, challenging aspects were hypothesized through qualitative analysis of the common error cases. A classifier was then constructed to predict whether SQuAD test examples were likely to be difficult for systems to answer based on features associated with the hypothesized aspects. The classifier's performance was used to accept or reject each aspect as an indicator of difficulty. With this approach, we ensured that our hypotheses were systematically tested and not simply accepted based on our pre-existing biases. Our explanations were thus not accepted based solely on human evaluation of individual examples. This process also enabled us to identify the primary QA strategy learned by the models, i.e., systems determined the acceptable answer type for a question and then selected the acceptable answer span of that type containing the highest density of words present in the question within its local vicinity in the passage.
false
[]
[]
null
null
null
We would like to thank Eric Lin, Peter Potash, Yadollah Yaghoobzadeh, and Kaheer Suleman for their feedback and helpful comments. We also thanks the anonymous reviewers for their comments.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wang-etal-2021-fine-grained
https://aclanthology.org/2021.findings-emnlp.9
Fine-grained Semantic Alignment Network for Weakly Supervised Temporal Language Grounding
Temporal language grounding (TLG) aims to localize a video segment in an untrimmed video based on a natural language description. To alleviate the expensive cost of manual annotations for temporal boundary labels, we are dedicated to the weakly supervised setting, where only video-level descriptions are provided for training. Most of the existing weakly supervised methods generate a candidate segment set and learn cross-modal alignment through a MIL-based framework. However, the temporal structure of the video as well as the complicated semantics in the sentence are lost during the learning. In this work, we propose a novel candidate-free framework: the Fine-grained Semantic Alignment Network (FSAN), for weakly supervised TLG. Instead of viewing the sentence and candidate moments as a whole, FSAN learns token-by-clip cross-modal semantic alignment with an iterative cross-modal interaction module, generates a fine-grained cross-modal semantic alignment map, and performs grounding directly on top of the map. Extensive experiments are conducted on two widely used benchmarks: ActivityNet-Captions and DiDeMo, where our FSAN achieves state-of-the-art performance.
false
[]
[]
null
null
null
This work was supported by the National Natural Science Foundation of China under Contract 61632019.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zeng-etal-2021-gene
https://aclanthology.org/2021.textgraphs-1.5
GENE: Global Event Network Embedding
Current methods for event representation ignore related events in a corpus-level global context. For a deep and comprehensive understanding of complex events, we introduce a new task, Event Network Embedding, which aims to represent events by capturing the connections among events. We propose a novel framework, Global Event Network Embedding (GENE), that encodes the event network with a multi-view graph encoder while preserving the graph topology and node semantics. The graph encoder is trained by minimizing both structural and semantic losses. We develop a new series of structured probing tasks, and show that our approach effectively outperforms baseline models on node typing, argument role classification, and event coreference resolution.
false
[]
[]
null
null
null
This research is based upon work supported in part by U.S. DARPA KAIROS Program No. FA8750-19-2-1004, U.S. DARPA AIDA Program No. FA8750-18-2-0014, Air Force No. FA8650-17-C-7715. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bagherbeygi-shamsfard-2012-corpus
http://www.lrec-conf.org/proceedings/lrec2012/pdf/1013_Paper.pdf
Corpus based Semi-Automatic Extraction of Persian Compound Verbs and their Relations
Nowadays, WordNet is used in natural language processing as one of the major linguistic resources. Having such a resource for the Persian language helps researchers in computational linguistics and natural language processing to develop more accurate systems with higher performance. In this research, we propose a model for the semi-automatic construction of a Persian wordnet of verbs. Compound verbs are a very productive structure in Persian, and the number of compound verbs is much greater than that of simple verbs in this language. This research is aimed at finding the structure of Persian compound verbs and the relations between verb components. The main idea behind developing this system is using the wordnets of other POS categories (here, noun and adjective) to extract Persian compound verbs, their synsets and their relations. This paper focuses on three main tasks: (1) extracting compound verbs, (2) extracting verbal synsets, and (3) extracting the relations among verbal synsets, such as hypernymy, antonymy and cause.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
geertzen-etal-2007-multidimensional
https://aclanthology.org/2007.sigdial-1.26
A Multidimensional Approach to Utterance Segmentation and Dialogue Act Classification
In this paper we present a multidimensional approach to utterance segmentation and automatic dialogue act classification. We show that the use of multiple dimensions in distinguishing and annotating units not only supports a more accurate analysis of human communication, but can also help to solve some notorious problems concerning the segmentation of dialogue into functional units. We introduce the use of per-dimension segmentation for dialogue act taxonomies that feature multi-functionality and show that better classification results are obtained when using a separate segmentation for each dimension than when using one segmentation that fits all dimensions. Three machine learning techniques are applied and compared on the task of automatic classification of multiple communicative functions of utterances. The results are encouraging and indicate that communicative functions in important dimensions are easily machine-learnable.
false
[]
[]
null
null
null
null
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zanzotto-etal-2006-discovering
https://aclanthology.org/P06-1107
Discovering Asymmetric Entailment Relations between Verbs Using Selectional Preferences
In this paper we investigate a novel method to detect asymmetric entailment relations between verbs. Our starting point is the idea that some point-wise verb selectional preferences carry relevant semantic information. Experiments using WordNet as a gold standard show promising results. Where applicable, our method, used in combination with other approaches, significantly increases the performance of entailment detection. A combined approach including our model improves the AROC by 5 absolute percentage points with respect to standard models.
false
[]
[]
null
null
null
null
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
guthrie-etal-2008-unsupervised
http://www.lrec-conf.org/proceedings/lrec2008/pdf/866_paper.pdf
An Unsupervised Probabilistic Approach for the Detection of Outliers in Corpora
Many applications of computational linguistics are greatly influenced by the quality of corpora available and as automatically generated corpora continue to play an increasingly common role, it is essential that we not overlook the importance of well-constructed and homogeneous corpora. This paper describes an automatic approach to improving the homogeneity of corpora using an unsupervised method of statistical outlier detection to find documents and segments that do not belong in a corpus. We consider collections of corpora that are homogeneous with respect to topic (i.e. about the same subject), or genre (written for the same audience or from the same source) and use a combination of stylistic and lexical features of the texts to automatically identify pieces of text in these collections that break the homogeneity. These pieces of text that are significantly different from the rest of the corpus are likely to be errors that are out of place and should be removed from the corpus before it is used for other tasks. We evaluate our techniques by running extensive experiments over large artificially constructed corpora that each contain single pieces of text from a different topic, author, or genre than the rest of the collection and measure the accuracy of identifying these pieces of text without the use of training data. We show that when these pieces of text are reasonably large (1,000 words) we can reliably identify them in a corpus.
false
[]
[]
null
null
null
null
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
al-natsheh-etal-2017-udl
https://aclanthology.org/S17-2013
UdL at SemEval-2017 Task 1: Semantic Textual Similarity Estimation of English Sentence Pairs Using Regression Model over Pairwise Features
This paper describes the model UdL we proposed to solve the semantic textual similarity task of the SemEval 2017 workshop. The track we participated in was estimating the semantic relatedness of a given set of sentence pairs in English. The best run out of three submitted runs of our model achieved a Pearson correlation score of 0.8004 compared to a hidden human annotation of 250 pairs. We used random forest ensemble learning to map an expandable set of extracted pairwise features into a semantic similarity estimated value bounded between 0 and 5. Most of these features were calculated using word embedding vector similarity to align Part of Speech (PoS) and Named Entity (NE) tagged tokens of each sentence pair. Among other pairwise features, we experimented with a classical tf-idf weighted Bag of Words (BoW) vector model, but with a character-based range of n-grams instead of words. This sentence vector BoW-based feature gave a relatively high importance value percentage in the feature importance analysis of the ensemble learning.
false
[]
[]
null
null
null
We would like to thank ARC6 Auvergne-Rhône-Alpes that funds the current PhD studies of the first author and the program "Investissements d'Avenir" ISTEX for funding the post-doctoral position of the second author.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
braschler-etal-2000-evaluation
http://www.lrec-conf.org/proceedings/lrec2000/pdf/70.pdf
The Evaluation of Systems for Cross-language Information Retrieval
We describe the creation of an infrastructure for the testing of cross-language text retrieval systems within the context of the Text REtrieval Conferences (TREC) organised by the US National Institute of Standards and Technology (NIST). The approach adopted and the issues that had to be taken into consideration when building a multilingual test suite and developing appropriate evaluation procedures to test cross-language systems are described. From 2000 on, a cross-language evaluation activity for European languages known as CLEF (Cross-Language Evaluation Forum) will be coordinated in Europe, while TREC will focus on Asian languages. The implications of the move to Europe and the intentions for the future are discussed.
false
[]
[]
null
null
null
We gratefully acknowledge the support of all the data providers and copyright holders, and in particular: Newswires: Associated Press, USA; SDA - Schweizerische Depeschenagentur, Switzerland.
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
oneill-mctear-1999-object
https://aclanthology.org/E99-1004
An Object-Oriented Approach to the Design of Dialogue Management Functionality
Dialogues may be seen as comprising commonplace routines on the one hand and specialized, task-specific interactions on the other. Object-orientation is an established means of separating the generic from the specialized. The system under discussion combines this objectoriented approach with a self-organizing, mixed-initiative dialogue strategy, raising the possibility of dialogue systems that can be assembled from ready-made components and tailored, specialized components.
false
[]
[]
null
null
null
null
1999
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
roemmele-etal-2021-answerquest
https://aclanthology.org/2021.eacl-demos.6
AnswerQuest: A System for Generating Question-Answer Items from Multi-Paragraph Documents
One strategy for facilitating reading comprehension is to present information in a question-and-answer format. We demo a system that integrates the tasks of question answering (QA) and question generation (QG) in order to produce Q&A items that convey the content of multi-paragraph documents. We report some experiments for QA and QG that yield improvements on both tasks, and assess how they interact to produce a list of Q&A items for a text. The demo is accessible at qna.sdl.com.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ballard-tinkham-1984-phrase
https://aclanthology.org/J84-2001
A Phrase-Structured Grammatical Framework for Transportable Natural Language Processing
We present methods of dealing with the syntactic problems that arise in the construction of natural language processors that seek to allow users, as opposed to computational linguists, to customize an interface to operate with a new domain of data. In particular, we describe a grammatical formalism, based on augmented phrase-structure rules, which allows a parser to perform many important domain-specific disambiguations by reference to a pre-defined grammar and a collection of auxiliary files produced during an initial knowledge acquisition session with the user. We illustrate the workings of this formalism with examples from the grammar developed for our Layered Domain Class (LDC) system, though similarly motivated systems ought also to benefit from our formalisms. In addition to showing the theoretical advantage of providing many of the fine-tuning capabilities of so-called semantic grammars within the context of a domain-independent grammar, we demonstrate several practical benefits to our approach. The results of three experiments with our grammar and parser are also given.
false
[]
[]
null
null
null
null
1984
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
martschat-etal-2015-analyzing
https://aclanthology.org/N15-3002
Analyzing and Visualizing Coreference Resolution Errors
We present a toolkit for coreference resolution error analysis. It implements a recently proposed analysis framework and contains rich components for analyzing and visualizing recall and precision errors.
false
[]
[]
null
null
null
This work has been funded by the Klaus Tschira Foundation, Heidelberg, Germany. The first author has been supported by a HITS PhD scholarship.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hoffman-etal-1963-application
https://aclanthology.org/1963.earlymt-1.16
Application of decision tables to syntactic analysis
null
false
[]
[]
null
null
null
null
1963
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
beltagy-etal-2019-scibert
https://aclanthology.org/D19-1371
SciBERT: A Pretrained Language Model for Scientific Text
Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive. We release SCIBERT, a pretrained language model based on BERT (Devlin et al., 2019), to address the lack of high-quality, large-scale labeled scientific data. SCIBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks. We evaluate on a suite of tasks including sequence tagging, sentence classification and dependency parsing, with datasets from a variety of scientific domains. We demonstrate statistically significant improvements over BERT and achieve new state-of-the-art results on several of these tasks. The code and pretrained models are available at https://github.com/allenai/scibert/.
true
[]
[]
Industry, Innovation and Infrastructure
null
null
We thank the anonymous reviewers for their comments and suggestions. We also thank Waleed Ammar, Noah Smith, Yoav Goldberg, Daniel King, Doug Downey, and Dan Weld for their helpful discussions and feedback. All experiments were performed on beaker.org and supported in part by credits from Google Cloud.
2019
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
fillwock-traum-2018-identification
https://aclanthology.org/L18-1629
Identification of Personal Information Shared in Chat-Oriented Dialogue
We present an analysis of how personal information is shared in chat-oriented dialogue. We develop an annotation scheme, including entity-types, attributes, and values, that can be used to annotate the presence and type of personal information in these dialogues. A collection of attribute types is identified from the annotation of three chat-oriented dialogue corpora and a taxonomy of personal information pertinent to chat-oriented dialogue is presented. We examine similarities and differences in the frequency of specific attributes in the three corpora and observe that there is much overlap between the attribute types which are shared between dialogue participants in these different settings. The work presented here suggests that there is a common set of attribute types that frequently occur within chat-oriented dialogue in general. This resource can be used in the development of chat-oriented dialogue systems by providing common topics that a dialogue system should be able to talk about.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
keesing-etal-2020-convolutional
https://aclanthology.org/2020.alta-1.13
Convolutional and Recurrent Neural Networks for Spoken Emotion Recognition
We test four models proposed in the speech emotion recognition (SER) literature on 15 public and academic licensed datasets in speaker-independent cross-validation. Results indicate differences in the performance of the models which is partly dependent on the dataset and features used. We also show that a standard utterance-level feature set still performs competitively with neural models on some datasets. This work serves as a starting point for future model comparisons, in addition to open-sourcing the testing code.
false
[]
[]
null
null
null
The authors would like to thank the University of Auckland for funding this research through a PhD scholarship. We would like to thank in particular the School of Computer Science for providing the computer hardware to train and test these models. We would also like to thank the anonymous reviewers who submitted helpful feedback on this paper.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kunilovskaya-etal-2021-fiction
https://aclanthology.org/2021.ranlp-1.84
Fiction in Russian Translation: A Translationese Study
This paper presents a translationese study based on the parallel data from the Russian National Corpus (RNC). We explored differences between literary texts originally authored in Russian and fiction translated into Russian from 11 languages. The texts are represented with frequency-based features that capture structural and lexical properties of language. Binary classification results indicate that literary translations can be distinguished from non-translations with an accuracy ranging from 82 to 92% depending on the source language and feature set. Multiclass classification confirms that translations from distant languages are more distinct from non-translations than translations from languages that are typologically close to Russian. It also demonstrates that translations from same-family source languages share translationese properties. Structural features return more consistent results than features relying on external resources and capturing lexical properties of texts in both translationese detection and source language identification tasks.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kim-etal-2021-mostly
https://aclanthology.org/2021.naloma-1.9
A (Mostly) Symbolic System for Monotonic Inference with Unscoped Episodic Logical Forms
We implement the formalization of natural logic-like monotonic inference using Unscoped Episodic Logical Forms (ULFs) by Kim et al. (2020). We demonstrate this system's capacity to handle a variety of challenging semantic phenomena using the FraCaS dataset (Cooper et al., 1996). These results give empirical evidence for prior claims that ULF is an appropriate representation to mediate natural logic-like inferences.
false
[]
[]
null
null
null
This work was supported by NSF EAGER grant NSF IIS-1908595, DARPA CwC subcontract W911NF-15-1-0542, and a Sproull Graduate Fellowship from the University of Rochester. We are grateful to the anonymous reviewers for their helpful feedback.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
litman-etal-2006-characterizing
https://aclanthology.org/J06-3004
Characterizing and Predicting Corrections in Spoken Dialogue Systems
This article focuses on the analysis and prediction of corrections, defined as turns where a user tries to correct a prior error made by a spoken dialogue system. We describe our labeling procedure for various correction types and statistical analyses of their features in a corpus collected from a train information spoken dialogue system. We then present results of machine-learning experiments designed to identify user corrections of speech recognition errors. We investigate the predictive power of features automatically computable from the prosody of the turn, the speech recognition process, experimental conditions, and the dialogue history. Our best-performing features reduce classification error from baselines of 25.70-28.99% to 15.72%.
false
[]
[]
null
null
null
Marc Swerts is also affiliated with the University of Antwerp. His research is sponsored by the Netherlands Organisation for Scientific Research (NWO). This work was performed when the authors were at AT&T Labs-Research.
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
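The correction-detection experiments summarized in the litman-etal-2006-characterizing abstract above lend themselves to a small worked example. The sketch below is hypothetical: the feature names (f0_max, asr_confidence, prev_turn_rejected) and the decision-tree learner are illustrative stand-ins for the paper's prosodic, ASR, and dialogue-history features and its actual classifier.

```python
# Hedged sketch: classifying user turns as corrections from
# turn-level features (feature values are invented).
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

turns = [
    {"f0_max": 310.0, "asr_confidence": -3.2, "prev_turn_rejected": 1},
    {"f0_max": 180.0, "asr_confidence": -1.1, "prev_turn_rejected": 0},
    {"f0_max": 295.0, "asr_confidence": -2.9, "prev_turn_rejected": 1},
    {"f0_max": 175.0, "asr_confidence": -0.9, "prev_turn_rejected": 0},
]
is_correction = [1, 0, 1, 0]

vec = DictVectorizer(sparse=False)
X = vec.fit_transform(turns)
clf = DecisionTreeClassifier(random_state=0).fit(X, is_correction)

new_turn = {"f0_max": 300.0, "asr_confidence": -3.0, "prev_turn_rejected": 1}
print(clf.predict(vec.transform([new_turn])))  # -> [1], likely a correction
```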
belz-2005-statistical
https://aclanthology.org/W05-1601
Statistical Generation: Three Methods Compared and Evaluated
Statistical NLG has largely meant n-gram modelling which has the considerable advantages of lending robustness to NLG systems, and of making automatic adaptation to new domains from raw corpora possible. On the downside, n-gram models are expensive to use as selection mechanisms and have a built-in bias towards shorter realisations. This paper looks at treebank-training of generators, an alternative method for building statistical models for NLG from raw corpora, and two different ways of using treebank-trained models during generation. Results show that the treebank-trained generators achieve improvements similar to a 2-gram generator over a baseline of random selection. However, the treebank-trained generators achieve this at a much lower cost than the 2-gram generator, and without its strong preference for shorter realisations.
false
[]
[]
null
null
null
The research reported in this paper is part of the CoGenT project, an ongoing research project supported under UK EP-SRC Grant GR/S24480/01. Many thanks to John Carroll, Roger Evans and Richard Power, as well as to the anonymous reviewers, for very helpful comments.
2005
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bergmair-2009-proposal
https://aclanthology.org/W09-2502
A Proposal on Evaluation Measures for RTE
We outline problems with the interpretation of accuracy in the presence of bias, arguing that the issue is a particularly pressing concern for RTE evaluation. Furthermore, we argue that average precision scores are unsuitable for RTE, and should not be reported. We advocate mutual information as a new evaluation measure that should be reported in addition to accuracy and confidence-weighted score.
false
[]
[]
null
null
null
I would like to thank the anonymous reviewers and my colleague Ekaterina Shutova for providing many helpful comments and my supervisor Ann Copestake for reading multiple drafts of this paper and providing a great number of suggestions within a very short timeframe. All errors and omissions are, of course, entirely my own. I gratefully acknowledge financial support by the Austrian Academy of Sciences.
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
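The evaluation measure advocated in the bergmair-2009-proposal abstract above, mutual information between gold labels and system output, is straightforward to compute from a contingency table. A minimal sketch with made-up labels:

```python
# Mutual information I(gold; pred) in bits, from empirical
# joint and marginal distributions over a 2x2 contingency table.
import math
from collections import Counter

gold = [1, 1, 0, 0, 1, 0, 1, 0]
pred = [1, 0, 0, 0, 1, 1, 1, 0]

n = len(gold)
joint = Counter(zip(gold, pred))
pg = Counter(gold)
pp = Counter(pred)

mi = 0.0
for (g, p), c in joint.items():
    p_gp = c / n
    mi += p_gp * math.log2(p_gp / ((pg[g] / n) * (pp[p] / n)))
print(f"I(gold; pred) = {mi:.3f} bits")
```

Unlike accuracy, this quantity is zero for any constant-output system, regardless of class bias, which is exactly the property the paper exploits.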
hellwig-etal-2018-multi
https://aclanthology.org/L18-1011
Multi-layer Annotation of the Rigveda
The paper introduces a multi-level annotation of the Rigveda, a fundamental Sanskrit text composed in the 2nd millennium BCE that is important for South-Asian and Indo-European linguistics, as well as Cultural Studies. We describe the individual annotation levels, including phonetics, morphology, lexicon, and syntax, and show how these different levels of annotation are merged to create a novel annotated corpus of Vedic Sanskrit. Vedic Sanskrit is a complex but computationally under-resourced language. Therefore, creating this resource required considerable domain adaptation of existing computational tools, which is discussed in this paper. Because parts of the annotations are selective, we propose a bi-directional LSTM based sequential model to supplement missing verb-argument links.
false
[]
[]
null
null
null
Research for this project was partially funded by the Cluster of Excellence "Multimodal Computing and Interaction" of German Science Foundation (DFG). We thank the Akademie der Wissenschaften und der Literatur Mainz for hosting the annotated corpus.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ciobanu-etal-2015-readability
https://aclanthology.org/R15-1014
Readability Assessment of Translated Texts
In this paper we investigate how readability varies between texts originally written in English and texts translated into English. For quantification, we analyze several factors that are relevant in assessing readability (shallow, lexical and morpho-syntactic features), and we employ the widely used Flesch-Kincaid formula to measure the variation of the readability level between original English texts and texts translated into English. Finally, we analyze whether the readability features have enough discriminative power to distinguish between originals and translations.
false
[]
[]
null
null
null
We thank the anonymous reviewers for their helpful and constructive comments. The contribution of the authors to this paper is equal. Liviu P. Dinu was supported by UEFISCDI, PNII-ID-PCE-2011-3-0959.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
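The Flesch-Kincaid formula used in ciobanu-etal-2015-readability above is a fixed linear combination of average sentence length and average syllables per word. A rough sketch follows; the vowel-group syllable counter is a crude approximation, not the paper's implementation:

```python
# Flesch-Kincaid grade level with an approximate syllable counter.
import re

def count_syllables(word: str) -> int:
    # Approximation: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

print(flesch_kincaid_grade("The cat sat on the mat. It was happy."))
```

Very simple texts can legitimately score below zero on this scale, so comparisons between originals and translations are relative, not absolute.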
miller-etal-2014-employing
https://aclanthology.org/W14-5308
Employing Phonetic Speech Recognition for Language and Dialect Specific Search
We discuss the notion of language and dialect-specific search in the context of audio indexing. A system is described where users can find dialect or language-specific pronunciations of Afghan placenames in Dari and Pashto. We explore the efficacy of a phonetic speech recognition system employed in this task.
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
pecar-2018-towards
https://aclanthology.org/P18-3001
Towards Opinion Summarization of Customer Reviews
In recent years, the number of texts has grown rapidly. For example, most review-based portals, like Yelp or Amazon, contain thousands of user-generated reviews. It is impossible for any human reader to process even the most relevant of these documents. The most promising tool for this task is text summarization. Most existing approaches, however, work on small, homogeneous, English datasets, and do not account for multi-linguality, opinion shift, and domain effects. In this paper, we introduce our research plan to use neural networks on user-generated travel reviews to generate summaries that take into account shifting opinions over time. We outline future directions in summarization to address all of these issues. By resolving the existing problems, we will make it easier for users of review sites to make more informed decisions.
false
[]
[]
null
null
null
I would like to thank my supervisors Marian Simko and Maria Bielikova. This work has been partially supported by the STU Grant scheme for Support of Young Researchers and grants No. VG 1/0646/15 and No. KEGA 028STU-4/2017.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
montgomery-1997-fulcrum
https://aclanthology.org/1997.mtsummit-plenaries.5
The Fulcrum Approach to Machine Translation
In a paper from a distinguished collection of papers prepared for a 1959 course entitled "Computer Programming and Artificial Intelligence," Paul Garvin described two types of machine translation problems "in terms of the two components of the term: machine problems, and translation problems." While the machine problems made us crazy, the translation problems made us think differently about language than we might otherwise have done, which has had some advantages and some disadvantages in the long run. I will save anecdotes about the former and comments about the latter for the discussion. In this paper I will focus on the translation problems and, in particular, the translation approach that was developed by Paul Garvin, with whom I was associated, initially at Georgetown University, and later in the Synthetic Intelligence Department of the Ramo-Wooldridge Corporation and successor corporations: Thompson Ramo Wooldridge and Bunker-Ramo.
false
[]
[]
null
null
null
null
1997
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
corpas-pastor-etal-2008-translation
https://aclanthology.org/2008.amta-papers.5
Translation universals: do they exist? A corpus-based NLP study of convergence and simplification
Convergence and simplification are two of the so-called universals in translation studies. The first one postulates that translated texts tend to be more similar than non-translated texts. The second one postulates that translated texts are simpler and easier to understand than non-translated ones. This paper discusses the results of a project which applies NLP techniques over comparable corpora of translated and non-translated texts in Spanish, seeking to establish whether these two universals hold (Corpas Pastor, 2008).
false
[]
[]
null
null
null
null
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
brun-2012-learning
https://aclanthology.org/C12-2017
Learning Opinionated Patterns for Contextual Opinion Detection
This paper tackles the problem of polar vocabulary ambiguity. While some opinionated words keep their polarity in any context and/or across any domain (except for the ironic style, which goes beyond the present article), some others have an ambiguous polarity that is highly dependent on the context or the domain: in this case, the opinion is generally carried by complex expressions ("patterns") rather than single words. In this paper, we propose and evaluate an original hybrid method, based on syntactic information extraction and clustering techniques, to learn such patterns automatically and integrate them into an opinion detection system.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
turton-etal-2021-deriving
https://aclanthology.org/2021.repl4nlp-1.26
Deriving Contextualised Semantic Features from BERT (and Other Transformer Model) Embeddings
Models based on the transformer architecture, such as BERT, have marked a crucial step forward in the field of Natural Language Processing. Importantly, they allow the creation of word embeddings that capture important semantic information about words in context. However, as single entities, these embeddings are difficult to interpret and the models used to create them have been described as opaque. Binder and colleagues proposed an intuitive embedding space where each dimension is based on one of 65 core semantic features. Unfortunately, the space only exists for a small data-set of 535 words, limiting its uses. Previous work (Utsumi, 2018, 2020; Turton et al., 2020) has shown that Binder features can be derived from static embeddings and successfully extrapolated to a large new vocabulary. Taking the next step, this paper demonstrates that Binder features can be derived from the BERT embedding space. This provides two things; (1) semantic feature values derived from contextualised word embeddings and (2) insights into how semantic features are represented across the different layers of the BERT model.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
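The mapping described in turton-etal-2021-deriving above, from contextualised embeddings to 65 Binder feature values, can be approximated with a multi-output linear regressor. The sketch below uses random arrays in place of real BERT embeddings and Binder norms, so the printed score is meaningless; it only shows the shape of the computation.

```python
# Sketch: learning a linear map from embeddings (768-d) to 65
# Binder-style semantic feature ratings for 535 words.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(535, 768))        # stand-in BERT embeddings
Y = rng.uniform(0, 6, size=(535, 65))  # stand-in Binder feature ratings

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, Y_tr)  # one regressor per feature
print("held-out R^2:", model.score(X_te, Y_te))
```

With real embeddings and norms, the same fit extrapolates feature values to any word in context, which is the paper's central use case.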
gemes-recski-2021-tuw
https://aclanthology.org/2021.germeval-1.10
TUW-Inf at GermEval2021: Rule-based and Hybrid Methods for Detecting Toxic, Engaging, and Fact-Claiming Comments
This paper describes our methods submitted for the GermEval 2021 shared task on identifying toxic, engaging and fact-claiming comments in social media texts (Risch et al., 2021). We explore simple strategies for semi-automatic generation of rule-based systems with high precision and low recall, and use them to achieve slight overall improvements over a standard BERT-based classifier.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
Research conducted in collaboration with Botium GmbH.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
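The hybrid strategy in gemes-recski-2021-tuw above, high-precision rules backed by a learned classifier, reduces to a simple control flow. In this sketch both the rule patterns and the fallback classifier are placeholders, not the submitted system:

```python
# Hybrid toxicity prediction: high-precision rules fire first,
# otherwise defer to a learned classifier (placeholder here).
import re

TOXIC_PATTERNS = [re.compile(p, re.I)
                  for p in [r"\bidiot\w*\b", r"\bhalt die klappe\b"]]

def classifier(comment: str) -> bool:
    # Stand-in for a fine-tuned BERT classifier.
    return len(comment) > 280

def predict_toxic(comment: str) -> bool:
    if any(p.search(comment) for p in TOXIC_PATTERNS):
        return True  # rule fires: high precision, low recall
    return classifier(comment)

print(predict_toxic("Du bist ein Idiot!"))  # -> True via the rule path
```

Because the rules only ever add positives they are confident about, they can raise precision without disturbing the classifier's recall elsewhere.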
temnikova-cohen-2013-recognizing
https://aclanthology.org/W13-1909
Recognizing Sublanguages in Scientific Journal Articles through Closure Properties
It has long been realized that sublanguages are relevant to natural language processing and text mining. However, practical methods for recognizing or characterizing them have been lacking. This paper describes a publicly available set of tools for sublanguage recognition. Closure properties are used to assess the goodness of fit of two biomedical corpora to the sublanguage model. Scientific journal articles are compared to general English text, and it is shown that the journal articles fit the sublanguage model, while the general English text does not. A number of examples of implications of the sublanguage characteristics for natural language processing are pointed out. The software is made publicly available at [edited for anonymization].
true
[]
[]
Industry, Innovation and Infrastructure
null
null
Irina Temnikova's work on the research reported in this paper was supported by the project AComIn "Advanced Computing for Innovation", grant 316087, funded by the FP7 Capacity Programme (Research Potential of Convergence Regions). Kevin Bretonnel Cohen's work was supported by grants NIH 5R01 LM009254-07 and NIH 5R01 LM008111-08 to Lawrence E. Hunter, NIH 1R01MH096906-01A1 to Tal Yarkoni, NIH R01 LM011124 to John Pestian, and NSF IIS-1207592 to Lawrence E. Hunter and Barbara Grimpe. The authors thank Tony McEnery and Andrew Wilson for advice on dealing with the tag sets.
2013
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
krishnamurthy-mitchell-2012-weakly
https://aclanthology.org/D12-1069
Weakly Supervised Training of Semantic Parsers
We present a method for training a semantic parser using only a knowledge base and an unlabeled text corpus, without any individually annotated sentences. Our key observation is that multiple forms of weak supervision can be combined to train an accurate semantic parser: semantic supervision from a knowledge base, and syntactic supervision from dependency-parsed sentences. We apply our approach to train a semantic parser that uses 77 relations from Freebase in its knowledge representation. This semantic parser extracts instances of binary relations with state-of-the-art accuracy, while simultaneously recovering much richer semantic structures, such as conjunctions of multiple relations with partially shared arguments. We demonstrate recovery of this richer structure by extracting logical forms from natural language queries against Freebase. On this task, the trained semantic parser achieves 80% precision and 56% recall, despite never having seen an annotated logical form.
false
[]
[]
null
null
null
This research has been supported in part by DARPA under contract number FA8750-09-C-0179, and by a grant from Google. Additionally, we thank Yahoo! for use of their M45 cluster. We also gratefully acknowledge the contributions of our colleagues on the NELL project, Justin Betteridge for collecting the Freebase relations, Jamie Callan and colleagues for the web crawl, and Thomas Kollar and Matt Gardner for helpful comments on earlier drafts of this paper.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
benotti-blackburn-2021-recipe
https://aclanthology.org/2021.naacl-main.320
A recipe for annotating grounded clarifications
In order to interpret the communicative intents of an utterance, it needs to be grounded in something that is outside of language; that is, grounded in world modalities. In this paper we argue that dialogue clarification mechanisms make explicit the process of interpreting the communicative intents of the speaker's utterances by grounding them in the various modalities in which the dialogue is situated. This paper frames dialogue clarification mechanisms as an understudied research problem and a key missing piece in the giant jigsaw puzzle of natural language understanding. We discuss both the theoretical background and practical challenges posed by this problem, and propose a recipe for obtaining grounding annotations. We conclude by highlighting ethical issues that need to be addressed in future work. We are suspicious of the common assumption that requests for information regarding references that are grounded in vision (e.g. the red or the blue jacket?) are clarifications, whereas requests for information grounded in other modalities are not (e.g. do I take the stairs up or down?). See also the supplement on ethical considerations.
false
[]
[]
null
null
null
We thank the anonymous reviewers for their detailed reviews and insightful comments.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chen-etal-2022-discrete
https://aclanthology.org/2022.acl-long.145
Discrete Opinion Tree Induction for Aspect-based Sentiment Analysis
Dependency trees have been intensively used with graph neural networks for aspect-based sentiment classification. Though effective, such methods rely on external dependency parsers, which can be unavailable for low-resource languages or perform worse in low-resource domains. In addition, dependency trees are also not optimized for aspect-based sentiment classification. In this paper, we propose an aspect-specific and language-agnostic discrete latent opinion tree model as an alternative structure to explicit dependency trees. To ease the learning of complicated structured latent variables, we build a connection between aspect-to-context attention scores and syntactic distances, inducing trees from the attention scores. Results on six English benchmarks, one Chinese dataset and one Korean dataset show that our model can achieve competitive performance and interpretability.
false
[]
[]
null
null
null
Zhiyang Teng and Yue Zhang are the corresponding authors. Our thanks to anonymous reviewers for their insightful comments and suggestions. We appreciate Prof. Pengyuan Liu sharing the Chinese Hotel dataset, and Prof. Jingjing Wang sharing the reinforcement learning code of Wang et al. (2019) and Wu et al. (2020) upon our request. We thank Dr. Xuebin Wang for providing us with 2 V100 GPU cards for use. This publication is conducted with the financial support of "Pioneer" and "Leading Goose" R&D Program of Zhejiang under Grant Number 2022SDXHDX0003.
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
espla-gomis-etal-2016-ualacant
https://aclanthology.org/W16-2383
UAlacant word-level and phrase-level machine translation quality estimation systems at WMT 2016
This paper describes the Universitat d'Alacant submissions (labeled as UAlacant) to the machine translation quality estimation (MTQE) shared task at WMT 2016, where we have participated in the word-level and phrase-level MTQE subtasks. Our systems use external sources of bilingual information as a black box to spot sub-segment correspondences between the source segment and the translation hypothesis. For our submissions, two sources of bilingual information have been used: machine translation (Lucy LT KWIK Translator and Google Translate) and the bilingual concordancer Reverso Context. Building upon the word-level approach implemented for WMT 2015, a method for phrase-based MTQE is proposed which builds on the probabilities obtained for word-level MTQE. For each sub-task we have submitted two systems: one using the features produced exclusively based on online sources of bilingual information, and one combining them with the baseline features provided by the organisers of the task.
false
[]
[]
null
null
null
Work partially funded by the European Commission through project PIAP-GA-2012-324414 (Abu-MaTran) and by the Spanish government through project TIN2015-69632-R (Effortune). We specially thank Reverso-Softissimo and Prompsit Language Engineering for providing the access to the Reverso Context concordancer, the University Research Program for Google Translate that granted us access to the Google Translate service, and Anna Civil from Lucy Software for providing access to the Lucy LT machine translation system.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ceska-fox-2009-influence
https://aclanthology.org/R09-1011
The Influence of Text Pre-processing on Plagiarism Detection
This paper explores the influence of text preprocessing techniques on plagiarism detection. We examine stop-word removal, lemmatization, number replacement, synonymy recognition, and word generalization. We also look into the influence of punctuation and word-order within N-grams. All these techniques are evaluated according to their impact on F1-measure and speed of execution. Our experiments were performed on a Czech corpus of plagiarized documents about politics. At the end of this paper, we propose what we consider to be the best combination of text pre-processing techniques.
true
[]
[]
Industry, Innovation and Infrastructure
Peace, Justice and Strong Institutions
null
This research was supported in part by National Research Programme II, project 2C06009 (COT-SEWing). Special thanks go to Michal Toman who helped us to employ the disambiguation process.
2009
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
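The effect studied in ceska-fox-2009-influence above can be demonstrated with n-gram overlap, a common core of plagiarism detectors. A toy sketch follows; the stop-word list and sentences are invented, and the paper's corpus is Czech rather than English:

```python
# How stop-word removal changes trigram overlap between two texts.
STOPWORDS = {"the", "a", "of", "and", "is"}

def ngrams(text, n=3, remove_stopwords=False):
    tokens = [t.lower() for t in text.split()]
    if remove_stopwords:
        tokens = [t for t in tokens if t not in STOPWORDS]
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

a = "The theft of the election is a crime"
b = "Theft of the election is the crime"
for flag in (False, True):
    na, nb = ngrams(a, remove_stopwords=flag), ngrams(b, remove_stopwords=flag)
    jaccard = len(na & nb) / len(na | nb) if na | nb else 0.0
    print(f"stop-word removal={flag}: Jaccard={jaccard:.2f}")
```

Here stop-word removal lifts the trigram Jaccard overlap from 0.38 to 1.00, illustrating why pre-processing choices move detection scores as much as the paper reports.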
lefrancois-gandon-2013-reasoning
https://aclanthology.org/W13-3719
Reasoning with Dependency Structures and Lexicographic Definitions Using Unit Graphs
We are interested in a graph-based Knowledge Representation (KR) formalism that would allow for the representation, manipulation, query, and reasoning over dependency structures, and linguistic knowledge of the lexicon in the Meaning-Text Theory framework. Neither the semantic web formalisms nor the conceptual graphs appear to be suitable for this task, and this led to the introduction of the new Unit Graphs (UG) framework. In this paper we will overview the foundational concepts of this framework: the UGs are defined over a UG-support that contains: i) a hierarchy of unit types which is strongly driven by the actantial structure of unit types, ii) a hierarchy of circumstantial symbols, and iii) a set of unit identifiers. Based on these foundational concepts and on the definition of UGs, this paper justifies the use of a deep semantic representation level to represent meanings of lexical units. Rules over UGs are then introduced, and lexicographic definitions of lexical units are added to the hierarchy of unit types. Finally this paper provides UGs with semantics (in the logical sense), and pose the entailment problem, so as to enable the reasoning in the UGs framework.
false
[]
[]
null
null
null
null
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gali-etal-2008-aggregating
https://aclanthology.org/I08-5005
Aggregating Machine Learning and Rule Based Heuristics for Named Entity Recognition
This paper, submitted as an entry for the NERSSEAL-2008 shared task, describes a system built for Named Entity Recognition for South and South East Asian Languages. Our paper combines machine learning techniques with language-specific heuristics to model the problem of NER for Indian languages. The system has been tested on five languages: Telugu, Hindi, Bengali, Urdu and Oriya. It uses CRF (Conditional Random Fields) based machine learning, followed by post-processing which involves using some heuristics or rules. The system is specifically tuned for Hindi and Telugu; we also report the results for the other four languages.
false
[]
[]
null
null
null
We would like to thank the organizer Mr. Anil Kumar Singh deeply for his continuous support during the shared task.
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
osborne-2013-distribution
https://aclanthology.org/W13-3730
The Distribution of Floating Quantifiers: A Dependency Grammar Analysis
This contribution provides a dependency grammar analysis of the distribution of floating quantifiers in English and German. Floating quantifiers are deemed to be "base generated", meaning that they are not moved into their surface position by a transformation. Their distribution is similar to that of modal adverbs. The nominal (noun or pronoun) over which they quantify is an argument of the predicate to which they attach. Variation in their placement across English and German is due to independent word order principles associated with each language.
false
[]
[]
null
null
null
null
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hossain-etal-2021-nlp-cuet
https://aclanthology.org/2021.ltedi-1.25
NLP-CUET@LT-EDI-EACL2021: Multilingual Code-Mixed Hope Speech Detection using Cross-lingual Representation Learner
In recent years, several systems have been developed to regulate the spread of negativity and eliminate aggressive, offensive or abusive content from online platforms. Nevertheless, only a limited number of studies have been carried out to identify positive, encouraging and supportive content. In this work, our goal is to identify whether a social media post/comment contains hope speech or not. We propose three distinct models to identify hope speech in English, Tamil and Malayalam to serve this purpose. To attain this goal, we employed various machine learning (support vector machine, logistic regression, ensemble), deep learning (convolutional neural network + long short term memory) and transformer (m-BERT, Indic-BERT, XLNet, XLM-Roberta) based methods. Results indicate that XLM-Roberta outdoes all other techniques by gaining a weighted F1-score of 0.93, 0.60 and 0.85 respectively for English, Tamil and Malayalam. Our team achieved 1st, 2nd and 1st rank in these three tasks respectively.
true
[]
[]
Good Health and Well-Being
null
null
null
2021
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
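The best system in hossain-etal-2021-nlp-cuet above is a fine-tuned XLM-RoBERTa classifier. A minimal Hugging Face sketch of the scoring side is below; the checkpoint is the generic xlm-roberta-base with an untrained classification head, so real use would require fine-tuning on the shared-task data first.

```python
# Scoring comments for hope speech with an XLM-RoBERTa classifier.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2)
model.eval()

comments = ["You can do this, we believe in you!",
            "Nothing will ever get better."]
batch = tokenizer(comments, padding=True, truncation=True,
                  return_tensors="pt")
with torch.no_grad():
    probs = model(**batch).logits.softmax(dim=-1)
print(probs[:, 1])  # probability of the "hope speech" class
```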
morales-etal-2007-multivariate
https://aclanthology.org/W07-2421
Multivariate Cepstral Feature Compensation on Band-limited Data for Robust Speech Recognition
This paper describes a new method for compensating bandwidth mismatch for automatic speech recognition using multivariate linear combinations of feature vector components. It is shown that multivariate compensation is superior to methods based on linear compensations of individual features. Performance is evaluated on a real microphone-telephone mismatch condition (this involves noise compensation and bandwidth extension of real data), as well as on several artificial bandwidth limitations. Speech recognition accuracy using this approach is similar to that of acoustic model compensation methods for small to moderate mismatches, and allows keeping active a single acoustic model set for multiple bandwidth limitations.
false
[]
[]
null
null
null
This research is supported in part by an MCyT project (TIC 2006-13141-C03).
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jansche-2003-parametric
https://aclanthology.org/P03-1037
Parametric Models of Linguistic Count Data
It is well known that occurrence counts of words in documents are often modeled poorly by standard distributions like the binomial or Poisson. Observed counts vary more than simple models predict, prompting the use of overdispersed models like Gamma-Poisson or Beta-binomial mixtures as robust alternatives. Another deficiency of standard models is due to the fact that most words never occur in a given document, resulting in large amounts of zero counts. We propose using zero-inflated models for dealing with this, and evaluate competing models on a Naive Bayes text classification task. Simple zero-inflated models can account for practically relevant variation, and can be easier to work with than overdispersed models.
false
[]
[]
null
null
null
Thanks to Chris Brew and three anonymous reviewers for valuable feedback. Cue the usual disclaimers.
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
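The zero-inflated models proposed in jansche-2003-parametric above mix a point mass at zero with a standard count distribution. For a zero-inflated Poisson with mixing weight pi and rate lam, P(X = 0) = pi + (1 - pi) e^{-lam} and P(X = k) = (1 - pi) e^{-lam} lam^k / k! for k > 0. A direct transcription, with illustrative parameter values:

```python
# Zero-inflated Poisson pmf for word occurrence counts.
import math

def zip_pmf(k: int, pi: float, lam: float) -> float:
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi + (1 - pi) * poisson if k == 0 else (1 - pi) * poisson

# Extra mass at zero compared with a plain Poisson of the same rate:
for k in range(4):
    print(k, round(zip_pmf(k, pi=0.6, lam=1.5), 4))
```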
collier-etal-1998-refining
https://aclanthology.org/W98-1109
Refining the Automatic Identification of Conceptual Relations in Large-scale Corpora
In the ACRONYM Project, we have taken the Firthian view (e.g. Firth 1957) that context is part of the meaning of the word, and measured similarity of meaning between words through second-order collocation. Using large-scale, free text corpora of UK journalism, we have generated collocational data for all words except for highfrequency grammatical words, and have found that semantically related word pairings can be identified, whilst syntactic relations are disfavoured. We have then moved on to refine this system, to deal with multi-word terms and identify changing conceptual relationships across time. The system, conceived in the late 80's and developed in 1994-97, differs from others of the 90's in purpose, scope, methodology and results, and comparisons will be drawn in the course of the paper.
false
[]
[]
null
null
null
null
1998
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bjerva-augenstein-2018-phonology
https://aclanthology.org/N18-1083
From Phonology to Syntax: Unsupervised Linguistic Typology at Different Levels with Language Embeddings
A core part of linguistic typology is the classification of languages according to linguistic properties, such as those detailed in the World Atlas of Language Structure (WALS). Doing this manually is prohibitively time-consuming, which is in part evidenced by the fact that only 100 out of over 7,000 languages spoken in the world are fully covered in WALS. We learn distributed language representations, which can be used to predict typological properties on a massively multilingual scale. Additionally, quantitative and qualitative analyses of these language embeddings can tell us how language similarities are encoded in NLP models for tasks at different typological levels. The representations are learned in an unsupervised manner alongside tasks at three typological levels: phonology (grapheme-to-phoneme prediction, and phoneme reconstruction), morphology (morphological inflection), and syntax (part-of-speech tagging). We consider more than 800 languages and find significant differences in the language representations encoded, depending on the target task. For instance, although Norwegian Bokmål and Danish are typologically close to one another, they are phonologically distant, which is reflected in their language embeddings growing relatively distant in a phonological task. We are also able to predict typological features in WALS with high accuracies, even for unseen language families.
false
[]
[]
null
null
null
We would also like to thank Robert Östling for giving us access to the pre-trained language embeddings. Isabelle Augenstein is supported by Eurostars grant Number E10138. We further gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
godard-etal-2018-adaptor
https://aclanthology.org/W18-5804
Adaptor Grammars for the Linguist: Word Segmentation Experiments for Very Low-Resource Languages
Computational Language Documentation attempts to make the most recent research in speech and language technologies available to linguists working on language preservation and documentation. In this paper, we pursue two main goals along these lines. The first is to improve upon a strong baseline for the unsupervised word discovery task on two very low-resource Bantu languages, taking advantage of the expertise of linguists on these particular languages. The second consists in exploring the Adaptor Grammar framework as a decision and prediction tool for linguists studying a new language. We experiment with 162 grammar configurations for each language and show that using Adaptor Grammars for word segmentation enables us to test hypotheses about a language. Specializing a generic grammar with language-specific knowledge leads to great improvements for the word discovery task, ultimately achieving a leap of about 30% token F-score from the results of a strong baseline.
false
[]
[]
null
null
null
We thank the anonymous reviewers for their insightful comments. We also thank Ramy Eskander for his help in the early stages of this research. This work was partly funded by French ANR and German DFG under grant ANR-14-CE35-0002 (BULB project).
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
king-2008-osu
https://aclanthology.org/W08-1137
OSU-GP: Attribute Selection Using Genetic Programming
This system's approach to the attribute selection task was to use a genetic programming algorithm to search for a solution to the task. The evolved programs for the furniture and people domain exhibit quite naive behavior, and the DICE and MASI scores on the training sets reflect the poor humanlikeness of the programs.
false
[]
[]
null
null
null
null
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
herbelot-2020-solve
https://aclanthology.org/2020.conll-1.27
Re-solve it: simulating the acquisition of core semantic competences from small data
Many tasks are considered to be 'solved' in the computational linguistics literature, but the corresponding algorithms operate in ways which are radically different from human cognition. I illustrate this by coming back to the notion of semantic competence, which includes basic linguistic skills encompassing both referential phenomena and generic knowledge, in particular a) the ability to denote, b) the mastery of the lexicon, or c) the ability to model one's language use on others. Even though each of those faculties has been extensively tested individually, there is still no computational model that would account for their joint acquisition under the conditions experienced by a human. In this paper, I focus on one particular aspect of this problem: the amount of linguistic data available to the child or machine. I show that given the first competence mentioned above (a denotation function), the other two can in fact be learned from very limited data (2.8M tokens), reaching state-of-the-art performance. I argue that both the nature of the data and the way it is presented to the system matter to acquisition.
false
[]
[]
null
null
null
I thank Ann Copestake and Katrin Erk for reading an early draft of this paper, as well as the participants to the GeCKo workshop in Barcelona for their helpful comments. I would also like to thank the anonymous reviewers for their helpful suggestions and comments. Finally, I gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan V GPU used for this research.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
agirre-martinez-2000-exploring
https://aclanthology.org/W00-1702
Exploring Automatic Word Sense Disambiguation with Decision Lists and the Web
The most effective paradigm for word sense disambiguation, supervised learning, seems to be stuck because of the knowledge acquisition bottleneck. In this paper we take an in-depth study of the performance of decision lists on two publicly available corpora and an additional corpus automatically acquired from the Web, using the fine-grained highly polysemous senses in WordNet. Decision lists are shown to be a versatile state-of-the-art technique. The experiments reveal, among other facts, that SemCor can be an acceptable (0.7 precision for polysemous words) starting point for an all-words system. The results on the DSO corpus show that for some highly polysemous words 0.7 precision seems to be the current state-of-the-art limit. On the other hand, independently constructed hand-tagged corpora are not mutually useful, and a corpus automatically acquired from the Web is shown to fail. Examples: 'church1' => GLOSS 'a group of Christians': "Why is one >>church<< satisfied and the other oppressed?"; 'church2' => MONOSEMOUS SYNONYM 'church building': "The result was a congregation formed at that place, and a >>church<< erected."
false
[]
[]
null
null
null
The work here presented received funds from projects OF319-99 (Government of Gipuzkoa), EX1998-30 (Basque Country Government) and 2FD1997-1503 (European Commission).
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
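The decision lists studied in agirre-martinez-2000-exploring above rank context features by a smoothed log-likelihood ratio and disambiguate with the single strongest matching feature. A toy Yarowsky-style sketch, reusing the church example from the abstract; the training pairs and the smoothing constant are invented:

```python
# Decision list WSD: rank (feature, sense) pairs by smoothed
# log-likelihood ratio, classify with the strongest matching feature.
import math
from collections import defaultdict

train = [({"group", "christians"}, "church1"),
         ({"building", "erected"}, "church2"),
         ({"congregation", "group"}, "church1"),
         ({"stone", "building"}, "church2")]

counts = defaultdict(lambda: defaultdict(int))
for feats, sense in train:
    for f in feats:
        counts[f][sense] += 1

def llr(f, sense, alpha=0.1):
    pro = counts[f][sense] + alpha
    con = sum(c for s, c in counts[f].items() if s != sense) + alpha
    return math.log(pro / con)

senses = {s for _, s in train}
decision_list = sorted(((llr(f, s), f, s) for f in counts for s in senses),
                       reverse=True)

def disambiguate(context):
    for score, f, s in decision_list:
        if f in context:
            return s
    return None

print(disambiguate({"the", "building", "was", "erected"}))  # -> church2
```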
reynaert-2014-ticclops
https://aclanthology.org/C14-2012
TICCLops: Text-Induced Corpus Clean-up as online processing system
We present the 'online processing system' version of Text-Induced Corpus Clean-up, a web service and application open for use to researchers. The system has over the past years been developed to provide mainly OCR error post-correction, but can just as fruitfully be employed to automatically correct texts for spelling errors, or to transcribe texts in an older spelling into the modern variant of the language. It has recently been re-implemented as a distributable and scalable software system in C++, designed to be easily adaptable for use with a broad range of languages and diachronic language varieties. Its new code base is now fit for production work and to be released as open source.
false
[]
[]
null
null
null
The author, Martin Reynaert, and TiCC senior scientific programmer Ko van der Sloot gratefully acknowledge support from CLARIN-NL in projects @PhilosTEI (CLARIN-NL-12-006) and OpenSoNaR (CLARIN-NL-12-013). The author further acknowledges support from NWO in project Nederlab.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
malmaud-etal-2015-whats
https://aclanthology.org/N15-1015
What's Cookin'? Interpreting Cooking Videos using Text, Speech and Vision
We present a novel method for aligning a sequence of instructions to a video of someone carrying out a task. In particular, we focus on the cooking domain, where the instructions correspond to the recipe. Our technique relies on an HMM to align the recipe steps to the (automatically generated) speech transcript. We then refine this alignment using a state-of-the-art visual food detector, based on a deep convolutional neural network. We show that our technique outperforms simpler techniques based on keyword spotting. It also enables interesting applications, such as automatically illustrating recipes with keyframes, and searching within a video for events of interest.
false
[]
[]
null
null
null
We would like to thank Alex Gorban and Anoop Korattikara for helping with some of the experiments, and Nancy Chang for feedback on the paper.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
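The alignment step in malmaud-etal-2015-whats above maps recipe steps to a speech transcript. The sketch below replaces the paper's HMM with a plain dynamic program over word-overlap scores, keeping only the monotonicity idea; the segmentation, the scoring function, and the recipe text are toy stand-ins.

```python
# Monotonic alignment of recipe steps to transcript segments:
# each step covers a contiguous, non-empty run of segments.
def overlap(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(1, len(wa | wb))

def align(steps, segments):
    n, m = len(steps), len(segments)
    NEG = float("-inf")
    # best[i][j]: best score aligning the first i steps to the
    # first j segments; back[i][j]: where step i's span starts.
    best = [[NEG] * (m + 1) for _ in range(n + 1)]
    back = [[0] * (m + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(i, m + 1):
            for k in range(i - 1, j):  # step i covers segments k..j-1
                sc = best[i - 1][k] + sum(overlap(steps[i - 1], segments[t])
                                          for t in range(k, j))
                if sc > best[i][j]:
                    best[i][j], back[i][j] = sc, k
    bounds, j = [], m
    for i in range(n, 0, -1):
        bounds.append((back[i][j], j))
        j = back[i][j]
    return list(reversed(bounds))

steps = ["chop the onion", "fry onion in butter", "add the eggs"]
segments = ["first chop up one onion", "now melt some butter",
            "fry the onion until golden", "crack in two eggs"]
print(align(steps, segments))  # -> [(0, 1), (1, 3), (3, 4)]
```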
kubler-2008-page
https://aclanthology.org/W08-1008
The PaGe 2008 Shared Task on Parsing German
The ACL 2008 Workshop on Parsing German features a shared task on parsing German. The goal of the shared task was to find reasons for the radically different behavior of parsers on the different treebanks and between constituent and dependency representations. In this paper, we describe the task and the data sets. In addition, we provide an overview of the test results and a first analysis.
false
[]
[]
null
null
null
First and foremost, we want to thank all the people and organizations that generously provided us with treebank data and without whom the shared task would have been literally impossible: Erhard Hinrichs, University of Tübingen (TüBa-D/Z), and Hans Uszkoreit, Saarland University and DFKI (TIGER). Secondly, we would like to thank Wolfgang Maier and Yannick Versley who performed the data conversions necessary for the shared task. Additionally, Wolfgang provided the scripts for the constituent evaluation.
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
aksu-etal-2022-n
https://aclanthology.org/2022.findings-acl.131
N-Shot Learning for Augmenting Task-Oriented Dialogue State Tracking
Augmentation of task-oriented dialogues has followed standard methods used for plain text, such as back-translation, word-level manipulation, and paraphrasing, despite its richly annotated structure. In this work, we introduce an augmentation framework that utilizes belief state annotations to match turns from various dialogues and form new synthetic dialogues in a bottom-up manner. Unlike other augmentation strategies, it operates with as few as five examples. Our augmentation strategy yields significant improvements both when adapting a DST model to a new domain and when adapting a language model to the DST task, on evaluations with TRADE and TOD-BERT models. Further analysis shows that our model performs better on values seen during training, and is also more robust to unseen values. We conclude that exploiting belief state annotations enhances dialogue augmentation and results in improved models in n-shot training scenarios.
false
[]
[]
null
null
null
This research was supported by the SINGA scholarship from A*STAR and by the National Research Foundation, Prime Minister's Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme. We would like to thank anonymous reviewers for their insightful feedback on how to improve the paper.
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
radford-etal-2018-adult
https://aclanthology.org/W18-0614
Can adult mental health be predicted by childhood future-self narratives? Insights from the CLPsych 2018 Shared Task
The CLPsych 2018 Shared Task B explores how childhood essays can predict psychological distress throughout the author's life. Our main aim was to build tools to help our psychologists understand the data, propose features and interpret predictions. We submitted two linear regression models: MODEL A uses simple demographic and word-count features, while MODEL B uses linguistic, entity, typographic, expert-gazetteer, and readability features. Our models perform best at younger prediction ages, with our best unofficial score, at age 23, being 0.426 disattenuated Pearson correlation. This task is challenging and although predictive performance is limited, we propose that tight integration of expertise across computational linguistics and clinical psychology is a productive direction.
true
[]
[]
Good Health and Well-Being
null
null
This study was approved by the University of New South Wales Human Research Ethics Advisory Panel (ref. HC180171). We thank the CLPsych reviewers for their thoughtful comments. KMK is funded by the Australian National Health and Medical Research Council (NHMRC) fellowship #1088313. KR is supported by the ARC-NHMRC Dementia Research Development Fellowship #1103312. LL is supported by the Serpentine Foundation Postdoctoral Fellowship. RP is supported by the Dementia Collaborative Research Centre.
2018
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chandrasekaran-etal-2018-punny
https://aclanthology.org/N18-2121
Punny Captions: Witty Wordplay in Image Descriptions
Wit is a form of rich interaction that is often grounded in a specific situation (e.g., a comment in response to an event). In this work, we attempt to build computational models that can produce witty descriptions for a given image. Inspired by a cognitive account of humor appreciation, we employ linguistic wordplay, specifically puns, in image descriptions. We develop two approaches which involve retrieving witty descriptions for a given image from a large corpus of sentences, or generating them via an encoder-decoder neural network architecture. We compare our approach against meaningful baseline approaches via human studies and show substantial improvements. We find that when a human is subject to similar constraints as the model regarding word usage and style, people vote the image descriptions generated by our model to be slightly wittier than human-written witty descriptions. Unsurprisingly, humans are almost always wittier than the model when they are free to choose the vocabulary, style, etc.
false
[]
[]
null
null
null
We thank Shubham Toshniwal for his advice regarding the automatic speech recognition model. This work was supported in part by: an NSF CAREER award, ONR YIP award, ONR Grant N00014-14-12713, PGA Family Foundation award, Google FRA, Amazon ARA, DARPA XAI grant to DP and NVIDIA GPU donations, Google FRA, IBM Faculty Award, and Bloomberg Data Science Research Grant to MB.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
rothe-etal-2021-simple
https://aclanthology.org/2021.acl-short.89
A Simple Recipe for Multilingual Grammatical Error Correction
This paper presents a simple recipe to train state-of-the-art multilingual Grammatical Error Correction (GEC) models. We achieve this by first proposing a language-agnostic method to generate a large number of synthetic examples. The second ingredient is to use large-scale multilingual language models (up to 11B parameters). Once fine-tuned on language-specific supervised sets, we surpass the previous state-of-the-art results on GEC benchmarks in four languages: English, Czech, German and Russian. Having established a new set of baselines for GEC, we make our results easily reproducible and accessible by releasing the CLANG-8 dataset. It is produced by using our best model, which we call gT5, to clean the targets of a widely used yet noisy LANG-8 dataset. CLANG-8 greatly simplifies typical GEC training pipelines composed of multiple fine-tuning stages; we demonstrate that performing a single fine-tuning step on CLANG-8 with the off-the-shelf language models yields further accuracy improvements over an already top-performing gT5 model for English.
false
[]
[]
null
null
null
We would like to thank Costanza Conforti, Shankar Kumar, Felix Stahlberg and Samer Hassan for useful discussions as well as their help with training and evaluating the models.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
singh-etal-2020-newssweeper
https://aclanthology.org/2020.semeval-1.231
newsSweeper at SemEval-2020 Task 11: Context-Aware Rich Feature Representations for Propaganda Classification
This paper describes our submissions to SemEval 2020 Task 11: Detection of Propaganda Techniques in News Articles, for each of the two subtasks of Span Identification and Technique Classification. We make use of a pre-trained BERT language model enhanced with tagging techniques developed for the task of Named Entity Recognition (NER) to develop a system for identifying propaganda spans in the text. For the second subtask, we incorporate contextual features in a pre-trained RoBERTa model for the classification of propaganda techniques. We were ranked 5th in the propaganda technique classification subtask.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
aramaki-etal-2007-uth
https://aclanthology.org/S07-1103
UTH: SVM-based Semantic Relation Classification using Physical Sizes
Although researchers have shown increasing interest in extracting/classifying semantic relations, most previous studies have basically relied on lexical patterns between terms. This paper proposes a novel way to accomplish the task: a system that captures a physical size of an entity. Experimental results revealed that our proposed method is feasible and prevents the problems inherent in other methods.
false
[]
[]
null
null
null
null
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kacmarcik-etal-2000-robust
https://aclanthology.org/C00-1057
Robust Segmentation of Japanese Text into a Lattice for Parsing
We describe a segmentation component that utilizes minimal syntactic knowledge to produce a lattice of word candidates for a broad coverage Japanese NL parser. The segmenter is a finite state morphological analyzer and text normalizer designed to handle the orthographic variations characteristic of written Japanese, including alternate spellings, script variation, vowel extensions and word-internal parenthetical material. This architecture differs from conventional Japanese wordbreakers in that it does not attempt to simultaneously attack the problems of identifying segmentation candidates and choosing the most probable analysis. To minimize duplication of effort between components and to give the segmenter greater freedom to address orthography issues, the task of choosing the best analysis is handled by the parser, which has access to a much richer set of linguistic information. By maximizing recall in the segmenter and allowing a precision of 34.7%, our parser currently achieves a breaking accuracy of ~97% over a wide variety of corpora.
false
[]
[]
null
null
null
null
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
rodriguez-penagos-2004-metalinguistic
https://aclanthology.org/W04-1802
Metalinguistic Information Extraction for Terminology
This paper describes and evaluates the Metalinguistic Operation Processor (MOP) system for automatic compilation of metalinguistic information from technical and scientific documents. This system is designed to extract non-standard terminological resources that we have called Metalinguistic Information Databases (or MIDs), in order to help update changing glossaries, knowledge bases and ontologies, as well as to reflect the metastable dynamics of special-domain knowledge.
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wang-etal-2022-miner
https://aclanthology.org/2022.acl-long.383
MINER: Improving Out-of-Vocabulary Named Entity Recognition from an Information Theoretic Perspective
NER models have achieved promising performance on standard NER benchmarks. However, recent studies show that previous approaches may over-rely on entity mention information, resulting in poor performance on out-of-vocabulary (OOV) entity recognition. In this work, we propose MINER, a novel NER learning framework, to remedy this issue from an information-theoretic perspective. The proposed approach contains two mutual information-based training objectives: i) generalizing information maximization, which enhances representation via deep understanding of context and entity surface forms; ii) superfluous information minimization, which discourages representation from rote memorizing entity names or exploiting biased cues in data. Experiments on various settings and datasets demonstrate that it achieves better performance in predicting OOV entities.
false
[]
[]
null
null
null
The authors would like to thank the anonymous reviewers for their helpful comments, and Ting Wu and Yiding Tan for their early contribution. This work was partially funded by the China National Key R&D Program (No. 2018YFB1005104) and the National Natural Science Foundation of China (No. 62076069, 61976056). This research was sponsored by the Hikvision Cooperation Fund, the Beijing Academy of Artificial Intelligence (BAAI), and the CAAI-Huawei MindSpore Open Fund.
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
moore-2004-improving
https://aclanthology.org/P04-1066
Improving IBM Word Alignment Model 1
We investigate a number of simple methods for improving the word-alignment accuracy of IBM Model 1. We demonstrate reduction in alignment error rate of approximately 30% resulting from (1) giving extra weight to the probability of alignment to the null word, (2) smoothing probability estimates for rare words, and (3) using a simple heuristic estimation method to initialize, or replace, EM training of model parameters.
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
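Two of the three improvements in moore-2004-improving above, extra weight on null alignment and additive smoothing of translation counts, drop straight into the Model 1 EM loop. A toy sketch follows; the corpus, the smoothing constant, and the null weight are illustrative, and the paper's heuristic initialization is omitted.

```python
# IBM Model 1 EM with add-n smoothing and a weighted null word.
from collections import defaultdict
from itertools import product

bitext = [("das haus", "the house"),
          ("das buch", "the book"),
          ("ein buch", "a book")]
NULL, SMOOTH, NULL_W = "<null>", 0.01, 2.0

pairs = [([NULL] + f.split(), e.split()) for f, e in bitext]
f_vocab = {w for fs, _ in pairs for w in fs}
e_vocab = {w for _, es in pairs for w in es}
t = {(f, e): 1 / len(e_vocab) for f, e in product(f_vocab, e_vocab)}

for _ in range(20):  # EM iterations
    cnt = defaultdict(float)
    tot = defaultdict(float)
    for fs, es in pairs:
        for e in es:
            # Posterior over source words, with extra null weight.
            weights = {f: t[f, e] * (NULL_W if f == NULL else 1.0)
                       for f in fs}
            z = sum(weights.values())
            for f, w in weights.items():
                cnt[f, e] += w / z
                tot[f] += w / z
    v = len(e_vocab)
    # Add-n smoothing of the expected counts.
    t = {(f, e): (cnt[f, e] + SMOOTH) / (tot[f] + SMOOTH * v)
         for f, e in product(f_vocab, e_vocab)}

print(round(t["haus", "house"], 3), round(t["haus", "book"], 3))
```

After a few iterations t('house' | 'haus') dominates t('book' | 'haus'), while the smoothing keeps rare pairs from being driven to zero.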
muaz-etal-2009-analysis
https://aclanthology.org/W09-3404
Analysis and Development of Urdu POS Tagged Corpus
In this paper, two corpora of Urdu (with 110K and 120K words) tagged with different POS tagsets are used to train TnT and Tree taggers. Error analysis of both taggers is done to identify frequent confusions in tagging. Based on the analysis of tagging, and syntactic structure of Urdu, a more refined tagset is derived. The existing tagged corpora are tagged with the new tagset to develop a single corpus of 230K words and the TnT tagger is retrained. The results show improvement in tagging accuracy for individual corpora to 94.2% and also for the merged corpus to 91%. Implications of these results are discussed.
false
[]
[]
null
null
null
null
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lewis-2014-getting
https://aclanthology.org/2014.tc-1.15
Getting the best out of a mixed bag
This paper discusses the development and implementation of an approach to the combination of Rule-Based Machine Translation, Statistical Machine Translation and Translation Memory technologies. The machine translation system itself draws upon translation memories and both syntactically and statistically generated phrase tables, unresolved sentences being fed to a Rules Engine. The output of the process is a TMX file containing a varying mixture of TM-generated and MT-generated sentences. The author has designed this workflow using his own language engineering tools written in Java.
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
knoth-etal-2011-using
https://aclanthology.org/W11-3602
Using Explicit Semantic Analysis for Cross-Lingual Link Discovery
This paper explores how to automatically generate cross-language links between resources in large document collections. The paper presents new methods for Cross-Lingual Link Discovery (CLLD) based on Explicit Semantic Analysis (ESA). The methods are applicable to any multilingual document collection. In this report, we present a comparative study of these methods on the Wikipedia corpus and provide new insights into the evaluation of link discovery systems. In particular, we measure the agreement of human annotators in linking articles in different language versions of Wikipedia, and compare it to the results achieved by the presented methods.
false
[]
[]
null
null
null
null
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sil-yates-2011-extracting
https://aclanthology.org/R11-1001
Extracting STRIPS Representations of Actions and Events
Knowledge about how the world changes over time is a vital component of commonsense knowledge for Artificial Intelligence (AI) and natural language understanding. Actions and events are fundamental components to any knowledge about changes in the state of the world: the states before and after an event differ in regular and predictable ways. We describe a novel system that tackles the problem of extracting knowledge from text about how actions and events change the world over time. We leverage standard language-processing tools, like semantic role labelers and coreference resolvers, as well as large-corpus statistics like pointwise mutual information, to identify STRIPS representations of actions and events, a type of representation commonly used in AI planning systems. In experiments on Web text, our extractor's Area under the Curve (AUC) improves by more than 31% over the closest system from the literature for identifying the preconditions and add effects of actions. In addition, we also extract significant aspects of STRIPS representations that are missing from previous work, including delete effects and arguments.
false
[]
[]
null
null
null
null
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dinkar-etal-2020-importance
https://aclanthology.org/2020.emnlp-main.641
The importance of fillers for text representations of speech transcripts
While being an essential component of spoken language, fillers (e.g. "um" or "uh") often remain overlooked in Spoken Language Understanding (SLU) tasks. We explore the possibility of representing them with deep contextualised embeddings, showing improvements on modelling spoken language and two downstream tasks-predicting a speaker's stance and expressed confidence.
false
[]
[]
null
null
null
This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 765955 and the French National Research Agency's grant ANR-17-MAOI.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jing-etal-2019-show
https://aclanthology.org/P19-1657
Show, Describe and Conclude: On Exploiting the Structure Information of Chest X-ray Reports
Chest X-Ray (CXR) images are commonly used for clinical screening and diagnosis. Automatically writing reports for these images can considerably lighten the workload of radiologists for summarizing descriptive findings and conclusive impressions. The complex structures between and within sections of the reports pose a great challenge to automatic report generation. Specifically, the section Impression is a diagnostic summarization over the section Findings, and the appearance of normality dominates each section over that of abnormality. Existing studies rarely explore and consider this fundamental structure information. In this work, we propose a novel framework which exploits the structure information between and within report sections for generating CXR imaging reports. First, we propose a two-stage strategy that explicitly models the relationship between Findings and Impression. Second, we design a novel cooperative multi-agent system that implicitly captures the imbalanced distribution between abnormality and normality. Experiments on two CXR report datasets show that our method achieves state-of-the-art performance in terms of various evaluation metrics. Our results show that the proposed approach is able to generate high-quality medical reports by integrating the structure information.
true
[]
[]
Good Health and Well-Being
null
null
null
2019
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
imperial-ong-2021-microscope
https://aclanthology.org/2021.paclic-1.1
Under the Microscope: Interpreting Readability Assessment Models for Filipino
Readability assessment is the process of identifying the level of ease or difficulty of a certain piece of text for its intended audience. Approaches have evolved from the use of arithmetic formulas to more complex pattern-recognizing models trained using machine learning algorithms. While these approaches provide competitive results, limited work has been done on quantitatively analyzing how linguistic variables affect model inference. In this work, we dissect machine learning-based readability assessment models in Filipino by performing global and local model interpretation to understand the contributions of varying linguistic features, and discuss their implications in the context of the Filipino language. Results show that a model trained with the top features from global interpretation obtained higher performance than models using features selected by Spearman correlation. Likewise, we empirically observed local feature-weight boundaries for discriminating reading difficulty at an extremely fine-grained level, and their corresponding effects when values are perturbed.
false
[]
[]
null
null
null
The authors would like to thank the anonymous reviewers for their valuable feedback and Dr. Ani Almario of Adarna House for allowing us to use their children's book dataset for this study. This work is also supported by the DOST National Research Council of the Philippines (NRCP).
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
rutherford-thanyawong-2019-written
https://aclanthology.org/W19-4710
Written on Leaves or in Stones?: Computational Evidence for the Era of Authorship of Old Thai Prose
We aim to provide computational evidence for the era of authorship of two important old Thai texts: Traiphumikatha and Pumratchatham. The era of authorship of these two books is still an ongoing debate among Thai literature scholars. Analysis of old Thai texts presents a challenge for standard natural language processing techniques, due to the lack of corpora necessary for building old Thai word and syllable segmenters. We propose an accurate and interpretable model to classify each segment as one of the three eras of authorship (Sukhothai, Ayuddhya, or Rattanakosin) without sophisticated linguistic preprocessing. Contrary to previous hypotheses, our model suggests that both books were written during the Sukhothai era. Moreover, the second half of Pumratchatham is uncharacteristic of the Sukhothai era, which may have confounded literary scholars in the past. Further, our model reveals that the most indicative linguistic changes stem from unidirectional grammaticalized words and polyfunctional words, which show up as the most dominant features in the model.
false
[]
[]
null
null
null
This research is funded by Grants for Development of New Faculty Staff at Chulalongkorn University.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
garrido-alenda-etal-2002-incremental
https://aclanthology.org/2002.tmi-papers.7
Incremental construction and maintenance of morphological analysers based on augmented letter transducers
We define deterministic augmented letter transducers (DALTs), a class of finite-state transducers which provide an efficient way of implementing morphological analysers that tokenize their input (i.e., divide texts into tokens or words) as they analyse it, and show how these morphological analysers may be maintained (i.e., how surface form-lexical form transductions may be added to or removed from them) while keeping them minimal; efficient algorithms for both operations are given in detail. The algorithms may also be applied to the incremental construction and maintenance of other lexical modules in a machine translation system, such as the lexical transfer module or the morphological generator.
false
[]
[]
null
null
null
null
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hamon-etal-1998-step
https://aclanthology.org/C98-1079
A step towards the detection of semantic variants of terms in technical documents
This paper reports the results of a preliminary experiment on the detection of semantic variants of terms in a French technical document. The general goal of our work is to help structure terminologies. Two kinds of semantic variants can be found in traditional terminologies: strict synonymy links and fuzzier relations like see-also. We have designed three rules which exploit general dictionary information to infer synonymy relations between complex candidate terms. The results have been examined by a human terminologist, who judged that half of the overall pairs of terms are relevant to semantic variation and validated an important part of the detected links as synonymy. Moreover, it appeared that numerous errors are due to a few misinterpreted links: they could be eliminated by a few exception rules.
true
[]
[]
Industry, Innovation and Infrastructure
null
null
This work is the result of a collaboration with the Direction des Etudes et Recherche (DER) d'Electricité de France (EDF). We thank Marie-Luce Picard from EDF and Benoît Habert from ENS Fontenay-St Cloud for their help, Didier Bourigault and Jean-Yves Hamon from the Institut de la Langue Française (INaLF) for the dictionary and Henry Boecon-Gibod for the validation of the results.
1998
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false