Dataset schema (field: type, with string-length ranges or distinct-value counts):

ID: string (length 11-54)
url: string (length 33-64)
title: string (length 11-184)
abstract: string (length 17-3.87k)
label_nlp4sg: bool (2 classes)
task: sequence
method: sequence
goal1: string (9 distinct values)
goal2: string (9 distinct values)
goal3: string (1 distinct value)
acknowledgments: string (length 28-1.28k)
year: string (length 4)
sdg1: bool (1 class)
sdg2: bool (1 class)
sdg3: bool (2 classes)
sdg4: bool (2 classes)
sdg5: bool (2 classes)
sdg6: bool (1 class)
sdg7: bool (1 class)
sdg8: bool (2 classes)
sdg9: bool (2 classes)
sdg10: bool (2 classes)
sdg11: bool (2 classes)
sdg12: bool (1 class)
sdg13: bool (2 classes)
sdg14: bool (1 class)
sdg15: bool (1 class)
sdg16: bool (2 classes)
sdg17: bool (2 classes)
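A minimal sketch of how records following this schema could be filtered in plain Python. The field names come from the schema above; the two sample rows are abbreviated stand-ins drawn from records in this dump (only a couple of the sdg1..sdg17 flags are shown), not the full dataset.

```python
# Abbreviated stand-in rows following the schema above (hypothetical subset).
rows = [
    {"ID": "chang-etal-2021-nao", "year": "2021", "label_nlp4sg": True,
     "goal1": "Good Health and Well-Being", "sdg3": True, "sdg4": False},
    {"ID": "sido-etal-2021-czert", "year": "2021", "label_nlp4sg": False,
     "goal1": None, "sdg3": False, "sdg4": False},
]

# Keep only the NLP4SG-positive rows and collect their active SDG flags.
positive = [r for r in rows if r["label_nlp4sg"]]
for r in positive:
    active_sdgs = sorted(k for k, v in r.items() if k.startswith("sdg") and v)
    print(r["ID"], r["goal1"], active_sdgs)
```

With the two stand-in rows, this prints the single positive record together with its goal and active SDG flag.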
sido-etal-2021-czert
https://aclanthology.org/2021.ranlp-1.149
Czert -- Czech BERT-like Model for Language Representation
This paper describes the training process of the first Czech monolingual language representation models based on the BERT and ALBERT architectures. We pre-train our models on more than 340K sentences, which is 50 times more than the multilingual models that include Czech data. We outperform the multilingual models on 9 out of 11 datasets. In addition, we establish new state-of-the-art results on nine datasets. Finally, we discuss properties of monolingual and multilingual models based upon our results. We publish all the pretrained and fine-tuned models freely for the research community.
false
[]
[]
null
null
null
This work has been partly supported by ERDF "Research and Development of Intelligent Components of Advanced Technologies for the Pilsen Metropolitan Area (InteCom)" (no.: CZ.02.1.01/0.0/0.0/17 048/0007267); and by Grant No. SGS-2019-018 Processing of heterogeneous data and its specialized applications. Computational resources were supplied by the project "e-Infrastruktura CZ" (e-INFRA LM2018140) provided within the program Projects of Large Research, Development and Innovations Infrastructures.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gehrmann-etal-2021-gem
https://aclanthology.org/2021.gem-1.10
The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics
We introduce GEM, a living benchmark for natural language Generation (NLG), its Evaluation, and Metrics. Measuring progress in NLG relies on a constantly evolving ecosystem of automated metrics, datasets, and human evaluation standards. Due to this moving target, new models often still evaluate on divergent anglo-centric corpora with well-established, but flawed, metrics. This disconnect makes it challenging to identify the limitations of current models and opportunities for progress. Addressing this limitation, GEM provides an environment in which models can easily be applied to a wide set of tasks and in which evaluation strategies can be tested. Regular updates to the benchmark will help NLG research become more multilingual and evolve the challenge alongside models. This paper serves as the description of the data for which we are organizing a shared task at our ACL 2021 Workshop and to which we invite the entire NLG community to participate.
false
[]
[]
null
null
null
The authors of this paper not named in the groups participated in initial discussions, participated in the surveys, and provided regular feedback and guidance. Many participants commented on and helped write this paper. We additionally thank all participants of INLG 2019, the Generation Birds-of-a-Feather meeting at ACL 2020, the EvalNLGEval Workshop at INLG 2020, and members of the generation challenge mailing list of SIGGEN for their participation in the discussions that inspired and influenced the creation of GEM.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
novak-novak-2021-transfer
https://aclanthology.org/2021.ranlp-1.119
Transfer-based Enrichment of a Hungarian Named Entity Dataset
In this paper, we present a major update to the first Hungarian named entity dataset, the Szeged NER corpus. We used zero-shot crosslingual transfer to initialize the enrichment of entity types annotated in the corpus using three neural NER models: two of them based on the English OntoNotes corpus and one based on the Czech Named Entity Corpus fine-tuned from multilingual neural language models. The output of the models was automatically merged with the original NER annotation, and automatically and manually corrected and further enriched with additional annotation, like qualifiers for various entity types. We present the evaluation of the zero-shot performance of the two OntoNotes-based models and a transformer-based new NER model trained on the training part of the final corpus. We release the corpus and the trained model.
false
[]
[]
null
null
null
This research was implemented with support provided by grants FK 125217 and PD 125216 of the National Research, Development and Innovation Office of Hungary financed under the FK 17 and PD 17 funding schemes as well as through the Artificial Intelligence National Excellence Program (grant no.: 2018-1.2.1-NKP-2018-00008).
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hu-etal-2019-texar
https://aclanthology.org/P19-3027
Texar: A Modularized, Versatile, and Extensible Toolkit for Text Generation
We introduce Texar, an open-source toolkit aiming to support the broad set of text generation tasks that transform any inputs into natural language, such as machine translation, summarization, dialog, content manipulation, and so forth. With the design goals of modularity, versatility, and extensibility in mind, Texar extracts common patterns underlying the diverse tasks and methodologies, creates a library of highly reusable modules and functionalities, and allows arbitrary model architectures and algorithmic paradigms. In Texar, model architecture, inference, and learning processes are properly decomposed. Modules at a high concept level can be freely assembled or plugged in/swapped out. Texar is thus particularly suitable for researchers and practitioners to do fast prototyping and experimentation. The versatile toolkit also fosters technique sharing across different text generation tasks. Texar supports both TensorFlow and PyTorch, and is released under Apache License 2.0 at https://www.texar.io.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhao-huang-1998-quasi
https://aclanthology.org/C98-1001
A Quasi-Dependency Model for Structural Analysis of Chinese BaseNPs
The paper puts forward a quasi-dependency model for structural analysis of Chinese baseNPs and an MDL-based algorithm for quasi-dependency-strength acquisition. The experiments show that the proposed model is more suitable for Chinese baseNP analysis and that the proposed MDL-based algorithm is superior to the traditional ML-based algorithm. The paper also discusses the problem of incorporating linguistic knowledge into the above statistical model.
false
[]
[]
null
null
null
null
1998
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
filimonov-harper-2007-recovery
https://aclanthology.org/D07-1065
Recovery of Empty Nodes in Parse Structures
In this paper, we describe a new algorithm for recovering WH-trace empty nodes. Our approach combines a set of handwritten patterns together with a probabilistic model. Because the patterns heavily utilize regular expressions, the pertinent tree structures are covered using a limited number of patterns. The probabilistic model is essentially a probabilistic context-free grammar (PCFG) approach with the patterns acting as the terminals in production rules. We evaluate the algorithm's performance on gold trees and parser output using three different metrics. Our method compares favorably with state-of-the-art algorithms that recover WH-traces.
false
[]
[]
null
null
null
We would like to thank Ryan Gabbard for providing us output from his algorithm for evaluation. We would also like to thank the anonymous reviewers for invaluable comments. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR0011-06-C-0023. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
linzen-2020-accelerate
https://aclanthology.org/2020.acl-main.465
How Can We Accelerate Progress Towards Human-like Linguistic Generalization?
This position paper describes and critiques the Pretraining-Agnostic Identically Distributed (PAID) evaluation paradigm, which has become a central tool for measuring progress in natural language understanding. This paradigm consists of three stages: (1) pretraining of a word prediction model on a corpus of arbitrary size; (2) fine-tuning (transfer learning) on a training set representing a classification task; (3) evaluation on a test set drawn from the same distribution as that training set. This paradigm favors simple, low-bias architectures, which, first, can be scaled to process vast amounts of data, and second, can capture the fine-grained statistical properties of a particular data set, regardless of whether those properties are likely to generalize to examples of the task outside the data set. This contrasts with humans, who learn language from several orders of magnitude less data than the systems favored by this evaluation paradigm, and generalize to new tasks in a consistent way. We advocate for supplementing or replacing PAID with paradigms that reward architectures that generalize as quickly and robustly as humans.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jin-hauptmann-2002-new
https://aclanthology.org/C02-1137
A New Probabilistic Model for Title Generation
Title generation is a complex task involving both natural language understanding and natural language synthesis. In this paper, we propose a new probabilistic model for title generation. Different from the previous statistical models for title generation, which treat title generation as a generation process that converts the 'document representation' of information directly into a 'title representation' of the same information, this model introduces a hidden state called 'information source' and divides title generation into two steps, namely the step of distilling the 'information source' from the observation of a document and the step of generating a title from the estimated 'information source'. In our experiment, the new probabilistic model outperforms the previous model for title generation in terms of both automatic evaluations and human judgments.
false
[]
[]
null
null
null
The authors are grateful to the anonymous reviewers for their comments, which have helped improve the quality of the paper. This material is based in part on work supported by National Science Foundation under Cooperative Agreement No. IRI-9817496. Partial support for this work was provided by the National Science Foundation's National Science, Mathematics, Engineering, and Technology Education Digital Library Program under grant DUE-0085834. This work was also supported in part by the Advanced Research and Development Activity (ARDA) under contract number MDA908-00-C-0037. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or ARDA.
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kozhevnikov-titov-2014-cross
https://aclanthology.org/P14-2095
Cross-lingual Model Transfer Using Feature Representation Projection
We propose a novel approach to crosslingual model transfer based on feature representation projection. First, a compact feature representation relevant for the task in question is constructed for either language independently and then the mapping between the two representations is determined using parallel data. The target instance can then be mapped into the source-side feature representation using the derived mapping and handled directly by the source-side model. This approach displays competitive performance on model transfer for semantic role labeling when compared to direct model transfer and annotation projection and suggests interesting directions for further research.
false
[]
[]
null
null
null
The authors would like to acknowledge the support of MMCI Cluster of Excellence and Saarbrücken Graduate School of Computer Science and thank the anonymous reviewers for their suggestions.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
knight-sproat-2009-writing
https://aclanthology.org/N09-4008
Writing Systems, Transliteration and Decipherment
Kevin Knight (USC/ISI) Richard Sproat (CSLU/OHSU) Nearly all of the core data that computational linguists deal with is in the form of text, which is to say that it consists of language data written (usually) in the standard writing system for the language in question. Yet surprisingly little is generally understood about how writing systems work. This tutorial will be divided into three parts. In the first part we discuss the history of writing and introduce a wide variety of writing systems, explaining their structure and how they encode language. We end this section with a brief review of how some of the properties of writing systems are handled in modern encoding systems, such as Unicode, and some of the continued pitfalls that can occur despite the best intentions of standardization. The second section of the tutorial will focus on the problem of transcription between scripts (often termed "transliteration"), and how this problem, which is important both for machine translation and named entity recognition, has been addressed. The third section is more theoretical and, at the same time we hope, more fun. We will discuss the problem of decipherment and how computational methods might be brought to bear on the problem of unlocking the mysteries of as yet undeciphered ancient scripts. We start with a brief review of three famous cases of decipherment. We then discuss how techniques that have been used in speech recognition and machine translation might be applied to the problem of decipherment. We end with a survey of the as-yet undeciphered ancient scripts and give some sense of the prospects of deciphering them given currently available data.
false
[]
[]
null
null
null
null
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
holan-etal-1998-two
https://aclanthology.org/W98-0503
Two Useful Measures of Word Order Complexity
This paper presents a class of dependency-based formal grammars (FODG) which can be parametrized by two different but similar measures of nonprojectivity. The measures allow one to formulate constraints on the degree of word-order freedom in a language described by a FODG. We discuss the problem of the degree of word-order freedom which should be allowed by a FODG describing the (surface) syntax of Czech.
false
[]
[]
null
null
null
null
1998
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mohammad-etal-2018-semeval
https://aclanthology.org/S18-1001
SemEval-2018 Task 1: Affect in Tweets
We present the SemEval-2018 Task 1: Affect in Tweets, which includes an array of subtasks on inferring the affectual state of a person from their tweet. For each task, we created labeled data from English, Arabic, and Spanish tweets. The individual tasks are: 1. emotion intensity regression, 2. emotion intensity ordinal classification, 3. valence (sentiment) regression, 4. valence ordinal classification, and 5. emotion classification. Seventy-five teams (about 200 team members) participated in the shared task. We summarize the methods, resources, and tools used by the participating teams, with a focus on the techniques and resources that are particularly useful. We also analyze systems for consistent bias towards a particular race or gender. The data is made freely available to further improve our understanding of how people convey emotions through language.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
miculicich-henderson-2022-graph
https://aclanthology.org/2022.findings-acl.215
Graph Refinement for Coreference Resolution
The state-of-the-art models for coreference resolution are based on independent mention pairwise decisions. We propose a modelling approach that learns coreference at the document level and takes global decisions. For this purpose, we model coreference links in a graph structure where the nodes are tokens in the text, and the edges represent the relationship between them. Our model predicts the graph in a non-autoregressive manner, then iteratively refines it based on previous predictions, allowing global dependencies between decisions. The experimental results show improvements over various baselines, reinforcing the hypothesis that document-level information improves coreference resolution.
false
[]
[]
null
null
null
This work was supported in part by the Swiss National Science Foundation, under grants 200021_178862 and CRSII5_180320.
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
prasad-etal-2008-towards
https://aclanthology.org/I08-7010
Towards an Annotated Corpus of Discourse Relations in Hindi
We describe our initial efforts towards developing a large-scale corpus of Hindi texts annotated with discourse relations. Adopting the lexically grounded approach of the Penn Discourse Treebank (PDTB), we present a preliminary analysis of discourse connectives in a small corpus. We describe how discourse connectives are represented in the sentence-level dependency annotation in Hindi, and discuss how the discourse annotation can enrich this level for research and applications. The ultimate goal of our work is to build a Hindi Discourse Relation Bank along the lines of the PDTB. Our work will also contribute to the cross-linguistic understanding of discourse connectives.
false
[]
[]
null
null
null
null
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kunz-etal-2021-heicic
https://aclanthology.org/2021.motra-1.2
HeiCiC: A simultaneous interpreting corpus combining product and pre-process data
This paper presents HeiCiC, a simultaneous interpreting corpus that comprises audio files, time-aligned transcripts, and corresponding preparation material complemented by annotation layers. The corpus supports a range of research questions focusing on strategic cognitive load management and its effects on the interpreting output. One research objective is the analysis of semantic transfer as a function of problem triggers in the source text, which represent potential cognitive load peaks. Another research approach correlates problem triggers with solution cues in the visual support material used by interpreters in the booth. Interpreting strategies based on this priming reduce cognitive load during simultaneous interpreting.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chang-etal-2021-nao
https://aclanthology.org/2021.ccl-1.57
脑卒中疾病电子病历实体及实体关系标注语料库构建(Corpus Construction for Named-Entity and Entity Relations for Electronic Medical Records of Stroke Disease)
This paper discusses the labeling of named entities and entity relations in Chinese electronic medical records of stroke disease, and proposes a labeling system and norms for entities and entity relations suited to the content and characteristics of such records. Guided by this labeling system and norms, we carried out several rounds of manual tagging and proofreading, completing entity and relation labeling over more than 1.5 million words. The result is an entity and entity-relation tagging corpus of stroke electronic medical records (Stroke Electronic Medical Record entity and entity relation Corpus, SEMRC). The constructed corpus contains 10,594 named entities and 14,457 entity relations. Annotation consistency reached 85.16% for named entities and 94.16% for entity relations.
true
[]
[]
Good Health and Well-Being
null
null
null
2021
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lee-chang-2003-acquisition
https://aclanthology.org/W03-0317
Acquisition of English-Chinese Transliterated Word Pairs from Parallel-Aligned Texts using a Statistical Machine Transliteration Model
This paper presents a framework for extracting English and Chinese transliterated word pairs from parallel texts. The approach is based on the statistical machine transliteration model to exploit the phonetic similarities between English words and corresponding Chinese transliterations. For a given proper noun in English, the proposed method extracts the corresponding transliterated word from the aligned text in Chinese. Under the proposed approach, the parameters of the model are automatically learned from a bilingual proper name list. Experimental results show that the average rates of word and character precision are 86.0% and 94.4%, respectively. The rates can be further improved with the addition of simple linguistic processing.
false
[]
[]
null
null
null
null
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zilio-etal-2017-using
https://doi.org/10.26615/978-954-452-049-6_107
Using NLP for Enhancing Second Language Acquisition
null
true
[]
[]
Quality Education
null
null
null
2017
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
nasr-rambow-2004-simple
https://aclanthology.org/W04-1503
A Simple String-Rewriting Formalism for Dependency Grammar
Recently, dependency grammar has gained renewed attention as empirical methods in parsing have emphasized the importance of relations between words, which is what dependency grammars model explicitly, but context-free phrase-structure grammars do not. While there has been much work on formalizing dependency grammar and on parsing algorithms for dependency grammars in the past, there is not a complete generative formalization of dependency grammar based on string-rewriting in which the derivation structure is the desired dependency structure. Such a system allows for the definition of a compact parse forest in a straightforward manner. In this paper, we present a simple generative formalism for dependency grammars based on Extended Context-Free Grammar, along with a parser; the formalism captures the intuitions of previous formalizations while deviating minimally from the much-used Context-Free Grammar.
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhao-etal-2010-automatic
https://aclanthology.org/C10-2171
Automatic Temporal Expression Normalization with Reference Time Dynamic-Choosing
Temporal expressions in texts carry significant temporal information. Understanding this information is very useful in many NLP applications, such as information extraction, document summarization, and question answering. Temporal expression normalization, which transforms temporal expressions into temporal information, has therefore attracted many researchers' attention. However, previous work, whether based on hand-crafted rules or machine-learned rules, cannot effectively address the actual problem of temporal reference in real texts. More specifically, the reference time choosing mechanism employed by these works is not adaptable to the universal implicit times in normalization. Aiming at this issue, we introduce a new reference time choosing mechanism for temporal expression normalization, called reference time dynamic-choosing, which dynamically assigns appropriate reference times to different classes of implicit temporal expressions during normalization. We then discuss a solution to temporal expression defuzzification based on scenario dependences among temporal expressions. Finally, we evaluate the system on a substantial corpus of Chinese news articles and obtain more promising results than the compared methods.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dolan-etal-2004-unsupervised
https://aclanthology.org/C04-1051
Unsupervised Construction of Large Paraphrase Corpora: Exploiting Massively Parallel News Sources
We investigate unsupervised techniques for acquiring monolingual sentence-level paraphrases from a corpus of temporally and topically clustered news articles collected from thousands of web-based news sources. Two techniques are employed: (1) simple string edit distance, and (2) a heuristic strategy that pairs initial (presumably summary) sentences from different news stories in the same cluster. We evaluate both datasets using a word alignment algorithm and a metric borrowed from machine translation. Results show that edit distance data is cleaner and more easily aligned than the heuristic data, with an overall alignment error rate (AER) of 11.58% on a similarly-extracted test set. On test data extracted by the heuristic strategy, however, performance of the two training sets is similar, with AERs of 13.2% and 14.7% respectively. Analysis of 100 pairs of sentences from each set reveals that the edit distance data lacks many of the complex lexical and syntactic alternations that characterize monolingual paraphrase. The summary sentences, while less readily alignable, retain more of the non-trivial alternations that are of greatest interest in learning paraphrase relationships.
false
[]
[]
null
null
null
We are grateful to Mo Corston-Oliver, Jeff Stevenson and Amy Muia of the Butler Hill Group for their work in annotating the data used in the experiments. We have also benefited from discussions with Ken Church, Mark Johnson, Daniel Marcu and Franz Och. We remain, however, responsible for all content.
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ghosh-etal-2020-cease
https://aclanthology.org/2020.lrec-1.201
CEASE, a Corpus of Emotion Annotated Suicide notes in English
A suicide note is usually written shortly before the suicide, and it provides a chance to comprehend the self-destructive state of mind of the deceased. From a psychological point of view, suicide notes have been utilized for recognizing the motive behind the suicide. To the best of our knowledge, there is no openly accessible suicide note corpus at present, making it challenging for researchers and developers to deep dive into the area of mental health assessment and suicide prevention. In this paper, we create a fine-grained emotion annotated corpus (CEASE) of suicide notes in English, and develop various deep learning models to perform emotion detection on the curated dataset. The corpus consists of 2393 sentences from around 205 suicide notes collected from various sources. Each sentence is annotated with a particular emotion class from a set of 15 fine-grained emotion labels, namely (forgiveness, happiness peacefulness, love, pride, hopefulness, thankfulness, blame, anger, fear, abuse, sorrow, hopelessness, guilt, information, instructions). For the evaluation, we develop an ensemble architecture, where the base models correspond to three supervised deep learning models, namely Convolutional Neural Network (CNN), Gated Recurrent Unit (GRU) and Long Short Term Memory (LSTM). We obtain the highest test accuracy of 60.17%, and cross-validation accuracy of 60.32%.
true
[]
[]
Good Health and Well-Being
null
null
Authors gratefully acknowledge the support from the project titled 'Development of C-DAC Digital Forensic Centre with AI based Knowledge Support Tools', supported by MeitY, Govt. of India and Govt. of Bihar. The authors would also like to thank the linguists: Akash Bhagat, Suman Shekhar (IIT Patna) and Danish Armaan (IIEST Shibpur) for their valuable efforts in labelling the tweets.
2020
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
horacek-2013-justifying
https://aclanthology.org/R13-1040
Justifying Corpus-Based Choices in Referring Expression Generation
Most empirically-based approaches to NL generation elaborate on co-occurrences and frequencies observed over a corpus, which are then accommodated by learning algorithms. This method fails to capture generalities in generation subtasks, such as generating referring expressions, so that results obtained for some corpus cannot be transferred with confidence to similar environments or even to other domains. In order to obtain a more general basis for choices in referring expression generation, we formulate situational and task-specific properties, and we test to what degree they hold in a specific corpus. As a novelty, we incorporate features of the role of the underlying task, object identification, into these property specifications; these features are inherently domain-independent. Our method has the potential to enable the development of a repertoire of regularities that express generalities and differences across situations and domains, which supports the development of generic algorithms and also leads to a better understanding of underlying dependencies.
false
[]
[]
null
null
null
null
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
carpenter-qu-1995-abstract
https://aclanthology.org/1995.iwpt-1.9
An Abstract Machine for Attribute-Value Logics
A direct abstract machine implementation of the core attribute-value logic operations is shown to decrease the number of operations and conserve the amount of storage required when compared to interpreters or indirect compilers. In this paper, we describe the fundamental data structures and compilation techniques that we have employed to develop a unification and constraint-resolution engine capable of performance rivaling that of directly compiled Prolog terms while greatly exceeding Prolog in flexibility, expressiveness and modularity. In this paper, we will discuss the core architecture of our machine. We begin with a survey of the data structures supporting the small set of attribute-value logic instructions. These instructions manipulate feature structures by means of features, equality, and typing, and manipulate the program state by search and sequencing operations. We further show how these core operations can be integrated with a broad range of standard parsing techniques. Feature structures improve upon Prolog terms by allowing data to be organized by feature rather than by position. This encourages modular program development through the use of sparse structural descriptions which can be logically conjoined into larger units and directly executed. Standard linguistic representations, even of relatively simple local syntactic and semantic structures, typically run to hundreds of substructures. The type discipline we impose organizes information in an object-oriented manner by the multiple inheritance of classes and their associated features and type value constraints. In practice, this allows the construction of large-scale grammars in a relatively short period of time. At run-time, eager copying and structure-sharing is replaced with lazy, incremental, and localized branch and write operations. 
In order to allow for applications with parallel search, incremental backtracking can be localized to disjunctive choice points within the description of a single structure, thus supporting the kind of conditional mutual consistency checks used in modern grammatical theories such as HPSG, GB, and LFG. Further attention is paid to the byte-coding of instructions and their efficient indexing and subsequent retrieval, all of which is keyed on type information. 1 Motivation Modern attribute-value constraint-based grammars share their primary operational structure with logic programs. In the past decade, Prolog compilers, such as Warren's Abstract Machine (Aït-Kaci 1990), have supplanted interpreters as the execution method of choice for logic programs. This is in large part due to a 50-fold speed-up in execution times and a reduction by an order of magnitude in terms of space required. In addition to efficiency, compilation also brings the opportunity for static error detection. The vast majority of the time and space used by traditional unification-based grammar interpreters is spent on copying and unifying feature structures. For example, in a bottom-up chart parser, the standard process would be first to build a feature structure for a lexical entry, then to build the feature structures for the relevant rules, and then to unify the matching structures. The principal drawback to this approach is that complete feature structures have to be constructed, even though unification may result in failure. In the case of failure, this can amount to a substantial amount of wasted time and space. By adopting an incremental compiled approach, a description is compiled into a set of abstract machine instructions. At run-time a description is evaluated incrementally, one instruction at a time. In this way, conflicts can be detected as early as possible, before any irrelevant structure has been introduced.
In practice, this often means that the inconsistency of a rule with a category can often be detected very
false
[]
[]
null
null
null
null
1995
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
batra-etal-2021-building
https://aclanthology.org/2021.emnlp-main.53
Building Adaptive Acceptability Classifiers for Neural NLG
We propose a novel framework to train models to classify acceptability of responses generated by natural language generation (NLG) models, improving upon existing sentence transformation and model-based approaches. An NLG response is considered acceptable if it is both semantically correct and grammatical. We don't make use of any human references, making the classifiers suitable for runtime deployment. Training data for the classifiers is obtained using a 2-stage approach of first generating synthetic data using a combination of existing and new model-based approaches, followed by a novel validation framework to filter and sort the synthetic data into acceptable and unacceptable classes. Our 2-stage approach adapts to a wide range of data representations and does not require additional data beyond what the NLG models are trained on. It is also independent of the underlying NLG model architecture, and is able to generate more realistic samples close to the distribution of the NLG model-generated responses. We present results on 5 datasets (WebNLG, Cleaned E2E, ViGGO, Alarm, and Weather) with varying data representations. We compare our framework with existing techniques that involve synthetic data generation using simple sentence transformations and/or model-based techniques, and show that building acceptability classifiers using data that resembles the generation model outputs followed by a validation framework outperforms the existing techniques, achieving state-of-the-art results. We also show that our techniques can be used in few-shot settings using self-training.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
husain-etal-2011-clausal
https://aclanthology.org/I11-1143
Clausal parsing helps data-driven dependency parsing: Experiments with Hindi
This paper investigates clausal data-driven dependency parsing. We first motivate a clause as the minimal parsing unit by correlating inter- and intra-clausal relations with relation type, depth, arc length and non-projectivity. This insight leads to a two-stage formulation of parsing where intra-clausal relations are identified in the 1st stage and inter-clausal relations are identified in the 2nd stage. We compare two ways of implementing this idea, one based on hard constraints (similar to the one used in constraint-based parsing) and one based on soft constraints (using a kind of parser stacking). Our results show that the approach using hard constraints seems most promising and performs significantly better than single-stage parsing. Our best result gives a significant increase in LAS and UAS, respectively, over the previous best result using single-stage parsing.
false
[]
[]
null
null
null
null
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ws-2001-adaptation
https://aclanthology.org/W01-0300
Adaptation in Dialog Systems
null
false
[]
[]
null
null
null
null
2001
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
maier-kallmeyer-2010-discontinuity
https://aclanthology.org/W10-4415
Discontinuity and Non-Projectivity: Using Mildly Context-Sensitive Formalisms for Data-Driven Parsing
We present a parser for probabilistic Linear Context-Free Rewriting Systems and use it for constituency and dependency treebank parsing. The choice of LCFRS, a formalism with an extended domain of locality, enables us to model discontinuous constituents and non-projective dependencies in a straightforward way. The parsing results show that, firstly, our parser is efficient enough to be used for data-driven parsing and, secondly, its output quality for constituency parsing is comparable to that of other state-of-the-art parsers, all while yielding structures that display discontinuous dependencies.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kiela-bottou-2014-learning
https://aclanthology.org/D14-1005
Learning Image Embeddings using Convolutional Neural Networks for Improved Multi-Modal Semantics
We construct multi-modal concept representations by concatenating a skip-gram linguistic representation vector with a visual concept representation vector computed using the feature extraction layers of a deep convolutional neural network (CNN) trained on a large labeled object recognition dataset. This transfer learning approach brings a clear performance gain over features based on the traditional bag-of-visual-word approach. Experimental results are reported on the WordSim353 and MEN semantic relatedness evaluation tasks. We use visual features computed using either ImageNet or ESP Game images.
false
[]
[]
null
null
null
We would like to thank Maxime Oquab for providing the feature extraction code.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kyle-etal-2013-native
https://aclanthology.org/W13-1731
Native Language Identification: A Key N-gram Category Approach
This study explores the efficacy of an approach to native language identification that utilizes grammatical, rhetorical, semantic, syntactic, and cohesive function categories comprised of key n-grams. The study found that a model based on these categories of key n-grams was able to successfully predict the L1 of essays written in English by L2 learners from 11 different L1 backgrounds with an accuracy of 59%. Preliminary findings concerning instances of crosslinguistic influence are discussed, along with evidence of language similarities based on patterns of language misclassification.
false
[]
[]
null
null
null
We thank ETS for compiling and providing the TOEFL11 corpus, and we also thank the organizers of the NLI Shared Task 2013.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bingel-etal-2016-extracting
https://aclanthology.org/P16-1071
Extracting token-level signals of syntactic processing from fMRI - with an application to PoS induction
Neuro-imaging studies on reading different parts of speech (PoS) report somewhat mixed results, yet some of them indicate different activations with different PoS. This paper addresses the difficulty of using fMRI to discriminate between linguistic tokens in reading of running text because of low temporal resolution. We show that once we solve this problem, fMRI data contains a signal of PoS distinctions to the extent that it improves PoS induction with error reductions of more than 4%.
true
[]
[]
Good Health and Well-Being
null
null
This research was partially funded by the ERC Starting Grant LOWLANDS No. 313695, as well as by Trygfonden.
2016
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
long-etal-2017-xjnlp
https://aclanthology.org/S17-2178
XJNLP at SemEval-2017 Task 12: Clinical temporal information extraction with a Hybrid Model
Temporality is crucial in understanding the course of clinical events from a patient's electronic health records and temporal processing is becoming more and more important for improving access to content. SemEval 2017 Task 12 (Clinical TempEval) addressed this challenge using the THYME corpus, a corpus of clinical narratives annotated with a schema based on TimeML guidelines. We developed and evaluated approaches for: extraction of temporal expressions (TIMEX3) and EVENTs; EVENT attributes; document-time relations. Our approach is a hybrid model which is based on rule-based methods, semi-supervised learning, and semantic features with the addition of manually crafted rules.
true
[]
[]
Good Health and Well-Being
null
null
This work has been supported by "The Fundamental Theory and Applications of Big Data with
2017
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
evans-1996-legitimate
https://aclanthology.org/Y96-1033
Legitimate Termination of Nonlocal Features in HPSG
This paper reviews the treatment of wh-question facts offered by Lappin and Johnson 1996, and suggests that their account of certain island phenomena should be adapted by assuming that certain phrase structures license binding of inherited features. In Japanese, Lappin and Johnson's INHERILQUE feature appears to be dependent on INHERIQUE in order to terminate with a functional C head's TO-BINDIQUE. For certain languages, C's TO-BINDILQUE feature must be null if TO-BINDIQUE is null. In the spirit of Sag 1996 and Pollard and Yoo 1996, the facts can be handled by saying that TO-BINDILQUE is licensed on a wh-clause (wh-cl). As a wh-cl requires TO-BINDIQUE, the dependence of the less robust INHERILQUE on INHERIQUE is thus explained.
false
[]
[]
null
null
null
null
1996
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
suzuki-2012-classifying
https://aclanthology.org/W12-5307
Classifying Hotel Reviews into Criteria for Review Summarization
Recently, we can refer to user reviews on shopping or hotel reservation sites. However, with the exponential growth of information on the Internet, it is becoming increasingly difficult for a user to read and understand all the material from large-scale reviews. In this paper, we propose a method for classifying hotel reviews written in Japanese into criteria, e.g., location and facilities. Our system first extracts words which represent criteria from hotel reviews. The extracted words are classified into 12 criteria classes. Then, for each hotel, each sentence of the guest reviews is classified into criterion classes by using two different types of Naive Bayes classifiers. We performed experiments estimating the accuracy of classifying hotel reviews into the 12 criteria. The results showed the effectiveness of our method and indicated that it can be used for review summarization by guests' criteria.
false
[]
[]
null
null
null
The authors would like to thank the referees for their comments on the earlier version of this paper. This work was partially supported by The Telecommunications Advancement Foundation.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
inaba-1996-computational
https://aclanthology.org/Y96-1029
A Computational Expression of Initial Binary Feet and Surface Ternary Feet in Metrical Theory
Under the strict binary foot parsing (Kager 1993), stray elements may occur between bimoraic feet. The stray element may be associated to the preceding foot or following foot at surface level. Stray element adjunction is the mechanism for achieving surface exhaustivity. Each language has its own unique mechanism of stray element adjunction in order to achieve surface exhaustivity. In Japanese loanwords, the strict binary initial foot parsing creates stray moras. Inaba's (1996) phonetic experiment shows that the word-medial stray moras associate to preceding feet, and provides evidence for the initial unaccented mora as extrametrical. Since the theoretical points I advance are deeply embedded in other languages, I present a set of possible parameters. Based on the set of parameters, I create a computer program which derives the surface foot structures of input loanwords in Japanese, Fijian, and Ponapean.
false
[]
[]
null
null
null
null
1996
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
silva-etal-2010-top
http://www.lrec-conf.org/proceedings/lrec2010/pdf/136_Paper.pdf
Top-Performing Robust Constituency Parsing of Portuguese: Freely Available in as Many Ways as you Can Get it
In this paper we present LX-Parser, a probabilistic, robust constituency parser for Portuguese. This parser achieves ca. 88% f-score in the labeled bracketing task, thus reaching a state-of-the-art performance score that is in line with those that are currently obtained by top-ranking parsers for English, the most studied natural language. To the best of our knowledge, LX-Parser is the first state-of-the-art, robust constituency parser for Portuguese that is made freely available. This parser is being distributed in a variety of ways, each suited for a different type of usage. More specifically, LX-Parser is being made available (i) as a downloadable, stand-alone parsing tool that can be run locally by its users; (ii) as a Web service that exposes an interface that can be invoked remotely and transparently by client applications; and finally (iii) as an on-line parsing service, aimed at human users, that can be accessed through any common Web browser.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
perez-beltrachini-lapata-2021-models
https://aclanthology.org/2021.emnlp-main.742
Models and Datasets for Cross-Lingual Summarisation
We present a cross-lingual summarisation corpus with long documents in a source language associated with multi-sentence summaries in a target language. The corpus covers twelve language pairs and directions for four European languages, namely Czech, English, French and German, and the methodology for its creation can be applied to several other languages. We derive cross-lingual document-summary instances from Wikipedia by combining lead paragraphs and articles' bodies from language-aligned Wikipedia titles. We analyse the proposed cross-lingual summarisation task with automatic metrics and validate it with a human study. To illustrate the utility of our dataset we report experiments with multilingual pretrained models in supervised, zero- and few-shot, and out-of-domain scenarios.
false
[]
[]
null
null
null
We thank the anonymous reviewers for their feedback. We also thank Yumo Xu for useful discussions about the models. We are extremely grateful to our bilingual annotators and to Voxeurop SCE publishers. We gratefully acknowledge the support of the European Research Council (award number 681760).
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
komninos-manandhar-2016-dependency
https://aclanthology.org/N16-1175
Dependency Based Embeddings for Sentence Classification Tasks
We compare different word embeddings from a standard window based skipgram model, a skipgram model trained using dependency context features and a novel skipgram variant that utilizes additional information from dependency graphs. We explore the effectiveness of the different types of word embeddings for word similarity and sentence classification tasks. We consider three common sentence classification tasks: question type classification on the TREC dataset, binary sentiment classification on Stanford's Sentiment Treebank and semantic relation classification on the SemEval 2010 dataset. For each task we use three different classification methods: a Support Vector Machine, a Convolutional Neural Network and a Long Short Term Memory Network. Our experiments show that dependency based embeddings outperform standard window based embeddings in most of the settings, while using dependency context embeddings as additional features improves performance in all tasks regardless of the classification method. Our embeddings and code are available at https://www.cs.york.ac.uk/nlp/extvec
false
[]
[]
null
null
null
Alexandros Komninos was supported by EP-SRC via an Engineering Doctorate in LSCITS. Suresh Manandhar was supported by EPSRC grant EP/I037512/1, A Unified Model of Compositional & Distributional Semantics: Theory and Application.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
merlo-van-der-plas-2009-abstraction
https://aclanthology.org/P09-1033
Abstraction and Generalisation in Semantic Role Labels: PropBank, VerbNet or both?
Semantic role labels are the representation of the grammatically relevant aspects of a sentence meaning. Capturing the nature and the number of semantic roles in a sentence is therefore fundamental to correctly describing the interface between grammar and meaning. In this paper, we compare two annotation schemes, PropBank and VerbNet, in a task-independent, general way, analysing how well they fare in capturing the linguistic generalisations that are known to hold for semantic role labels, and consequently how well they grammaticalise aspects of meaning. We show that VerbNet is more verb-specific and better able to generalise to new semantic role instances, while PropBank better captures some of the structural constraints among roles. We conclude that these two resources should be used together, as they are complementary.
false
[]
[]
null
null
null
We thank James Henderson and Ivan Titov for useful comments. The research leading to these results has received partial funding from the EU FP7 programme (FP7/2007-2013) under grant agreement number 216594 (CLASSIC project: www.classic-project.org).
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
green-2011-effects
https://aclanthology.org/P11-3013
Effects of Noun Phrase Bracketing in Dependency Parsing and Machine Translation
Flat noun phrase structure was, up until recently, the standard in annotation for the Penn Treebanks. With the recent addition of internal noun phrase annotation, dependency parsing and applications down the NLP pipeline are likely affected. Some machine translation systems, such as TectoMT, use deep syntax as a language transfer layer. It is proposed that changes to the noun phrase dependency parse will have a cascading effect down the NLP pipeline and, in the end, improve machine translation output, even with a reduction in parser accuracy that the noun phrase structure might cause. This paper examines this noun phrase structure's effect on dependency parsing, in English, with a maximum spanning tree parser and shows a 2.43% (0.23 BLEU score) improvement for English-to-Czech machine translation.
false
[]
[]
null
null
null
This research has received funding from the European Commissions 7th Framework Program (FP7) under grant agreement n • 238405 (CLARA), and from grant MSM 0021620838. I would like to thank ZdeněkŽabokrtský for his guidance in this research and also the anonymous reviewers for their comments.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
fei-etal-2020-mimic
https://aclanthology.org/2020.findings-emnlp.18
Mimic and Conquer: Heterogeneous Tree Structure Distillation for Syntactic NLP
Syntax has been shown useful for various NLP tasks, while existing work mostly encodes singleton syntactic tree using one hierarchical neural network. In this paper, we investigate a simple and effective method, Knowledge Distillation, to integrate heterogeneous structure knowledge into a unified sequential LSTM encoder. Experimental results on four typical syntax-dependent tasks show that our method outperforms tree encoders by effectively integrating rich heterogeneous structure syntax, meanwhile reducing error propagation, and also outperforms ensemble methods, in terms of both the efficiency and accuracy.
false
[]
[]
null
null
null
This work is supported by the National Natural Science Foundation of China (No. 61772378, No. 61702121)
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mcintosh-2009-canadian
https://aclanthology.org/2009.mtsummit-government.4
Canadian Job Bank Automated Translation System
§ Job Bank (www.jobbank.gc.ca) is a free job-posting service provided by the Federal Government to all Canadians. Employers have the option to create a profile; upon approval, they can then post job offers. § Job seekers are able to access these positions in two ways. Standard Job Search and Job Matching § Several additional tools are available to assist the Job Seeker with their job search; such as resume builder, job alert, job search tips, and career navigator Job Bank for Employers § Employers can post job advertisements 24 hours a day, 7 days a week using the "Job Bank for Employers" Web site § Job offers received by fax, e-mail, Internet and telephone must be published simultaneously in both French and English within 24 business hours § 70,356,222 Job Bank Web site visits in 2008-2009 § 1,138,233 Spelling and Grammar Checker § Customized entries are added on a weekly basis to a single file § Has been integrated into the JBFE interface Oracle Database § Archives offers and their post-edited equivalents in a database § Automatically posts offers that are identical (100% match) along with their translation § Of all offers posted to the JB site, 45% are reproduced by the database
true
[]
[]
Decent Work and Economic Growth
null
null
null
2009
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
sornlertlamvanich-etal-2000-automatic
https://aclanthology.org/C00-2116
Automatic Corpus-Based Thai Word Extraction with the C4.5 Learning Algorithm
"Word" is difficult to define in languages that do not exhibit explicit word boundaries, such as Thai. Traditional methods of defining words for this kind of language depend on human judgement, which is based on unclear criteria or procedures, and have several limitations. This paper proposes an algorithm for word extraction from Thai texts without resorting to word segmentation. We employ the C4.5 learning algorithm for this task. Several attributes such as string length, frequency, mutual information and entropy are chosen for word/non-word determination. Our experiment yields high-precision results of about 85% on both the training and test corpora.
false
[]
[]
null
null
null
Special thanks to Assistant Professor Mikio Yamamoto for providing the useful program to extract all substrings from the corpora in linear time.
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
song-etal-2020-multi
https://aclanthology.org/2020.emnlp-main.546
Multi-Stage Pre-training for Automated Chinese Essay Scoring
This paper proposes a pre-training based automated Chinese essay scoring method. The method involves three components: weakly supervised pre-training, supervised cross-prompt fine-tuning and supervised target-prompt fine-tuning. An essay scorer is first pretrained on a large essay dataset covering diverse topics and with coarse ratings, i.e., good and poor, which are used as a kind of weak supervision. The pre-trained essay scorer would be further fine-tuned on previously rated essays from existing prompts, which have the same score range as the target prompt and provide extra supervision. Finally, the scorer is fine-tuned on the target-prompt training data. The evaluation on four prompts shows that this method can improve a state-of-the-art neural essay scorer in terms of effectiveness and domain adaptation ability, while in-depth analysis also reveals its limitations.
true
[]
[]
Quality Education
null
null
This work is supported by the National Natural Science Foundation of China (Nos. 61876113, 61876112), Beijing Natural Science Foundation (No. 4192017) and Capital Building for Sci-Tech Innovation-Fundamental Scientific Research Funds. Lizhen Liu is the corresponding author.
2020
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
vu-etal-2022-domain
https://aclanthology.org/2022.findings-acl.49
Domain Generalisation of NMT: Fusing Adapters with Leave-One-Domain-Out Training
Generalising to unseen domains is underexplored and remains a challenge in neural machine translation. Inspired by recent research in parameter-efficient transfer learning from pretrained models, this paper proposes a fusion-based generalisation method that learns to combine domain-specific parameters. We propose a leave-one-domain-out training strategy to avoid information leaking, addressing the challenge of not knowing the test domain during training time. Empirical results on three language pairs show that our proposed fusion method outperforms other baselines by up to +0.8 BLEU score on average.
false
[]
[]
null
null
null
This research is supported by an eBay Research Award and the ARC Future Fellowship FT190100039. This work is partly sponsored by the Air Force Research Laboratory and DARPA under agreement number FA8750-19-2-0501. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The authors are grateful to the anonymous reviewers for their helpful comments to improve the manuscript.
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bisk-hockenmaier-2013-hdp
https://aclanthology.org/Q13-1007
An HDP Model for Inducing Combinatory Categorial Grammars
We introduce a novel nonparametric Bayesian model for the induction of Combinatory Categorial Grammars from POS-tagged text. It achieves state of the art performance on a number of languages, and induces linguistically plausible lexicons.
false
[]
[]
null
null
null
This work is supported by NSF CAREER award 1053856 (Bayesian Models for Lexicalized Grammars).
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jauhiainen-etal-2017-evaluating
https://aclanthology.org/W17-1212
Evaluating HeLI with Non-Linear Mappings
In this paper we describe the non-linear mappings we used with the Helsinki language identification method, HeLI, in the 4th edition of the Discriminating between Similar Languages (DSL) shared task, which was organized as part of the VarDial 2017 workshop. Our SUKI team participated in the closed track together with 10 other teams. Our system reached the 7th position in the track. We describe the HeLI method and the non-linear mappings in mathematical notation. The HeLI method uses a probabilistic model with character n-grams and word-based backoff. We also describe our trials using the non-linear mappings instead of relative frequencies and we present statistics about the back-off function of the HeLI method.
false
[]
[]
null
null
null
We would like to thank Kimmo Koskenniemi for many valuable discussions and comments. This research was made possible by funding from the Kone Foundation Language Programme.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhang-etal-2020-fast
https://aclanthology.org/2020.wmt-1.62
Fast Interleaved Bidirectional Sequence Generation
Independence assumptions during sequence generation can speed up inference, but parallel generation of highly interdependent tokens comes at a cost in quality. Instead of assuming independence between neighbouring tokens (semi-autoregressive decoding, SA), we take inspiration from bidirectional sequence generation and introduce a decoder that generates target words from the left-to-right and right-to-left directions simultaneously. We show that we can easily convert a standard architecture for unidirectional decoding into a bidirectional decoder by simply interleaving the two directions and adapting the word positions and self-attention masks. Our interleaved bidirectional decoder (IBDecoder) retains the model simplicity and training efficiency of the standard Transformer, and on five machine translation tasks and two document summarization tasks, achieves a decoding speedup of ∼2× compared to autoregressive decoding with comparable quality. Notably, it outperforms left-to-right SA because the independence assumptions in IBDecoder are more felicitous. To achieve even higher speedups, we explore hybrid models where we either simultaneously predict multiple neighbouring tokens per direction, or perform multi-directional decoding by partitioning the target sequence. These methods achieve speedups of 4×-11× across different tasks at the cost of <1 BLEU or <0.5 ROUGE (on average).
false
[]
[]
null
null
null
This work was performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service (http://www.csd3.cam.ac.uk/), provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/P020259/1), and DiRAC funding from the Science and Technology Facilities Council (www.dirac.ac.uk). Ivan Titov acknowledges support of the European Research Council (ERC Starting grant 678254) and the Dutch National Science Foundation (NWO VIDI 639.022.518). Rico Sennrich acknowledges support of the Swiss National Science Foundation (MUTAMUR; no. 176727).
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
grefenstette-2015-inriasac
https://aclanthology.org/S15-2152
INRIASAC: Simple Hypernym Extraction Methods
For information retrieval, it is useful to classify documents using a hierarchy of terms from a domain. One problem is that, for many domains, hierarchies of terms are not available. The task 17 of SemEval 2015 addresses the problem of structuring a set of terms from a given domain into a taxonomy without manual intervention. Here we present some simple taxonomy structuring techniques, such as term overlap and document and sentence cooccurrence in large quantities of text (English Wikipedia) to produce hypernym pairs for the eight domain lists supplied by the task organizers. Our submission ranked first in this 2015 benchmark, which suggests that overly complicated methods might need to be adapted to individual domains. We describe our generic techniques and present an initial evaluation of results.
false
[]
[]
null
null
null
This research is partially funded by a research grant from INRIA, and the Paris-Saclay Institut de la Société Numérique funded by the IDEX Paris-Saclay, ANR-11-IDEX-0003-02.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
seyffarth-kallmeyer-2020-corpus
https://aclanthology.org/2020.coling-main.357
Corpus-based Identification of Verbs Participating in Verb Alternations Using Classification and Manual Annotation
English verb alternations allow participating verbs to appear in a set of syntactically different constructions whose associated semantic frames are systematically related. We use ENCOW and VerbNet data to train classifiers to predict the instrument subject alternation and the causative-inchoative alternation, relying on count-based and vector-based features as well as perplexity-based language model features, which are intended to reflect each alternation's felicity by simulating it. Beyond the prediction task, we use the classifier results as a source for a manual annotation step in order to identify new, unseen instances of each alternation. This is possible because existing alternation datasets contain positive, but no negative instances and are not comprehensive. Over several sequences of classification-annotation steps, we iteratively extend our sets of alternating verbs. Our hybrid approach to the identification of new alternating verbs reduces the required annotation effort by only presenting annotators with the highest-scoring candidates from the previous classification. Due to the success of semi-supervised and unsupervised features, our approach can easily be transferred to further alternations.
false
[]
[]
null
null
null
The work presented in this paper was financed by the Deutsche Forschungsgemeinschaft (DFG) within the CRC 991 "The Structure of Representations in Language, Cognition, and Science" and the individual DFG project "Unsupervised Frame Induction (FInd)". We wish to thank the anonymous reviewers for their constructive feedback and helpful comments.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
shen-etal-2021-sciconceptminer
https://aclanthology.org/2021.acl-demo.6
SciConceptMiner: A system for large-scale scientific concept discovery
Scientific knowledge is evolving at an unprecedented rate of speed, with new concepts constantly being introduced from millions of academic articles published every month. In this paper, we introduce a self-supervised end-to-end system, SciConceptMiner, for the automatic capture of emerging scientific concepts from both independent knowledge sources (semi-structured data) and academic publications (unstructured documents). First, we adopt a BERT-based sequence labeling model to predict candidate concept phrases with self-supervision data. Then, we incorporate rich Web content for synonym detection and concept selection via a web search API. This two-stage approach achieves highly accurate (94.7%) concept identification with more than 740K scientific concepts. These concepts are deployed in the Microsoft Academic production system and are the backbone for its semantic search capability.
true
[]
[]
Industry, Innovation and Infrastructure
null
null
We split the sampled data of each category into 3 groups of 100 each, and they are evaluated by 3 judges. We report the average of positive label ratios.
2021
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
niu-2017-chinese
https://aclanthology.org/W17-6519
Chinese Descriptive and Resultative V-de Constructions. A Dependency-based Analysis
This contribution presents a dependency grammar (DG) analysis of the so-called descriptive and resultative V-de constructions in Mandarin Chinese (VDCs); it focuses, in particular, on the dependency analysis of the noun phrase that intervenes between the two predicates in a VDC. Two methods, namely chunking data collected from informants and two diagnostics specific to Chinese, i.e. bǎ and bèi sentence formation, were used. They were employed to discern which analysis should be preferred, i.e. the ternary-branching analysis, in which the intervening NP (NP2) is a dependent of the first predicate (P1), or the small-clause analysis, in which NP2 depends on the second predicate (P2). The results obtained suggest a flexible structural analysis for VDCs in the form of "NP1+P1-de+NP2+P2". The difference in structural assignment is attributed to a semantic property of NP2 and the semantic relations it forms with adjacent predicates.
false
[]
[]
null
null
null
The research presented in this article was funded by the Ministry of Education of the People's Republic of China, Grant # 15YJA74001.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lichouri-abbas-2020-speechtrans
https://aclanthology.org/2020.smm4h-1.19
SpeechTrans@SMM4H'20: Impact of Preprocessing and N-grams on Automatic Classification of Tweets That Mention Medications
This paper describes our system developed for automatically classifying tweets that mention medications. We used the Decision Tree classifier for this task. We have shown that using some elementary preprocessing steps and TF-IDF n-grams led to acceptable classifier performance. Indeed, the F1-score recorded was 74.58% in the development phase and 63.70% in the test phase.
true
[]
[]
Good Health and Well-Being
null
null
null
2020
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
alexin-etal-2003-annotated
https://aclanthology.org/E03-1012
Annotated Hungarian National Corpus
The beginning of the work dates back to 1998 when the authors started a research project on the application of ILP (Inductive Logic Programming) learning methods for part-of-speech tagging. This research was done within the framework of a European ESPRIT project (LTR 20237, "ILP2"), where first studies were based on the so-called TELRI corpus (Erjavec et al., 1998). Since the corpus annotation had several deficiencies and its size proved to be small for further research, a national project has been organized with the main goal to create a suitably large training corpus for machine learning applications, primarily for POS (part-of-speech) tagging.
false
[]
[]
null
null
null
The project was partially supported by the Hungarian Ministry of Education (grant: IKTA 27/2000). The authors also would like to thank researchers of the Research Institute for Linguistics at the Hungarian Academy of Sciences for their kind help and advice.
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
meyers-etal-2004-cross
http://www.lrec-conf.org/proceedings/lrec2004/pdf/397.pdf
The Cross-Breeding of Dictionaries
Especially for English, the number of hand-coded electronic resources available to the Natural Language Processing Community keeps growing: annotated corpora, treebanks, lexicons, wordnets, etc. Unfortunately, initial funding for such projects is much easier to obtain than the additional funding needed to enlarge or improve upon such resources. Thus once one proves the usefulness of a resource, it is difficult to make that resource reach its full potential. We discuss techniques for combining dictionary resources and producing others by semi-automatic means. The resources we created using these techniques have become an integral part of our work on NomBank, a project with the goal of annotating noun arguments in the Penn Treebank II corpus (PTB).
false
[]
[]
null
null
null
Nombank is supported under Grant N66001-001-1-8917 from the Space and Naval Warfare Systems Center San Diego. This paper does not necessarily reflect the position or the policy of the U.S. Government.
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
janarthanam-lemon-2010-adaptive
https://aclanthology.org/W10-4324
Adaptive Referring Expression Generation in Spoken Dialogue Systems: Evaluation with Real Users
We present new results from a real-user evaluation of a data-driven approach to learning user-adaptive referring expression generation (REG) policies for spoken dialogue systems. Referring expressions can be difficult to understand in technical domains where users may not know the technical 'jargon' names of the domain entities. In such cases, dialogue systems must be able to model the user's (lexical) domain knowledge and use appropriate referring expressions. We present a reinforcement learning (RL) framework in which the system learns REG policies which can adapt to unknown users online. For real users of such a system, we show that in comparison to an adaptive hand-coded baseline policy, the learned policy performs significantly better, with a 20.8% average increase in adaptation accuracy, 12.6% decrease in time taken, and a 15.1% increase in task completion rate. The learned policy also has a significantly better subjective rating from users. This is because the learned policies adapt online to changing evidence about the user's domain expertise. We also discuss the issue of evaluation in simulation versus evaluation with real users.
false
[]
[]
null
null
null
The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 216594 (CLASSiC project www.classic-project.org) and from the EPSRC, project no. EP/G069840/1.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sitbon-bellot-2006-tools
http://www.lrec-conf.org/proceedings/lrec2006/pdf/410_pdf.pdf
Tools and methods for objective or contextual evaluation of topic segmentation
In this paper we discuss the way of evaluating topic segmentation, from mathematical measures on variously constructed reference corpora to contextual evaluation depending on different topic segmentation usages. We present an overview of the different ways of building reference corpora and of mathematically evaluating segmentation methods, and then we focus on three tasks which may involve topic segmentation: text extraction, information retrieval and document presentation. We have developed two graphical interfaces, one for an intrinsic comparison, and the other one dedicated to an evaluation in an information retrieval context. These tools will very soon be distributed under GPL licences on the Technolangue project web page.
false
[]
[]
null
null
null
null
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
foret-nir-2002-rigid
https://aclanthology.org/C02-1111
Rigid Lambek Grammars Are Not Learnable from Strings
null
false
[]
[]
null
null
null
null
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
defauw-etal-2019-collecting
https://aclanthology.org/W19-6733
Collecting domain specific data for MT: an evaluation of the ParaCrawl pipeline
This paper investigates the effectiveness of the ParaCrawl pipeline for collecting domain-specific training data for machine translation. We follow the different steps of the pipeline (document alignment, sentence alignment, cleaning) and add a topic-filtering component. Experiments are performed on the legal domain for the English to French and English to Irish language pairs. We evaluate the pipeline at both intrinsic (alignment quality) and extrinsic (MT performance) levels. Our results show that with this pipeline we obtain high-quality alignments and significant improvements in MT quality.
false
[]
[]
null
null
null
This work was performed in the framework of the SMART 2015/1091 project ("Tools and resources for CEF automated translation"), funded by the CEF Telecom programme (Connecting Europe Facility).
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jonnalagadda-etal-2013-evaluating
https://aclanthology.org/W13-0404
Evaluating the Use of Empirically Constructed Lexical Resources for Named Entity Recognition
One of the most time-consuming tasks faced by a Natural Language Processing (NLP) researcher or practitioner trying to adapt a machine-learning-based NER system to a different domain is the creation, compilation, and customization of the needed lexicons. Lexical resources, such as lexicons of concept classes, are considered necessary to improve the performance of NER. It is typical for medical informatics researchers to implement modularized systems that cannot be generalized (Stanfill et al. 2010). As the work of constructing or customizing lexical resources needed for these highly specific systems is human-intensive, automatic generation is a desirable alternative. It might be possible that empirically created lexical resources incorporate domain knowledge into a machine-learning NER engine and increase its accuracy. Although many machine learning-based NER techniques require annotated data, semi-supervised and unsupervised techniques for NER have long been explored due to their value in domain robustness and minimizing labor costs. Some attempts at automatic knowledge-base construction included automatic thesaurus discovery efforts (Grefenstette 1994), which sought to build lists of similar words without human intervention to aid in query expansion, or automatic dictionary construction (Riloff 1996). More recently, empirically derived semantics have been used for NER by Finkel and Manning (Finkel and Manning 2009a), Turian et al. (Turian et al. 2010), and others. Finkel's NER tool uses clusters of terms built a priori from the British National corpus (Aston and Burnard 1998) and the English Gigaword corpus (Graff et al. 2003) for extracting concepts from newswire text and PubMed abstracts for extracting gene mentions from biomedical literature. Turian et al. (Turian et al. 2010) also showed that statistically created word clusters (P. F. Brown et al. 1992; Clark 2000) could be used to improve named entity recognition.
However, only a single feature (cluster membership) can be derived from the clusters. Semantic vector representations of terms had not previously been used for NER or sequential tagging classification tasks (Turian et al. 2010). Although empirically derived vector representations have been used for extracting concepts defined in the GENIA (Kim, Ohta, and Tsujii 2008) ontology from biomedical literature using rule-based methods, it was not clear whether such methods could be ported to extract other concepts or incrementally improve the performance of an existing system. This work not only demonstrates how such vector representations could improve state-of-the-art NER, but also that they are more useful than statistical clustering in this context.
false
[]
[]
null
null
null
This work was possible because of funding from possible sources: NLM HHSN276201000031C (PI: Gonzalez), NCRR 3UL1RR024148, NCRR 1RC1RR028254, NSF 0964613 and the Brown Foundation (PI: Bernstam), NSF ABI:0845523, NLM R01LM009959A1 (PI: Liu) and NLM 1K99LM011389 (PI: Jonnalagadda). We also thank the developers of BANNER (http://banner.sourceforge.net/), MALLET (http://mallet.cs.umass.edu/) and Semantic Vectors (http://code.google.com/p/semanticvectors/) for the software packages and the organizers of the i2b2/VA 2010 NLP challenge for sharing the corpus.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dai-etal-2020-multi
https://aclanthology.org/2020.emnlp-main.565
A Multi-Task Incremental Learning Framework with Category Name Embedding for Aspect-Category Sentiment Analysis
(T)ACSA tasks, including aspect-category sentiment analysis (ACSA) and targeted aspect-category sentiment analysis (TACSA), aim at identifying sentiment polarity on predefined categories. Incremental learning on new categories is necessary for (T)ACSA real applications. Though current multi-task learning models achieve good performance in (T)ACSA tasks, they suffer from catastrophic forgetting problems in (T)ACSA incremental learning tasks. In this paper, to make multi-task learning feasible for incremental learning, we propose the Category Name Embedding network (CNE-net). We set both encoder and decoder shared among all categories to weaken the catastrophic forgetting problem. Besides the original input sentence, we apply another input feature, i.e., the category name, for task discrimination. Our model achieved state-of-the-art on two (T)ACSA benchmark datasets. Furthermore, we propose a dataset for (T)ACSA incremental learning and achieve the best performance compared with other strong baselines.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nn-1977-finite-string-volume-14-number-5
https://aclanthology.org/J77-3003
The FINITE STRING, Volume 14, Number 5
AMERICAN JOURNAL OF COMPUTATIONAL LINGUISTICS is published by the Association for Computational Linguistics.
false
[]
[]
null
null
null
null
1977
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tyers-etal-2012-flexible
https://aclanthology.org/2012.eamt-1.54
Flexible finite-state lexical selection for rule-based machine translation
In this paper we describe a module (rule formalism, rule compiler and rule processor) designed to provide flexible support for lexical selection in rule-based machine translation. The motivation and implementation for the system is outlined and an efficient algorithm to compute the best coverage of lexical-selection rules over an ambiguous input sentence is described. We provide a demonstration of the module by learning rules for it on a typical training corpus and evaluating against other possible lexical-selection strategies. The inclusion of the module, along with rules learnt from the parallel corpus, provides a small but consistent and statistically significant improvement over either using the highest-scoring translation according to a target-language model or using the most frequent aligned translation in the parallel corpus which is also found in the system's bilingual dictionaries.
false
[]
[]
null
null
null
We are thankful for the support of the Spanish Ministry of Science and Innovation through project TIN2009-14009-C02-01, and the Universitat d'Alacant through project GRE11-20. We also thank Sergio Ortiz Rojas for his constructive comments and ideas on the development of the system, and the anonymous reviewers for comments on the manuscript.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
pucher-2007-wordnet
https://aclanthology.org/P07-2033
WordNet-based Semantic Relatedness Measures in Automatic Speech Recognition for Meetings
This paper presents the application of WordNet-based semantic relatedness measures to Automatic Speech Recognition (ASR) in multi-party meetings. Different word-utterance context relatedness measures and utterance-coherence measures are defined and applied to the rescoring of N-best lists. No significant improvements in terms of Word-Error-Rate (WER) are achieved compared to a large word-based n-gram baseline model. We discuss our results and the relation to other work that achieved an improvement with such models for simpler tasks.
false
[]
[]
null
null
null
This work was supported by the European Union 6th FP IST Integrated Project AMI (Augmented Multi-party Interaction), and by Kapsch Carrier-Com AG and Mobilkom Austria AG together with the Austrian competence centre programme Kplus.
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
calzolari-etal-2004-enabler
http://www.lrec-conf.org/proceedings/lrec2004/pdf/545.pdf
ENABLER Thematic Network of National Projects: Technical, Strategic and Political Issues of LRs
In this paper we present general strategies concerning Language Resources (LRs)-Written, Spoken and, recently, Multimodal-as developed within the ENABLER Thematic Network. LRs are a central component of the so-called "linguistic infrastructure" (the other key element being Evaluation), necessary for the development of any Human Language Technology (HLT) application. They play a critical role, as horizontal technology, in different emerging areas of FP6, and have been recognized as a priority within a number of national projects around Europe and worldwide. The availability of LRs is also a "sensitive" issue, touching directly the sphere of linguistic and cultural identity, but also with economical, societal and political implications. This is going to be even more true in the new Europe with 25 languages on a par.
true
[]
[]
Industry, Innovation and Infrastructure
Partnership for the goals
null
null
2004
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
escoter-etal-2017-grouping
https://aclanthology.org/E17-1103
Grouping business news stories based on salience of named entities
In news aggregation systems focused on broad news domains, certain stories may appear in multiple articles. Depending on the relative importance of the story, the number of versions can reach dozens or hundreds within a day. The text in these versions may be nearly identical or quite different. Linking multiple versions of a story into a single group brings several important benefits to the end-user: reducing the cognitive load on the reader, as well as signaling the relative importance of the story. We present a grouping algorithm, and explore several vector-based representations of input documents: from a baseline using keywords to a method using salience, a measure of importance of named entities in the text. We demonstrate that features beyond keywords yield substantial improvements, verified on a manually-annotated corpus of business news stories.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sartorio-etal-2013-transition
https://aclanthology.org/P13-1014
A Transition-Based Dependency Parser Using a Dynamic Parsing Strategy
We present a novel transition-based, greedy dependency parser which implements a flexible mix of bottom-up and top-down strategies. The new strategy allows the parser to postpone difficult decisions until the relevant information becomes available. The novel parser has a ∼12% error reduction in unlabeled attachment score over an arc-eager parser, with a slowdown factor of 2.8.
false
[]
[]
null
null
null
We wish to thank Liang Huang and Marco Kuhlmann for discussion related to the ideas reported in this paper, and the anonymous reviewers for their useful suggestions. The second author has been partially supported by MIUR under project PRIN No. 2010LYA9RH 006.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bagga-2000-analyzing
https://aclanthology.org/W00-0106
Analyzing the Reading Comprehension Task
In this paper we describe a method for analyzing the reading comprehension task. First, we describe a method of classifying facts (information) into categories or levels, where each level signifies a different degree of difficulty of extracting a fact from a piece of text containing it. We then proceed to show how one can use this model to analyze the complexity of the reading comprehension task. Finally, we analyze five different reading comprehension tasks and present results from this analysis.
false
[]
[]
null
null
null
null
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chen-etal-2020-distilling
https://aclanthology.org/2020.acl-main.705
Distilling Knowledge Learned in BERT for Text Generation
Large-scale pre-trained language models such as BERT have achieved great success in language understanding tasks. However, it remains an open question how to utilize BERT for language generation. In this paper, we present a novel approach, Conditional Masked Language Modeling (C-MLM), to enable the finetuning of BERT on target generation tasks. The finetuned BERT (teacher) is exploited as extra supervision to improve conventional Seq2Seq models (student) for better text generation performance. By leveraging BERT's idiosyncratic bidirectional nature, distilling knowledge learned in BERT can encourage auto-regressive Seq2Seq models to plan ahead, imposing global sequence-level supervision for coherent text generation. Experiments show that the proposed approach significantly outperforms strong Transformer baselines on multiple language generation tasks such as machine translation and text summarization. Our proposed model also achieves new state of the art on IWSLT German-English and English-Vietnamese MT datasets.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
castillo-2010-machine
https://aclanthology.org/W10-1609
A Machine Learning Approach for Recognizing Textual Entailment in Spanish
This paper presents a system that uses machine learning algorithms for the task of recognizing textual entailment in the Spanish language. The datasets used include the SPARTE Corpus and versions of the RTE3, RTE4 and RTE5 datasets translated into Spanish. The features chosen quantify lexical, syntactic and semantic level matching between text and hypothesis sentences. We analyze how different dataset sizes and classifiers could impact the final overall performance of two-way RTE classification in Spanish. The RTE system yields 60.83% accuracy, and a competitive result of 66.50% accuracy is reported with train and test sets taken from the SPARTE Corpus using a 70% split.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wich-etal-2020-impact
https://aclanthology.org/2020.alw-1.7
Impact of Politically Biased Data on Hate Speech Classification
One challenge that social media platforms are facing nowadays is hate speech. Hence, automatic hate speech detection has been increasingly researched in recent years, in particular with the rise of deep learning. A problem of these models is their vulnerability to undesirable bias in training data. We investigate the impact of political bias on hate speech classification by constructing three politically biased data sets (left-wing, right-wing, politically neutral) and comparing the performance of classifiers trained on them. We show that (1) political bias negatively impairs the performance of hate speech classifiers and (2) an explainable machine learning model can help to visualize such bias within the training data. The results show that political bias in training data has an impact on hate speech classification and can become a serious issue.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
This paper is based on joint work in the context of Jan Bauer's master's thesis (Bauer, 2020). This research has been partially funded by a scholarship from the Hanns Seidel Foundation, financed by the German Federal Ministry of Education and Research.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
temperley-2010-invited
https://aclanthology.org/N10-1114
Invited Talk: Music, Language, and Computational Modeling: Lessons from the Key-Finding Problem
Recent research in computational music research, including my own, has been greatly influenced by methods in computational linguistics. But I believe the influence could also go the other way: Music may offer some interesting lessons for language research, particularly with regard to the modeling of cognition.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
singh-etal-2011-large
https://aclanthology.org/P11-1080
Large-Scale Cross-Document Coreference Using Distributed Inference and Hierarchical Models
Cross-document coreference, the task of grouping all the mentions of each entity in a document collection, arises in information extraction and automated knowledge base construction. For large collections, it is clearly impractical to consider all possible groupings of mentions into distinct entities. To solve the problem we propose two ideas: (a) a distributed inference technique that uses parallelism to enable large scale processing, and (b) a hierarchical model of coreference that represents uncertainty over multiple granularities of entities to facilitate more effective approximate inference. To evaluate these ideas, we constructed a labeled corpus of 1.5 million disambiguated mentions in Web pages by selecting link anchors referring to Wikipedia entities. We show that the combination of the hierarchical model with distributed inference quickly obtains high accuracy (with error reduction of 38%) on this large dataset, demonstrating the scalability of our approach.
false
[]
[]
null
null
null
This work was done when the first author was an intern at Google Research. The authors would like to thank Mark Dredze, Sebastian Riedel, and anonymous reviewers for their valuable feedback. This work was supported in part by the Center for Intelligent Information Retrieval; the University of Massachusetts gratefully acknowledges the support of the Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0181; in part by an award from Google; in part by the Central Intelligence Agency, the National Security Agency and National Science Foundation under NSF grant #IIS-0326249; in part by NSF grant #CNS-0958392; and in part by UPenn NSF medium IIS-0803847. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dethlefs-etal-2014-cluster
https://aclanthology.org/E14-1074
Cluster-based Prediction of User Ratings for Stylistic Surface Realisation
Surface realisations typically depend on their target style and audience. A challenge in estimating a stylistic realiser from data is that humans vary significantly in their subjective perceptions of linguistic forms and styles, leading to almost no correlation between ratings of the same utterance. We address this problem in two steps. First, we estimate a mapping function between the linguistic features of a corpus of utterances and their human style ratings. Users are partitioned into clusters based on the similarity of their ratings, so that ratings for new utterances can be estimated, even for new, unknown users. In a second step, the estimated model is used to re-rank the outputs of a number of surface realisers to produce stylistically adaptive output. Results confirm that the generated styles are recognisable to human judges and that predictive models based on clusters of users lead to better rating predictions than models based on an average population of users.
false
[]
[]
null
null
null
This research was funded by the EC FP7 programme FP7/2011-14 under grant agreements no. 270019 (SPACEBOOK) and no. 287615 (PARLANCE).
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yang-etal-2016-extraction
https://aclanthology.org/N16-2012
Extraction of Bilingual Technical Terms for Chinese-Japanese Patent Translation
The translation of patents or scientific papers is a key issue that should be helped by the use of statistical machine translation (SMT). In this paper, we propose a method to improve Chinese-Japanese patent SMT by pre-marking the training corpus with aligned bilingual multi-word terms. We automatically extract multi-word terms from monolingual corpora by combining statistical and linguistic filtering methods. We use the sampling-based alignment method to identify aligned terms and set a threshold on translation probabilities to select the most promising bilingual multi-word terms. We pre-mark a Chinese-Japanese training corpus with such selected aligned bilingual multi-word terms. We obtain a precision of over 70% in bilingual term extraction and a significant improvement of BLEU scores in our experiments on a Chinese-Japanese patent parallel corpus.
true
[]
[]
Industry, Innovation and Infrastructure
null
null
null
2016
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
magri-2014-error
https://aclanthology.org/W14-2802
The Error-driven Ranking Model of the Acquisition of Phonotactics: How to Keep the Faithfulness Constraints at Bay
A problem which arises in the theory of the error-driven ranking model of the acquisition of phonotactics is that the faithfulness constraints need to be promoted but should not be promoted too high. This paper motivates this technical problem and shows how to tune the promotion component of the re-ranking rule so as to keep the faithfulness constraints at bay.
false
[]
[]
null
null
null
This research was supported by a Marie Curie Intra European Fellowship within the 7th European
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
philpot-etal-2005-omega
https://aclanthology.org/I05-7009
The Omega Ontology
null
false
[]
[]
null
null
null
null
2005
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
christodoulopoulos-etal-2016-incremental
https://aclanthology.org/W16-1906
An incremental model of syntactic bootstrapping
Syntactic bootstrapping is the hypothesis that learners can use the preliminary syntactic structure of a sentence to identify and characterise the meanings of novel verbs. Previous work has shown that syntactic bootstrapping can begin using only a few seed nouns (Connor et al., 2010; Connor et al., 2012). Here, we relax their key assumption: rather than training the model over the entire corpus at once (batch mode), we train the model incrementally, thus more realistically simulating a human learner. We also improve on the verb prediction method by incorporating the assumption that verb assignments are stable over time. We show that, given a high enough number of seed nouns (around 30), an incremental model achieves similar performance to the batch model. We also find that the number of seed nouns shown to be sufficient in the previous work is not sufficient under the more realistic incremental model. The results demonstrate that adopting more realistic assumptions about the early stages of language acquisition can provide new insights without undermining performance.
false
[]
[]
null
null
null
The authors would like to thank the anonymous reviewers for their suggestions. Many thanks also to Catriona Silvey for her help with the manuscript. This research is supported by NIH grant R01-HD054448-07.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
leeuwenberg-moens-2018-word
https://aclanthology.org/C18-1291
Word-Level Loss Extensions for Neural Temporal Relation Classification
Unsupervised pre-trained word embeddings are used effectively for many tasks in natural language processing to leverage unlabeled textual data. Often these embeddings are either used as initializations or as fixed word representations for task-specific classification models. In this work, we extend our classification model's task loss with an unsupervised auxiliary loss on the word-embedding level of the model. This is to ensure that the learned word representations contain both task-specific features, learned from the supervised loss component, and more general features learned from the unsupervised loss component. We evaluate our approach on the task of temporal relation extraction, in particular, narrative containment relation extraction from clinical records, and show that continued training of the embeddings on the unsupervised objective together with the task objective gives better task-specific embeddings, and results in an improvement over the state of the art on the THYME dataset, using only a general-domain part-of-speech tagger as linguistic resource.
false
[]
[]
null
null
null
The authors would like to thank the reviewers for their constructive comments which helped us to improve the paper. Also, we would like to thank the Mayo Clinic for permission to use the THYME corpus. This work was funded by the KU Leuven C22/15/16 project "MAchine Reading of patient recordS (MARS)", and by the IWT-SBO 150056 project "ACquiring CrUcial Medical information Using LAnguage TEchnology" (ACCUMULATE).
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
pereira-etal-2010-learning
https://aclanthology.org/W10-0601
Learning semantic features for fMRI data from definitional text
(Mitchell et al., 2008) showed that it was possible to use a text corpus to learn the value of hypothesized semantic features characterizing the meaning of a concrete noun. The authors also demonstrated that those features could be used to decompose the spatial pattern of fMRI-measured brain activation in response to a stimulus containing that noun and a picture of it. In this paper we introduce a method for learning such semantic features automatically from a text corpus, without needing to hypothesize them or provide any proxies for their presence on the text. We show that those features are effective in a more demanding classification task than that in (Mitchell et al., 2008) and describe their qualitative relationship to the features proposed in that paper.
true
[]
[]
Good Health and Well-Being
null
null
We would like to thank David Blei for discussions about topic modelling in general and of the Wikipedia corpus in particular and Ken Norman for valuable feedback at various stages of the work.
2010
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tsai-lai-2018-functions
https://aclanthology.org/Y18-1078
The Functions of Must-constructions in Spoken Corpus: A Constructionist Perspective
This study investigates must constructions in the Spoken British National Corpus 2014 (Spoken BNC2014). A constructionist perspective is taken to examine the structure and distribution of must constructions in the spoken corpus. Moreover, a conversational analysis is conducted to identify the functions of must constructions as they are used in communication. Adopting corpus analytical procedures, we identified two major must constructions, [must+be] and [must+'ve/have], whose central member [there+must+be+some] conducts the topic extending function while [she+must+'ve/have+been] is related to the speaker's evaluation of the condition of an individual identified as she. On the other hand, although [must+Verb] does not have a very high type frequency, its central member [I+must+admit+I] performs an important interpersonal function in minimizing possible negative impact brought about by the speaker's comment. The findings suggest that the central members of must constructions exhibit dynamic and interactive functions in daily conversations.
false
[]
[]
null
null
null
This work was supported in part by the Ministry of Education under the Grants 107H121-08.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
power-1999-generating
https://aclanthology.org/E99-1002
Generating referring expressions with a unification grammar
A simple formalism is proposed to represent the contexts in which pronouns, definite/indefinite descriptions, and ordinal descriptions (e.g. 'the second book') can be used, and the way in which these expressions change the context. It is shown that referring expressions can be generated by a unification grammar provided that some phrase-structure rules are specially tailored to express entities in the current knowledge base.
false
[]
[]
null
null
null
null
1999
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chandrahas-etal-2020-inducing
https://aclanthology.org/2020.icon-main.9
Inducing Interpretability in Knowledge Graph Embeddings
We study the problem of inducing interpretability in Knowledge Graph (KG) embeddings. Learning KG embeddings has been an active area of research in the past few years, resulting in many different models. However, most of these methods do not address the interpretability (semantics) of individual dimensions of the learned embeddings. In this work, we study this problem and propose a method for inducing interpretability in KG embeddings using entity co-occurrence statistics. The proposed method significantly improves the interpretability, while maintaining comparable performance in other KG tasks.
false
[]
[]
null
null
null
We thank the anonymous reviewers for their constructive comments. This work is supported by the Ministry of Human Resources Development (Government of India).
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
coltekin-2010-freely
http://www.lrec-conf.org/proceedings/lrec2010/pdf/109_Paper.pdf
A Freely Available Morphological Analyzer for Turkish
This paper presents TRmorph, a two-level morphological analyzer for Turkish. TRmorph is a fairly complete and accurate morphological analyzer for Turkish. However, strength of TRmorph is neither in its performance, nor in its novelty. The main feature of this analyzer is its availability. It has completely been implemented using freely available tools and resources, and the two-level description is also distributed with a license that allows others to use and modify it freely for different applications. To our knowledge, TRmorph is the first freely available morphological analyzer for Turkish. This makes TRmorph particularly suitable for applications where the analyzer has to be changed in some way, or as a starting point for morphological analyzers for similar languages. TRmorph's specification of Turkish morphology is relatively complete, and it is distributed with a large lexicon. Along with the description of how the analyzer is implemented, this paper provides an evaluation of the analyzer on two large corpora.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hu-etal-2018-texar
https://aclanthology.org/W18-2503
Texar: A Modularized, Versatile, and Extensible Toolbox for Text Generation
We introduce Texar, an open-source toolkit aiming to support the broad set of text generation tasks. Different from many existing toolkits that are specialized for specific applications (e.g., neural machine translation), Texar is designed to be highly flexible and versatile. This is achieved by abstracting the common patterns underlying the diverse tasks and methodologies, creating a library of highly reusable modules and functionalities, and enabling arbitrary model architectures and various algorithmic paradigms. The features make Texar particularly suitable for technique sharing and generalization across different text generation applications. The toolkit emphasizes heavily on extensibility and modularized system design, so that components can be freely plugged in or swapped out. We conduct extensive experiments and case studies to demonstrate the use and advantage of the toolkit.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
montariol-allauzen-2019-empirical
https://aclanthology.org/R19-1092
Empirical Study of Diachronic Word Embeddings for Scarce Data
Word meaning change can be inferred from drifts of time-varying word embeddings. However, temporal data may be too sparse to build robust word embeddings and to discriminate significant drifts from noise. In this paper, we compare three models to learn diachronic word embeddings on scarce data: incremental updating of a Skip-Gram from Kim et al. (2014), dynamic filtering from Bamler and Mandt (2017), and dynamic Bernoulli embeddings from Rudolph and Blei (2018). In particular, we study the performance of different initialisation schemes and emphasise what characteristics of each model are more suitable to data scarcity, relying on the distribution of detected drifts. Finally, we regularise the loss of these models to better adapt to scarce data.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mostafazadeh-davani-etal-2021-improving
https://aclanthology.org/2021.woah-1.10
Improving Counterfactual Generation for Fair Hate Speech Detection
Bias mitigation approaches reduce models' dependence on sensitive features of data, such as social group tokens (SGTs), resulting in equal predictions across the sensitive features. In hate speech detection, however, equalizing model predictions may ignore important differences among targeted social groups, as hate speech can contain stereotypical language specific to each SGT. Here, to take the specific language about each SGT into account, we rely on counterfactual fairness and equalize predictions among counterfactuals, generated by changing the SGTs. Our method evaluates the similarity in sentence likelihoods (via pretrained language models) among counterfactuals, to treat SGTs equally only within interchangeable contexts. By applying logit pairing to equalize outcomes on the restricted set of counterfactuals for each instance, we improve fairness metrics while preserving model performance on hate speech detection.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
This research was sponsored in part by NSF CA-REER BCS-1846531 to Morteza Dehghani.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
wu-etal-2018-phrase
https://aclanthology.org/D18-1408
Phrase-level Self-Attention Networks for Universal Sentence Encoding
Universal sentence encoding is a hot topic in recent NLP research. Attention mechanism has been an integral part in many sentence encoding models, allowing the models to capture context dependencies regardless of the distance between elements in the sequence. Fully attention-based models have recently attracted enormous interest due to their highly parallelizable computation and significantly less training time. However, the memory consumption of their models grows quadratically with sentence length, and the syntactic information is neglected. To this end, we propose Phrase-level Self-Attention Networks (PSAN) that perform self-attention across words inside a phrase to capture context dependencies at the phrase level, and use the gated memory updating mechanism to refine each word's representation hierarchically with longer-term context dependencies captured in a larger phrase. As a result, the memory consumption can be reduced because the self-attention is performed at the phrase level instead of the sentence level. At the same time, syntactic information can be easily integrated in the model. Experiment results show that PSAN can achieve the state-of-the-art transfer performance across a plethora of NLP tasks including sentence classification, natural language inference and sentence textual similarity.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wei-gulla-2010-sentiment
https://aclanthology.org/P10-1042
Sentiment Learning on Product Reviews via Sentiment Ontology Tree
Existing works on sentiment analysis on product reviews suffer from the following limitations: (1) The knowledge of hierarchical relationships of product attributes is not fully utilized. (2) Reviews or sentences mentioning several attributes associated with complicated sentiments are not dealt with very well. In this paper, we propose a novel HL-SOT approach to labeling a product's attributes and their associated sentiments in product reviews by a Hierarchical Learning (HL) process with a defined Sentiment Ontology Tree (SOT). The empirical analysis against a human-labeled data set demonstrates promising and reasonable performance of the proposed HL-SOT approach. While this paper is mainly on sentiment analysis on reviews of one product, our proposed HL-SOT approach is easily generalized to labeling a mix of reviews of more than one product.
false
[]
[]
null
null
null
The authors would like to thank the anonymous reviewers for many helpful comments on the manuscript. This work is funded by the Research Council of Norway under the VERDIKT research programme (Project No.: 183337).
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bloem-etal-2019-modeling
https://aclanthology.org/W19-4733
Modeling a Historical Variety of a Low-Resource Language: Language Contact Effects in the Verbal Cluster of Early-Modern Frisian
Certain phenomena of interest to linguists mainly occur in low-resource languages, such as contact-induced language change. We show that it is possible to study contact-induced language change computationally in a historical variety of a low-resource language, Early-Modern Frisian, by creating a model using features that were established to be relevant in a closely related language, modern Dutch. This allows us to test two hypotheses on two types of language contact that may have taken place between Frisian and Dutch during this time. Our model shows that Frisian verb cluster word orders are associated with different context features than Dutch verb orders, supporting the 'learned borrowing' hypothesis.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dahl-mccord-1983-treating
https://aclanthology.org/J83-2002
Treating Coordination in Logic Grammars
Logic grammars are grammars expressible in predicate logic. Implemented in the programming language Prolog, logic grammar systems have proved to be a good basis for natural language processing. One of the most difficult constructions for natural language grammars to treat is coordination (construction with conjunctions like 'and'). This paper describes a logic grammar formalism, modifier structure grammars (MSGs), together with an interpreter written in Prolog, which can handle coordination (and other natural language constructions) in a reasonable and general way. The system produces both syntactic analyses and logical forms, and problems of scoping for coordination and quantifiers are dealt with. The MSG formalism seems of interest in its own right (perhaps even outside natural language processing) because the notions of syntactic structure and semantic interpretation are more constrained than in many previous systems (made more implicit in the formalism itself), so that less burden is put on the grammar writer.
false
[]
[]
null
null
null
null
1983
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
rangarajan-sridhar-etal-2013-segmentation
https://aclanthology.org/N13-1023
Segmentation Strategies for Streaming Speech Translation
The study presented in this work is a first effort at real-time speech translation of TED talks, a compendium of public talks with different speakers addressing a variety of topics. We address the goal of achieving a system that balances translation accuracy and latency. In order to improve ASR performance for our diverse data set, adaptation techniques such as constrained model adaptation and vocal tract length normalization are found to be useful. In order to improve machine translation (MT) performance, techniques that could be employed in real-time such as monotonic and partial translation retention are found to be of use. We also experiment with inserting text segmenters of various types between ASR and MT in a series of real-time translation experiments. Among other results, our experiments demonstrate that a good segmentation is useful, and a novel conjunction-based segmentation strategy improves translation quality nearly as much as other strategies such as comma-based segmentation. It was also found to be important to synchronize various pipeline components in order to minimize latency.
false
[]
[]
null
null
null
We would like to thank Simon Byers for his help with organizing the TED talks data.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gupta-etal-2015-dissecting
https://aclanthology.org/S15-1017
Dissecting the Practical Lexical Function Model for Compositional Distributional Semantics
The Practical Lexical Function model (PLF) is a recently proposed compositional distributional semantic model which provides an elegant account of composition, striking a balance between expressiveness and robustness and performing at the state-of-the-art. In this paper, we identify an inconsistency in PLF between the objective function at training and the prediction at testing which leads to an overcounting of the predicate's contribution to the meaning of the phrase. We investigate two possible solutions of which one (the exclusion of simple lexical vector at test time) improves performance significantly on two out of the three composition datasets.
false
[]
[]
null
null
null
We gratefully acknowledge funding of our research by the DFG (SFB 732, Project D10).
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yoon-1996-danger
https://aclanthology.org/Y96-1045
Danger of Partial Universality : In Two Uses of In-adverbials
Not all empirical facts are treated equally in science; in theorizing, some are weighed more heavily than others. It is often unavoidable and it should not necessarily be avoided. We will present a case where a semantic theory is influenced more by a seemingly universal fact, but in fact accidental among related languages, than by a few significant exceptions in the language in question, thereby failing to capture a meaningful generalization. In particular, we argue that in-adverbials are not a test for telic predicates, as they are popularly claimed to be; we will show that this claim is triggered by the accidental fact that there are two homomorphic in-adverbials in English and their cognates in other languages.
false
[]
[]
null
null
null
null
1996
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wachsmuth-etal-2017-building
https://aclanthology.org/W17-5106
Building an Argument Search Engine for the Web
Computational argumentation is expected to play a critical role in the future of web search. To make this happen, many search-related questions must be revisited, such as how people query for arguments, how to mine arguments from the web, or how to rank them. In this paper, we develop an argument search framework for studying these and further questions. The framework allows for the composition of approaches to acquiring, mining, assessing, indexing, querying, retrieving, ranking, and presenting arguments while relying on standard infrastructure and interfaces. Based on the framework, we build a prototype search engine, called args, that relies on an initial, freely accessible index of nearly 300k arguments crawled from reliable web resources. The framework and the argument search engine are intended as an environment for collaborative research on computational argumentation and its practical evaluation.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
williams-liden-2017-demonstration
https://aclanthology.org/W17-5511
Demonstration of interactive teaching for end-to-end dialog control with hybrid code networks
This is a demonstration of interactive teaching for practical end-to-end dialog systems driven by a recurrent neural network. In this approach, a developer teaches the network by interacting with the system and providing on-the-spot corrections. Once a system is deployed, a developer can also correct mistakes in logged dialogs. This demonstration shows both of these teaching methods applied to dialog systems in three domains: pizza ordering, restaurant information, and weather forecasts.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
abb-etal-1993-incremental
https://aclanthology.org/E93-1002
The Incremental Generation of Passive Sentences
This paper sketches some basic features of the SYNPHONICS account of the computational modelling of incremental language production with the example of the generation of passive sentences. The SYNPHONICS approach aims at linking psycholinguistic insights into the nature of the human natural language production process with well-established assumptions in theoretical and computational linguistics concerning the representation and processing of grammatical knowledge. We differentiate between
false
[]
[]
null
null
null
null
1993
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kuncham-etal-2015-statistical
https://aclanthology.org/R15-1042
Statistical Sandhi Splitter and its Effect on NLP Applications
This paper revisits the work of (Kuncham et al., 2015) which developed a statistical sandhi splitter (SSS) for agglutinative languages that was tested for Telugu and Malayalam languages. Handling compound words is a major challenge for Natural Language Processing (NLP) applications for agglutinative languages. Hence, in this paper we concentrate on testing the effect of SSS on the NLP applications like Machine Translation, Dialogue System and Anaphora Resolution and show that the accuracy of these applications is consistently improved by using SSS. We shall also discuss in detail the performance of SSS on these applications.
false
[]
[]
null
null
null
null
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lux-etal-2020-truth
https://aclanthology.org/2020.eval4nlp-1.1
Truth or Error? Towards systematic analysis of factual errors in abstractive summaries
This paper presents a typology of errors produced by automatic summarization systems. The typology was created by manually analyzing the output of four recent neural summarization systems. Our work is motivated by the growing awareness of the need for better summary evaluation methods that go beyond conventional overlap-based metrics. Our typology is structured into two dimensions. First, the Mapping Dimension describes surface-level errors and provides insight into word-sequence transformation issues. Second, the Meaning Dimension describes issues related to interpretation and provides insight into breakdowns in truth, i.e., factual faithfulness to the original text. Comparative analysis revealed that two neural summarization systems leveraging pretrained models have an advantage in decreasing grammaticality errors, but not necessarily factual errors. We also discuss the importance of ensuring that summary length and abstractiveness do not interfere with evaluating summary quality.
false
[]
[]
null
null
null
Acknowledgments: We thank FD Mediagroep for conducting the Smart Journalism project which allowed us to perform this research.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
islam-etal-2012-text
https://aclanthology.org/Y12-1059
Text Readability Classification of Textbooks of a Low-Resource Language
There are many languages considered to be low-density languages, either because the population speaking the language is not very large, or because insufficient digitized text material is available in the language even though millions of people speak the language. Bangla is one of the latter ones. Readability classification is an important Natural Language Processing (NLP) application that can be used to judge the quality of documents and assist writers to locate possible problems. This paper presents a readability classifier of Bangla textbook documents based on information-theoretic and lexical features. The features proposed in this paper result in an F-score that is 50% higher than that for traditional readability formulas.
true
[]
[]
Quality Education
null
null
We would like to thank Mr. Munir Hasan from the Bangladesh Open Source Network (BdOSN) and Mr. Murshid Aktar from the National Curriculum & Textbook Board Authority, Bangladesh for their help on corpus collection. We would also like to thank Andy Lücking, Paul Warner and Armin Hoenen for their fruitful suggestions and comments. Finally, we thank three anonymous reviewers. This work is funded by the LOEWE Digital-Humanities project in the Goethe-Universität Frankfurt.
2012
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false