Dataset schema:

column           type            details
ID               stringlengths   11-54
url              stringlengths   33-64
title            stringlengths   11-184
abstract         stringlengths   17-3.87k
label_nlp4sg     bool            2 classes
task             sequence
method           sequence
goal1            stringclasses   9 values
goal2            stringclasses   9 values
goal3            stringclasses   1 value
acknowledgments  stringlengths   28-1.28k
year             stringlengths   4-4
sdg1             bool            1 class
sdg2             bool            1 class
sdg3             bool            2 classes
sdg4             bool            2 classes
sdg5             bool            2 classes
sdg6             bool            1 class
sdg7             bool            1 class
sdg8             bool            2 classes
sdg9             bool            2 classes
sdg10            bool            2 classes
sdg11            bool            2 classes
sdg12            bool            1 class
sdg13            bool            2 classes
sdg14            bool            1 class
sdg15            bool            1 class
sdg16            bool            2 classes
sdg17            bool            2 classes
babych-etal-2007-translating
https://aclanthology.org/2007.mtsummit-papers.5.pdf
Translating from under-resourced languages: comparing direct transfer against pivot translation
In this paper we compare two methods for translating into English from languages for which few MT resources have been developed (e.g. Ukrainian). The first method involves direct transfer using an MT system that is available for this language pair. The second method involves translation via a cognate language which has more translation resources and one or more advanced translation systems (e.g. Russian for Slavonic languages). The comparison shows that it is possible to achieve better translation quality via the pivot language, leveraging the advanced dictionaries and grammars available for it and the lexical and syntactic similarities between the source and pivot languages. The results suggest that MT development efforts can be efficiently reused for families of closely related languages, and that investing in MT for closely related languages can be more productive than developing systems from scratch for new translation directions. We also suggest a method for comparing the performance of the direct and pivot translation routes via automated evaluation of segments with varying translation difficulty.
false
[]
[]
null
null
null
null
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
liu-2010-detecting
https://aclanthology.org/W10-0503.pdf
Detecting Word Misuse in Chinese
Social Network Services (SNS) and personal blogs have become the most popular platforms for online communication and information sharing. However, because most modern computer keyboards are Latin-based, speakers of Asian languages (such as Chinese) have to rely on an input system that accepts a Romanisation of the characters and converts it into characters or words in that language. In Chinese this form of Romanisation (usually called Pinyin) is highly ambiguous, and word misuse often occurs because the user chooses a wrong candidate or deliberately substitutes the word with another character string that has an identical Romanisation, to convey certain semantics or to achieve a sarcastic effect. In this paper we aim to develop a system that can automatically identify such word misuse and suggest the correct word to be used.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jiang-etal-2020-know
https://aclanthology.org/2020.tacl-1.28.pdf
How Can We Know What Language Models Know?
Recent work has presented intriguing results examining the knowledge contained in language models (LMs) by having the LM fill in the blanks of prompts such as ''Obama is a ___ by profession''. These prompts are usually manually created, and quite possibly sub-optimal; another prompt such as ''Obama worked as a ___'' may result in more accurately predicting the correct profession. Because of this, given an inappropriate prompt, we might fail to retrieve facts that the LM does know, and thus any given prompt only provides a lower-bound estimate of the knowledge contained in an LM. In this paper, we attempt to more accurately estimate the knowledge contained in LMs by automatically discovering better prompts to use in this querying process. Specifically, we propose mining-based and paraphrasing-based methods to automatically generate high-quality and diverse prompts, as well as ensemble methods to combine answers from different prompts. Extensive experiments on the LAMA benchmark for extracting relational knowledge from LMs demonstrate that our methods can improve accuracy from 31.1% to 39.6%, providing a tighter lower bound on what LMs know.
false
[]
[]
null
null
null
This work was supported by a gift from Bosch Research and NSF award no. 1815287. We would like to thank Paul Michel, Hiroaki Hayashi, Pengcheng Yin, and Shuyan Zhou for their insightful comments and suggestions.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
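The prompt-ensembling idea in the jiang-etal-2020-know abstract above can be made concrete with a minimal, hedged sketch. The prompt templates, the toy scoring function, and the uniform averaging below are illustrative assumptions standing in for the paper's mined/paraphrased prompt sets and a real LM's [MASK] probabilities:

```python
from collections import defaultdict

def ensemble_predict(subject, prompts, score_fn, candidates):
    """Average each candidate's score over all prompts; return the best."""
    totals = defaultdict(float)
    for template in prompts:
        query = template.format(subject=subject)
        for cand in candidates:
            totals[cand] += score_fn(query, cand) / len(prompts)
    return max(totals, key=totals.get)

prompts = [
    "{subject} is a [MASK] by profession.",
    "{subject} worked as a [MASK].",
]
# Toy scorer; a real implementation would read the LM's [MASK] distribution.
toy_scores = {("Obama worked as a [MASK].", "politician"): 0.9}
score = lambda q, c: toy_scores.get((q, c), 0.1)
print(ensemble_predict("Obama", prompts, score, ["politician", "lawyer"]))
```

Here the second prompt is the informative one, so the ensemble still recovers "politician" even though the first prompt is uninformative, which is the intuition behind combining answers from diverse prompts.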
joshi-etal-2013-making
https://aclanthology.org/I13-2006.pdf
Making Headlines in Hindi: Automatic English to Hindi News Headline Translation
News headlines exhibit stylistic peculiarities. The goal of our translation engine 'Making Headlines in Hindi' is to achieve automatic translation of English news headlines to Hindi while retaining the Hindi news headline styles. There are two central modules of our engine: the modified translation unit based on Moses and a co-occurrence-based post-processing unit. The modified translation unit provides two machine translation (MT) models: phrase-based and factor-based (both using in-domain data). In addition, a co-occurrence-based post-processing option may be turned on by a user. Our evaluation shows that this engine handles some linguistic phenomena observed in Hindi news headlines.
false
[]
[]
null
null
null
null
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tolmachev-etal-2019-shrinking
https://aclanthology.org/N19-1281.pdf
Shrinking Japanese Morphological Analyzers With Neural Networks and Semi-supervised Learning
For languages without natural word boundaries, like Japanese and Chinese, word segmentation is a prerequisite for downstream analysis. For Japanese, segmentation is often done jointly with part-of-speech tagging, and this process is usually referred to as morphological analysis. Morphological analyzers are trained on data hand-annotated with segmentation boundaries and part-of-speech tags. A segmentation dictionary or character n-gram information is also provided as additional input to the model. Incorporating this extra information makes models large. Modern neural morphological analyzers can consume gigabytes of memory. We propose a compact alternative to these cumbersome approaches which does not rely on any externally provided n-gram or word representations. The model uses only unigram character embeddings, encodes them using either a stacked bi-LSTM or a self-attention network, and independently infers both segmentation and part-of-speech information. The model is trained in an end-to-end and semi-supervised fashion, on labels produced by a state-of-the-art analyzer. We demonstrate that the proposed technique rivals the performance of a previous dictionary-based state-of-the-art approach and can even surpass it when training with the combination of human-annotated and automatically-annotated data. Our model itself is significantly smaller than the dictionary-based one: it uses less than 15 megabytes of space.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
khayrallah-etal-2018-jhu
https://aclanthology.org/W18-6479.pdf
The JHU Parallel Corpus Filtering Systems for WMT 2018
This work describes our submission to the WMT18 Parallel Corpus Filtering shared task. We use a slightly modified version of the Zipporah Corpus Filtering toolkit (Xu and Koehn, 2017), which computes an adequacy score and a fluency score on a sentence pair, and use a weighted sum of the scores as the selection criterion. This work differs from Zipporah in that we experiment with using the noisy corpus to be filtered to compute the combination weights, thus avoiding the generation of synthetic data as in standard Zipporah.
false
[]
[]
null
null
null
This work was in part supported by the IARPA MATERIAL project and a Google Faculty Research Award.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
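A minimal sketch of the selection criterion described in the khayrallah-etal-2018-jhu abstract above: rank sentence pairs by a weighted sum of an adequacy score and a fluency score, then keep the best-scoring ones. The example pairs, their scores, and the 0.5/0.5 weights are assumptions; the submission learns the combination weights from the noisy corpus itself:

```python
def select_pairs(pairs, w_adequacy=0.5, w_fluency=0.5, keep_ratio=0.5):
    """Keep the top fraction of pairs by weighted adequacy+fluency score."""
    scored = sorted(
        pairs,
        key=lambda p: w_adequacy * p["adequacy"] + w_fluency * p["fluency"],
        reverse=True,
    )
    return scored[: max(1, int(len(scored) * keep_ratio))]

corpus = [
    {"src": "guten Morgen", "tgt": "good morning", "adequacy": 0.9, "fluency": 0.8},
    {"src": "guten Morgen", "tgt": "table lamp",   "adequacy": 0.1, "fluency": 0.7},
]
print(select_pairs(corpus))  # keeps the adequate pair, drops the mistranslation
```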
ashihara-etal-2019-contextualized
https://aclanthology.org/D19-5552.pdf
Contextualized context2vec
Lexical substitution ranks substitution candidates from the viewpoint of paraphrasability for a target word in a given sentence. There are two major approaches for lexical substitution: (1) generating contextualized word embeddings by assigning multiple embeddings to one word and (2) generating context embeddings using the sentence. Herein we propose a method that combines these two approaches to contextualize word embeddings for lexical substitution. Experiments demonstrate that our method outperforms the current state-of-the-art method. We also create CEFR-LP, a new evaluation dataset for the lexical substitution task. It has a wider coverage of substitution candidates than previous datasets and assigns English proficiency levels to all target words and substitution candidates.
false
[]
[]
null
null
null
We thank Professor Christopher G. Haswell for his valuable comments and discussions. We also thank the anonymous reviewers for their valuable comments. This research was supported by the KDDI Foundation.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gonzalez-rubio-etal-2010-saturnalia
http://www.lrec-conf.org/proceedings/lrec2010/pdf/541_Paper.pdf
Saturnalia: A Latin-Catalan Parallel Corpus for Statistical MT
Currently, a great effort is being carried out in the digitalisation of large historical document collections for preservation purposes. The documents in these collections are usually written in ancient languages, such as Latin or Greek, which limits the access of the general public to their content due to the language barrier. Therefore, digital libraries aim not only at storing raw images of digitalised documents, but also at annotating them with their corresponding text transcriptions and translations into modern languages. Unfortunately, ancient languages have scarce electronic resources at their disposal to be exploited by natural language processing techniques. This paper describes the compilation process of a novel Latin-Catalan parallel corpus as a new task for statistical machine translation (SMT). Preliminary experimental results are also reported using a state-of-the-art phrase-based SMT system. The results presented in this work reveal the complexity of the task and its challenging but interesting nature for future development.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
li-etal-2013-multi
https://aclanthology.org/W13-3101.pdf
Multi-document multilingual summarization corpus preparation, Part 1: Arabic, English, Greek, Chinese, Romanian
This document overviews the strategy, effort and aftermath of the MultiLing 2013 multilingual summarization data collection. We describe how the Data Contributors of MultiLing collected and generated a multilingual multi-document summarization corpus on 10 different languages: Arabic, Chinese, Czech, English, French, Greek, Hebrew, Hindi, Romanian and Spanish. We discuss the rationale behind the main decisions of the collection, the methodology used to generate the multilingual corpus, as well as challenges and problems faced per language. This paper overviews the work on Arabic, Chinese, English, Greek, and Romanian languages. A second part, covering the remaining languages, is available as a distinct paper in the MultiLing 2013 proceedings.
false
[]
[]
null
null
null
null
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
liu-etal-2020-hiring
https://aclanthology.org/2020.acl-main.281.pdf
Hiring Now: A Skill-Aware Multi-Attention Model for Job Posting Generation
Writing a good job posting is a critical step in the recruiting process, but the task is often more difficult than many people think. It is challenging to specify the level of education, experience, and relevant skills based on the company information and job description. To this end, we propose the novel task of Job Posting Generation (JPG), cast as a conditional text generation problem that generates job requirements according to job descriptions. To deal with this task, we devise a data-driven global Skill-Aware Multi-Attention generation model, named SAMA. Specifically, to model the complex mapping relationships between input and output, we design a hierarchical decoder in which we first label the job description with multiple skills and then generate a complete text guided by the skill labels. At the same time, to exploit prior knowledge about the skills, we further construct a skill knowledge graph to capture the global prior knowledge of skills and refine the generated results. The proposed approach is evaluated on real-world job posting data. Experimental results clearly demonstrate the effectiveness of the proposed method.
true
[]
[]
Decent Work and Economic Growth
null
null
null
2020
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
zeng-etal-2019-neural
https://aclanthology.org/D19-1470.pdf
Neural Conversation Recommendation with Online Interaction Modeling
The prevalent use of social media leads to a vast amount of online conversations being produced on a daily basis. It presents a concrete challenge for individuals to better discover and engage in social media discussions. In this paper, we present a novel framework to automatically recommend conversations to users based on their prior conversation behaviors. Built on neural collaborative filtering, our model explores deep semantic features that measure how a user's preferences match an ongoing conversation's context. Furthermore, to identify salient characteristics from interleaving user interactions, our model incorporates graph-structured networks, where both replying relations and temporal features are encoded as conversation context. Experimental results on two large-scale datasets collected from Twitter and Reddit show that our model yields better performance than previous state-of-the-art models, which only utilize lexical features and ignore past user interactions in the conversations.
false
[]
[]
null
null
null
This work is partially supported by the following HK grants: RGC-GRF (14232816, 14209416, 14204118, 3133237), NSFC (61877020) & ITF (ITS/335/18). Lu Wang is supported in part by National Science Foundation through Grants IIS-1566382 and IIS-1813341. We thank the three anonymous reviewers for the insightful suggestions on various aspects of this work.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hope-etal-2021-extracting
https://aclanthology.org/2021.naacl-main.355.pdf
Extracting a Knowledge Base of Mechanisms from COVID-19 Papers
The COVID-19 pandemic has spawned a diverse body of scientific literature that is challenging to navigate, stimulating interest in automated tools to help find useful knowledge. We pursue the construction of a knowledge base (KB) of mechanisms, a fundamental concept across the sciences which encompasses activities, functions and causal relations, ranging from cellular processes to economic impacts. We extract this information from the natural language of scientific papers by developing a broad, unified schema that strikes a balance between relevance and breadth. We annotate a dataset of mechanisms with our schema and train a model to extract mechanism relations from papers. Our experiments demonstrate the utility of our KB in supporting interdisciplinary scientific search over COVID-19 literature, outperforming the prominent PubMed search in a study with clinical experts. Our search engine, dataset and code are publicly available at https://covidmechanisms.apps.allenai.org/. * Equal contribution. [Figure: example mechanism relations retrieved from CORD-19 papers for a query (Ent1: deep learning, Ent2: drugs).]
true
[]
[]
Good Health and Well-Being
Industry, Innovation and Infrastructure
null
We like to acknowledge a grant from ONR N00014-18-1-2826. Authors would also like to thank anonymous reviewers, members of AI2, UW-NLP and the H2Lab at The University of Washington for their valuable feedback and comments.
2021
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
paris-vander-linden-1996-building
https://aclanthology.org/C96-2124.pdf
Building Knowledge Bases for the Generation of Software Documentation
Automated text generation requires an underlying knowledge base from which to generate, which is often difficult to produce. Software documentation is one domain in which parts of this knowledge base may be derived automatically. In this paper, we describe DRAFTER, an authoring support tool for generating user-centred software documentation, and in particular we describe how parts of its required knowledge base can be obtained automatically.
true
[]
[]
Industry, Innovation and Infrastructure
null
null
null
1996
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
han-etal-2019-opennre
https://aclanthology.org/D19-3029.pdf
OpenNRE: An Open and Extensible Toolkit for Neural Relation Extraction
OpenNRE is an open-source and extensible toolkit that provides a unified framework to implement neural models for relation extraction (RE). Specifically, by implementing typical RE methods, OpenNRE not only allows developers to train custom models to extract structured relational facts from plain text but also supports quick model validation for researchers. Besides, OpenNRE provides various functional RE modules based on both TensorFlow and PyTorch to maintain sufficient modularity and extensibility, making it easy to incorporate new models into the framework. Beyond the toolkit, we also release an online system that performs real-time extraction without any training or deployment. Meanwhile, the online system can extract facts in various scenarios as well as align the extracted facts to Wikidata, which may benefit various downstream knowledge-driven applications (e.g., information retrieval and question answering). More details of the toolkit and online system can be obtained from http://github.com/thunlp/OpenNRE.
false
[]
[]
null
null
null
This work is supported by the National Key Research and Development Program of China (No.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
albogamy-ramsay-2016-fast
https://aclanthology.org/L16-1238.pdf
Fast and Robust POS tagger for Arabic Tweets Using Agreement-based Bootstrapping
Part-of-Speech (POS) tagging is a key step in many NLP algorithms. However, tweets are difficult to POS tag because they are short, are not always written with formal grammar and proper spelling, and often use abbreviations to overcome their restricted length. Arabic tweets also show a further range of linguistic phenomena, such as the use of different dialects, romanised Arabic and borrowed foreign words. In this paper, we present an evaluation and a detailed error analysis of state-of-the-art POS taggers for Arabic when applied to Arabic tweets. On the basis of this analysis, we combine normalisation and external knowledge to handle the domain noisiness and exploit bootstrapping to construct extra training data in order to improve POS tagging for Arabic tweets. Our results show significant improvements over the performance of a number of well-known taggers for Arabic.
false
[]
[]
null
null
null
The authors would like to thank the anonymous reviewers for their encouraging feedback and insights. Fahad would also like to thank King Saud University for their financial support. Allan Ramsay's contribution to this work was partially supported by Qatar National Research Foundation (grant NPRP-7-1334-6 -039).
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
danescu-niculescu-mizil-etal-2009-without
https://aclanthology.org/N09-1016.pdf
Without a 'doubt'? Unsupervised Discovery of Downward-Entailing Operators
An important part of textual inference is making deductions involving monotonicity, that is, determining whether a given assertion entails restrictions or relaxations of that assertion. For instance, the statement 'We know the epidemic spread quickly' does not entail 'We know the epidemic spread quickly via fleas', but 'We doubt the epidemic spread quickly' entails 'We doubt the epidemic spread quickly via fleas'. Here, we present the first algorithm for the challenging lexical-semantics problem of learning linguistic constructions that, like 'doubt', are downward entailing (DE). Our algorithm is unsupervised, resource-lean, and effective, accurately recovering many DE operators that are missing from the hand-constructed lists that textual-inference systems currently use.
false
[]
[]
null
null
null
We thank Roy Bar-Haim, Cleo Condoravdi, and Bill MacCartney for sharing their systems' lists and information about their work with us; Mats Rooth for helpful conversations; Alex Niculescu-Mizil for technical assistance; and Eugene Charniak for reassuring remarks. We also thank Marisa Ferrara Boston, Claire Cardie, Zhong Chen, Yejin Choi, Effi Georgala, Myle Ott, Stephen Purpura, and Ainur Yessenalina at Cornell University, the UT-Austin NLP group, Roy Bar-Haim, Bill MacCartney, and the anonymous reviewers for their comments on this paper. This paper is based upon work supported in part by DHS grant N0014-07-1-0152, National Science Foundation grant No. BCS-0537606, a Yahoo! Research Alliance gift, a CU Provost's Award for Distinguished Scholarship, and a CU Institute for the Social Sciences Faculty Fellowship. Any opinions, findings, and conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views or official policies, either expressed or implied, of any sponsoring institutions, the U.S. government, or any other entity.
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
acl-1993-association
https://aclanthology.org/P93-1000.pdf
31st Annual Meeting of the Association for Computational Linguistics
This volume contains the papers prepared for the 31st Annual Meeting of the Association for Computational Linguistics, held 22-26 June 1993 at The Ohio State University in Columbus, Ohio. The cluster of papers in the final section stems from the student session, featured at the meeting for the 3rd successive year and testifying to the vigor of this emerging tradition. The number and quality of submitted papers was again gratifying, and all authors deserve our collective plaudits for the efforts they invested despite the well-known risks of submitting to a highly selective conference. It was their efforts that once again ensured a Meeting (and Proceedings) reflecting the highest standards in computational linguistics, offering a tour of some of the most significant recent advances and most lively research frontiers. Special thanks go to our invited speakers, Wolfgang Wahlster, Geoff Nunberg and Barbara Partee, for contributing their insights and panache to the conference; to Philip Cohen for concocting and coordinating a varied and relevant tutorial program, and to
false
[]
[]
null
null
null
We thank the reviewers for providing helpful, detailed reviews of the submissions, and for completing the reviews promptly. The careful thought that went into their review comments was obvious and impressive, and we are sure the student authors found the reviews beneficial. The Program Committee included the members of the Planning Committee and the following non-student members: Mary Dal-
1993
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hakala-etal-2013-evex
https://aclanthology.org/W13-2004.pdf
EVEX in ST'13: Application of a large-scale text mining resource to event extraction and network construction
During the past few years, several novel text mining algorithms have been developed in the context of the BioNLP Shared Tasks on Event Extraction. These algorithms typically aim at extracting biomolecular interactions from text by inspecting only the context of one sentence. However, when humans interpret biomolecular research articles, they usually build upon extensive background knowledge of their favorite genes and pathways. To make such world knowledge available to a text mining algorithm, it could first be applied to all available literature to subsequently make a more informed decision on which predictions are consistent with the currently known data. In this paper, we introduce our participation in the latest Shared Task using the large-scale text mining resource EVEX, which we previously implemented using state-of-the-art algorithms and applied to the whole of PubMed and PubMed Central. We participated in the Genia Event Extraction (GE) and Gene Regulation Network (GRN) tasks, ranking first in the former and fifth in the latter.
false
[]
[]
null
null
null
Computational resources were provided by CSC IT Center for Science Ltd., Espoo, Finland. The work of KH and FG was supported by the Academy of Finland, and of SVL by the Research Foundation Flanders (FWO). YVdP and SVL acknowledge the support from Ghent University (Multidisciplinary Research Partnership Bioinformatics: from nucleotides to networks).
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
banerjee-etal-2021-scrambled
https://aclanthology.org/2021.mtsummit-research.11.pdf
Scrambled Translation Problem: A Problem of Denoising UNMT
In this paper, we identify an interesting kind of error in the output of Unsupervised Neural Machine Translation (UNMT) systems like Undreamt. We refer to this error type as the Scrambled Translation problem. We observe that UNMT models which use word shuffle noise (as in the case of Undreamt) can generate correct words, but fail to stitch them together to form phrases. As a result, words of the translated sentence look scrambled, resulting in decreased BLEU. We hypothesise that the reason behind the scrambled translation problem is the 'shuffling noise' which is introduced in every input sentence as a denoising strategy. To test our hypothesis, we experiment by retraining UNMT models with a simple retraining strategy. We stop the training of the denoising UNMT model after a pre-decided number of iterations and resume the training for the remaining iterations (this number is also pre-decided) using the original sentence as input without adding any noise. Our proposed solution achieves significant performance improvements over UNMT models trained conventionally. We demonstrate these performance gains on four language pairs, viz., English-French, English-German, English-Spanish, and Hindi-Punjabi. Our qualitative and quantitative analysis shows that the retraining strategy helps achieve better alignment, as observed by attention heatmaps, and better phrasal translation, leading to statistically significant improvements in BLEU scores.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jurgens-etal-2014-twitter
https://aclanthology.org/W14-3906.pdf
Twitter Users #CodeSwitch Hashtags! #MoltoImportante #wow
When code switching, individuals incorporate elements of multiple languages into the same utterance. While code switching has been studied extensively in formal and spoken contexts, its behavior and prevalence remain unexamined in many newer forms of electronic communication. The present study examines code switching in Twitter, focusing on instances where an author writes a post in one language and then includes a hashtag in a second language. In the first experiment, we perform a large-scale analysis of the languages used in millions of posts to show that authors readily incorporate hashtags from other languages, and in a manual analysis of a subset of the hashtags reveal prolific code switching, with code switching occurring for some hashtags in over twenty languages. In the second experiment, French and English posts from three bilingual cities are analyzed for their code-switching frequency and content.
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ilinykh-dobnik-2022-attention
https://aclanthology.org/2022.findings-acl.320.pdf
Attention as Grounding: Exploring Textual and Cross-Modal Attention on Entities and Relations in Language-and-Vision Transformer
We explore how a multi-modal transformer trained for generation of longer image descriptions learns syntactic and semantic representations about entities and relations grounded in objects at the level of masked self-attention (text generation) and cross-modal attention (information fusion). We observe that cross-attention learns the visual grounding of noun phrases into objects and high-level semantic information about spatial relations, while text-to-text attention captures low-level syntactic knowledge between words. We conclude that language models in a multi-modal task learn different semantic information about objects and relations cross-modally and uni-modally (text-only).
false
[]
[]
null
null
null
The research reported in this paper was supported by a grant from the Swedish Research Council (VR project 2014-39) for the establishment of the Centre for Linguistic Theory and Studies in Probability (CLASP) at the University of Gothenburg.
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hovy-2010-distributional
https://aclanthology.org/W10-3401.pdf
Distributional Semantics and the Lexicon
The lexicons used in computational linguistics systems contain morphological, syntactic, and occasionally also some semantic information (such as definitions, pointers to an ontology, verb frame filler preferences, etc.). But the human cognitive lexicon contains a great deal more, crucially, expectations about how a word tends to combine with others: not just general information-extraction-like patterns, but specific instantial expectations. Such information is very useful when it comes to listening in bad aural conditions and reading texts in which background information is taken for granted; without such specific expectation, one would be hard-pressed (and computers are completely unable) to form coherent and richly connected multi-sentence interpretations. Over the past few years, NLP work has increasingly treated topic signature word distributions (also called 'context vectors', 'topic models', etc.) as a de facto replacement for semantics. Whether the task is wordsense disambiguation, certain forms of textual entailment, information extraction, paraphrase learning, and so on, it turns out to be very useful to consider a word(sense) as being defined by the distribution of word(senses) that regularly accompany it (in the classic words of Firth, "you shall know a word by the company it keeps"). And this is true not only for individual wordsenses, but also for larger units such as topics: the product of LDA and similar topic characterization engines is similar.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
osenova-etal-2010-exploring
http://www.lrec-conf.org/proceedings/lrec2010/pdf/721_Paper.pdf
Exploring Co-Reference Chains for Concept Annotation of Domain Texts
The paper explores co-reference chains as a way of improving the density of concept annotation over domain texts. The idea extends the authors' previous work on relating the ontology to text terms in two domains, IT and textile; here the IT domain is used. The challenge is to enhance relations among concepts instead of text entities, the latter being pursued in most works. Our ultimate goal is to exploit these additional chains for concept disambiguation as well as sparseness resolution at the concept level. First, a gold standard was prepared with manually connected links among concepts, anaphoric pronouns and contextual equivalents. This step was necessary not only for test purposes, but also for better orientation in the co-referent types and distribution. Then, two automatic systems were tested on the gold standard. Note that these systems were not designed specially for concept chaining. The conclusion is that state-of-the-art co-reference resolution systems might address the concept sparseness problem, but not so much the concept disambiguation task. For the latter, word-sense disambiguation systems have to be integrated.
false
[]
[]
null
null
null
The work reported here is done within the context of the EU project -Language Technology for Lifelong Learning (LTfLL). We would also like to thank the three anonymous reviewers for their valuable remarks as specialists and readers.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nallapati-etal-2016-abstractive
https://aclanthology.org/K16-1028.pdf
Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond
In this work, we model abstractive text summarization using Attentional Encoder-Decoder Recurrent Neural Networks, and show that they achieve state-of-the-art performance on two different corpora. We propose several novel models that address critical problems in summarization that are not adequately modeled by the basic architecture, such as modeling keywords, capturing the hierarchy of sentence-to-word structure, and emitting words that are rare or unseen at training time. Our work shows that many of our proposed models contribute to further improvement in performance. We also propose a new dataset consisting of multi-sentence summaries, and establish performance benchmarks for further research.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
reinhard-gibbon-1991-prosodic
https://aclanthology.org/E91-1023.pdf
Prosodic Inheritance and Morphological Generalisations
Prosodic Inheritance (PI) morphology provides uniform treatment of both concatenative and non-concatenative morphological and phonological generalisations using default inheritance. Models of an extensive range of German Umlaut and Arabic intercalation facts, implemented in DATR, show that the PI approach also covers 'hard cases' more homogeneously and more extensively than previous computational treatments.
false
[]
[]
null
null
null
null
1991
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
walker-etal-2012-annotated
http://www.lrec-conf.org/proceedings/lrec2012/pdf/1114_Paper.pdf
An Annotated Corpus of Film Dialogue for Learning and Characterizing Character Style
Interactive story systems often involve dialogue with virtual dramatic characters. However, to date most character dialogue is written by hand. One way to ease the authoring process is to (semi-)automatically generate dialogue based on film characters. We extract features from dialogue of film characters in leading roles. Then we use these character-based features to drive our language generator to produce interesting utterances. This paper describes a corpus of film dialogue that we have collected from the IMSDb archive and annotated for linguistic structures and character archetypes. We extract different sets of features using external sources such as LIWC and SentiWordNet as well as using our own written scripts. The automation of feature extraction also eases the process of acquiring additional film scripts. We briefly show how film characters can be represented by models learned from the corpus, how the models can be distinguished based on different categories such as gender and film genre, and how they can be applied to a language generator to generate utterances that can be perceived as being similar to the intended character model.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhai-huang-2015-pilot
https://aclanthology.org/2015.mtsummit-papers.5.pdf
A pilot study towards end-to-end MT training
Typical MT training involves several stages, including word alignment, rule extraction, translation model estimation, and parameter tuning. In this paper, different from the traditional pipeline, we investigate the possibility of end-to-end MT training, and propose a framework which combines rule induction and parameter tuning in one single module. Preliminary experiments show that our learned model achieves comparable translation quality to the traditional MT training pipeline. * Work done while Prof. Liang Huang was at the City University of New York.
false
[]
[]
null
null
null
We thank the three anonymous reviewers for the valuable comments, and Kai Zhao for discussions. This project was supported in part by DARPA FA8750-13-2-0041 (DEFT), NSF IIS-1449278, and a Google Faculty Research Award.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kucuk-etal-2014-named
http://www.lrec-conf.org/proceedings/lrec2014/pdf/380_Paper.pdf
Named Entity Recognition on Turkish Tweets
Various recent studies show that the performance of named entity recognition (NER) systems developed for well-formed text types drops significantly when applied to tweets. The only existing study for the highly inflected agglutinative language Turkish reports a drop in F-Measure from 91% to 19% when ported from news articles to tweets. In this study, we present a new named entity-annotated tweet corpus and a detailed analysis of the various tweet-specific linguistic phenomena. We perform comparative NER experiments with a rule-based multilingual NER system adapted to Turkish on three corpora: a news corpus, our new tweet corpus, and another tweet corpus. Based on the analysis and the experimentation results, we suggest system features required to improve NER results for social media like Twitter.
false
[]
[]
null
null
null
This study is supported in part by a postdoctoral research grant from TÜBİTAK.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gilbert-riloff-2013-domain
https://aclanthology.org/P13-2015.pdf
Domain-Specific Coreference Resolution with Lexicalized Features
Most coreference resolvers rely heavily on string matching, syntactic properties, and semantic attributes of words, but they lack the ability to make decisions based on individual words. In this paper, we explore the benefits of lexicalized features in the setting of domain-specific coreference resolution. We show that adding lexicalized features to off-the-shelf coreference resolvers yields significant performance gains on four domain-specific data sets and with two types of coreference resolution architectures.
false
[]
[]
null
null
null
This material is based upon work supported by the National Science Foundation under Grant No. IIS-1018314 and the Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0172. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the DARPA, AFRL, or the U.S. government.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
alvez-etal-2018-cross
https://aclanthology.org/L18-1723.pdf
Cross-checking WordNet and SUMO Using Meronymy
We report on the practical application of a black-box testing methodology for the validation of the knowledge encoded in WordNet, SUMO and their mapping by using automated theorem provers. Our proposal is based on the part-whole information provided by WordNet, out of which we automatically create a large set of tests. Our experimental results confirm that the proposed system enables the validation of some pieces of information and also the detection of missing information or inconsistencies among these resources.
false
[]
[]
null
null
null
This work has been partially funded by the Spanish Projects TUNER (TIN2015-65308-C5-1-R) and GRAMM (TIN2017-86727-C2-2-R), the Basque Project LoRea (GIU15/30) and the UPV/EHU project OEBU (EHUA16/33).
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
paul-etal-2009-mining
https://aclanthology.org/W09-1111.pdf
Mining the Web for Reciprocal Relationships
In this paper we address the problem of identifying reciprocal relationships in English. In particular, we introduce an algorithm that semi-automatically discovers patterns encoding reciprocity based on a set of simple but effective pronoun templates. Using a set of the most frequently occurring patterns, we extract pairs of reciprocal pattern instances by searching the web. Then we apply two unsupervised clustering procedures to form meaningful clusters of such reciprocal instances. The pattern discovery procedure yields an accuracy of 97%, while the clustering procedures indicate accuracies of 91% and 82%. Moreover, the resulting set of 10,882 reciprocal instances represents a broad-coverage resource.
true
[]
[]
Partnership for the goals
null
null
null
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
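To illustrate the pronoun-template idea in the paul-etal-2009-mining abstract above, here is a hedged sketch with a single regular-expression template anchored on the reciprocal pronoun "each other". The template and the toy text are assumptions; the actual system semi-automatically discovers a much larger pattern set and searches the web rather than a fixed string:

```python
import re

# One illustrative template: "X and Y <verb> each other".
pattern = re.compile(r"\b(\w+) and (\w+) (\w+) each other\b", re.IGNORECASE)
text = "Alice and Bob respect each other. Cats and dogs chase each other."
for m in pattern.finditer(text):
    # Each match yields a candidate reciprocal instance.
    print({"arg1": m.group(1), "arg2": m.group(2), "predicate": m.group(3)})
```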
hatzivassiloglou-mckeown-1995-quantitative
https://aclanthology.org/P95-1027.pdf
A Quantitative Evaluation of Linguistic Tests for the Automatic Prediction of Semantic Markedness
We present a corpus-based study of methods that have been proposed in the linguistics literature for selecting the semantically unmarked term out of a pair of antonymous adjectives. Solutions to this problem are applicable to the more general task of selecting the positive term from the pair. Using automatically collected data, the accuracy and applicability of each method is quantified, and a statistical analysis of the significance of the results is performed. We show that some simple methods are indeed good indicators for the answer to the problem while other proposed methods fail to perform better than would be attributable to chance. In addition, one of the simplest methods, text frequency, dominates all others. We also apply two generic statistical learning methods for combining the indications of the individual methods, and compare their performance to the simple methods. The most sophisticated complex learning method offers a small, but statistically significant, improvement over the original tests.
false
[]
[]
null
null
null
This work was supported jointly by the Advanced Research Projects Agency and the Office of Naval Research under contract N00014-89-J-1782, and by the National Science Foundation under contract GER-90-24069. It was conducted under the auspices of the Columbia University CAT in High Performance Computing and Communications in Healthcare, a New York State Center for Advanced Technology supported by the New York State Science and Technology Foundation. We wish to thank Judith Klavans, Rebecca Passonneau, and the anonymous reviewers for providing us with useful comments on earlier versions of the paper.
1995
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
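The dominant "text frequency" test from the hatzivassiloglou-mckeown-1995-quantitative abstract above reduces to a single comparison: predict the more frequent member of an antonym pair as the unmarked (positive) term. A minimal sketch, with counts invented for illustration:

```python
def unmarked_term(pair, freq):
    """Predict the unmarked term as the more frequent member of the pair."""
    a, b = pair
    return a if freq.get(a, 0) >= freq.get(b, 0) else b

toy_freq = {"happy": 1200, "unhappy": 90, "tall": 800, "short": 700}
for pair in [("happy", "unhappy"), ("tall", "short")]:
    print(pair, "->", unmarked_term(pair, toy_freq))
```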
bangalore-etal-2006-finite
https://aclanthology.org/2006.iwslt-evaluation.2.pdf
Finite-state transducer-based statistical machine translation using joint probabilities
In this paper, we present our system for statistical machine translation that is based on weighted finite-state transducers. We describe the construction of the transducer, the estimation of the weights, acquisition of phrases (locally ordered tokens) and the mechanism we use for global reordering. We also present a novel approach to machine translation that uses a maximum entropy model for parameter estimation and contrast its performance to the finite-state translation model on the IWSLT Chinese-English data sets.
false
[]
[]
null
null
null
null
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
rohith-ramakrishnan-etal-2021-analysis
https://aclanthology.org/2021.paclic-1.75.pdf
Analysis of Text-Semantics via Efficient Word Embedding using Variational Mode Decomposition
In this paper, we propose a novel method which establishes a new connection between Signal Processing and Natural Language Processing (NLP) via Variational Mode Decomposition (VMD). Unlike modern Neural Network approaches for NLP, which are complex and often masked from the end user, our approach, involving Term Frequency-Inverse Document Frequency (TF-IDF) aided with VMD, dials down the complexity while retaining performance and transparency. The performance in terms of Machine Learning based approaches and the semantic relationships of words, along with the methodology of the above-mentioned approach, are analyzed and discussed in this paper.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
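A sketch of the TF-IDF half of the pipeline in the rohith-ramakrishnan-etal-2021-analysis abstract above (the VMD step, which would decompose the resulting term-weight signal, is omitted). The toy corpus and whitespace tokenisation are assumptions:

```python
import math
from collections import Counter

def tfidf(docs):
    """Plain TF-IDF vectors for a list of whitespace-tokenised documents."""
    n = len(docs)
    # Document frequency: number of documents containing each term.
    df = Counter(term for doc in docs for term in set(doc.split()))
    vectors = []
    for doc in docs:
        tf = Counter(doc.split())
        total = sum(tf.values())
        vectors.append({t: (c / total) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

docs = ["the cat sat", "the dog sat", "the cat ran"]
print(tfidf(docs)[0])  # "the" scores 0: it appears in every document
```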
di-eugenio-glass-2004-squibs
https://aclanthology.org/J04-1005.pdf
Squibs and Discussions: The Kappa Statistic: A Second Look
In recent years, the kappa coefficient of agreement has become the de facto standard for evaluating intercoder agreement for tagging tasks. In this squib, we highlight issues that affect κ and that the community has largely neglected. First, we discuss the assumptions underlying different computations of the expected agreement component of κ. Second, we discuss how prevalence and bias affect the κ measure.
false
[]
[]
null
null
null
This work is supported by grant N00014-00-1-0640 from the Office of Naval Research. Thanks to Janet Cahn and to the anonymous reviewers for comments on earlier drafts.
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
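The kappa coefficient discussed in the di-eugenio-glass-2004-squibs abstract above can be shown with a worked example, kappa = (P_o - P_e) / (1 - P_e), where P_e is computed from the two coders' marginal distributions. The confusion counts are invented to show the computation:

```python
def cohen_kappa(table):
    """Cohen's kappa from a coder-A-by-coder-B confusion table."""
    total = sum(sum(row) for row in table)
    p_o = sum(table[i][i] for i in range(len(table))) / total   # observed agreement
    row = [sum(r) / total for r in table]                       # coder A marginals
    col = [sum(table[i][j] for i in range(len(table))) / total
           for j in range(len(table))]                          # coder B marginals
    p_e = sum(r * c for r, c in zip(row, col))                  # expected agreement
    return (p_o - p_e) / (1 - p_e)

# Rows: coder A's labels; columns: coder B's labels.
table = [[45, 5],
         [5, 45]]
print(round(cohen_kappa(table), 3))  # P_o = 0.9, P_e = 0.5, kappa = 0.8
```

Skewing the marginals (prevalence) lowers 1 - P_e and hence kappa even at the same observed agreement, which is exactly the effect the squib highlights.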
le-roux-etal-2013-combining
https://aclanthology.org/D13-1116.pdf
Combining PCFG-LA Models with Dual Decomposition: A Case Study with Function Labels and Binarization
It has recently been shown that different NLP models can be effectively combined using dual decomposition. In this paper we demonstrate that PCFG-LA parsing models are suitable for combination in this way. We experiment with the different models which result from alternative methods of extracting a grammar from a treebank (retaining or discarding function labels, left binarization versus right binarization) and achieve a labeled Parseval F-score of 92.4 on Wall Street Journal Section 23; this represents an absolute improvement of 0.7 and an error reduction rate of 7% over a strong PCFG-LA product-model baseline. Although we experiment only with binarization and function labels in this study, there is much scope for applying this approach to other grammar extraction strategies.
false
[]
[]
null
null
null
We are grateful to the reviewers for their helpful comments. We also thank Joachim Wagner for providing feedback on an early version of the paper. This work has been partially funded by the Labex EFL (ANR/CGI).
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
marie-fujita-2019-unsupervised-joint
https://aclanthology.org/P19-1312.pdf
Unsupervised Joint Training of Bilingual Word Embeddings
State-of-the-art methods for unsupervised bilingual word embeddings (BWE) train a mapping function that maps pre-trained monolingual word embeddings into a bilingual space. Despite its remarkable results, unsupervised mapping is also well-known to be limited by the dissimilarity between the original word embedding spaces to be mapped. In this work, we propose a new approach that trains unsupervised BWE jointly on synthetic parallel data generated through unsupervised machine translation. We demonstrate that existing algorithms that jointly train BWE are very robust to noisy training data and show that unsupervised BWE jointly trained significantly outperform unsupervised mapped BWE in several cross-lingual NLP tasks.
false
[]
[]
null
null
null
We would like to thank the reviewers for their useful comments and suggestions. A part of this work was conducted under the program "Promotion of Global Communications Plan: Research, Development, and Social Demonstration of Multilingual Speech Translation Technology" of the Ministry of Internal Affairs and Communications (MIC), Japan.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kuo-etal-2012-exploiting
https://aclanthology.org/P12-2067.pdf
Exploiting Latent Information to Predict Diffusions of Novel Topics on Social Networks
This paper brings a marriage of two seemingly unrelated topics: natural language processing (NLP) and social network analysis (SNA). We propose a new task in SNA, which is to predict the diffusion of a new topic, and design a learning-based framework to solve this problem. We exploit the latent semantic information among users, topics, and social connections as features for prediction. Our framework is evaluated on real data collected from the public domain. The experiments show 16% AUC
true
[]
[]
Peace, Justice and Strong Institutions
null
null
This work was also supported by National Science Council, National Taiwan University and Intel Corporation under Grants NSC 100-2911-I-002-001, and 101R7501.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
druck-etal-2009-active
https://aclanthology.org/D09-1009.pdf
Active Learning by Labeling Features
Methods that learn from prior information about input features such as generalized expectation (GE) have been used to train accurate models with very little effort. In this paper, we propose an active learning approach in which the machine solicits "labels" on features rather than instances. In both simulated and real user experiments on two sequence labeling tasks we show that our active learning method outperforms passive learning with features as well as traditional active learning with instances. Preliminary experiments suggest that novel interfaces which intelligently solicit labels on multiple features facilitate more efficient annotation.
false
[]
[]
null
null
null
We thank Kedar Bellare for helpful discussions and Gau-
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wang-etal-2021-enpar
https://aclanthology.org/2021.eacl-main.251.pdf
ENPAR: Enhancing Entity and Entity Pair Representations for Joint Entity Relation Extraction
Current state-of-the-art systems for joint entity relation extraction (Luan et al., 2019; Wadden et al., 2019) usually adopt the multi-task learning framework. However, annotations for these additional tasks, such as coreference resolution and event extraction, are always equally hard (or even harder) to obtain. In this work, we propose a pre-training method, ENPAR, to improve joint extraction performance. ENPAR requires only additional entity annotations that are much easier to collect. Unlike most existing works that only consider incorporating entity information into the sentence encoder, we further utilize the entity pair information. Specifically, we devise four novel objectives, i.e., masked entity typing, masked entity prediction, adversarial context discrimination, and permutation prediction, to pre-train an entity encoder and an entity pair encoder. Comprehensive experiments show that the proposed pre-training method achieves significant improvement over BERT on ACE05, SciERC, and NYT, and outperforms the current state-of-the-art on ACE05.
false
[]
[]
null
null
null
The authors wish to thank the reviewers for their helpful comments and suggestions. This research is (partially) supported by NSFC (62076097
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ling-etal-2015-contexts
https://aclanthology.org/D15-1161.pdf
Not All Contexts Are Created Equal: Better Word Representations with Variable Attention
We introduce an extension to the bag-of-words model for learning word representations that takes into account both syntactic and semantic properties within language. This is done by employing an attention model that finds, within the contextual words, the words that are relevant for each prediction. The general intuition of our model is that some words are only relevant for predicting local context (e.g. function words), while other words are more suited for determining global context, such as the topic of the document. Experiments performed on both semantically and syntactically oriented tasks show gains using our model over the existing bag-of-words model. Furthermore, compared to other more sophisticated models, our model scales better as we increase the size of the context.
false
[]
[]
null
null
null
The PhD thesis of Wang Ling is supported by FCT grant SFRH/BD/51157/2010. This research was supported in part by the U.S. Army Research Laboratory, the U.S. Army Research Office under contract/grant number W911NF-10-1-0533 and NSF IIS-1054319 and FCT through the plurianual contract UID/CEC/50021/2013 and grant number SFRH/BPD/68428/2010.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
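A minimal sketch of the attention-over-context idea in the ling-etal-2015-contexts abstract above: context words are combined with learned weights rather than a uniform bag-of-words average. Weighting by context position alone (via a softmax over position scores) is a simplification of the paper's attention, and all vectors here are random toy data:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, window = 8, 2
vocab = ["the", "cat", "sat", "on", "mat"]
emb = {w: rng.normal(size=dim) for w in vocab}   # toy word vectors
pos_scores = rng.normal(size=2 * window)         # one learnable score per slot

def attended_context(context):
    """Weighted (not uniform) combination of the context word vectors."""
    weights = np.exp(pos_scores) / np.exp(pos_scores).sum()
    return sum(w * emb[tok] for w, tok in zip(weights, context))

# Context of the target "sat" in "the cat sat on mat": two words each side.
print(attended_context(["the", "cat", "on", "mat"]))
```

In training, the position scores (and, in the paper, word-dependent attention parameters) would be updated together with the embeddings so that informative context slots receive more weight.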
xu-etal-2002-study
https://aclanthology.org/P02-1025.pdf
A Study on Richer Syntactic Dependencies for Structured Language Modeling
We study the impact of richer syntactic dependencies on the performance of the structured language model (SLM) along three dimensions: parsing accuracy (LP/LR), perplexity (PPL) and word-error-rate (WER, N-best re-scoring). We show that our models achieve an improvement in LP/LR, PPL and/or WER over the reported baseline results using the SLM on the UPenn Treebank and Wall Street Journal (WSJ) corpora, respectively. Analysis of parsing performance shows correlation between the quality of the parser (as measured by precision/recall) and the language model performance (PPL and WER). A remarkable fact is that the enriched SLM outperforms the baseline 3-gram model in terms of WER by 10% when used in isolation as a second-pass (N-best re-scoring) language model.
false
[]
[]
null
null
null
null
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
barron-cedeno-etal-2016-convkn
https://aclanthology.org/S16-1138.pdf
ConvKN at SemEval-2016 Task 3: Answer and Question Selection for Question Answering on Arabic and English Fora
We describe our system, ConvKN, participating in the SemEval-2016 Task 3 "Community Question Answering". The task targeted the reranking of questions and comments in real-life web fora, in both English and Arabic. ConvKN combines convolutional tree kernels with convolutional neural networks and additional manually designed features, including text similarity and thread-specific features. For the first time, we applied tree kernels to syntactic trees of Arabic sentences for a reranking task. Our approaches obtained the second best results in three out of four tasks. The only task on which we performed just averagely is the one where we did not use tree kernels in our classifier.
false
[]
[]
null
null
null
This research is developed by the Arabic Language Technologies (ALT) group at the Qatar Computing Research Institute (QCRI), HBKU, Qatar Foundation in collaboration with MIT. It is part of the Interactive sYstems for Answer Search (IYAS) project. This work has been partially supported by the EC project CogNet, 671625 (H2020-ICT-2014-2, Research and Innovation action) and by an IBM Faculty Award.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
schwenk-2012-continuous
https://aclanthology.org/C12-2104.pdf
Continuous Space Translation Models for Phrase-Based Statistical Machine Translation
This paper presents a new approach to perform the estimation of the translation model probabilities of a phrase-based statistical machine translation system. We use neural networks to directly learn the translation probability of phrase pairs using continuous representations. The system can be easily trained on the same data used to build standard phrase-based systems. We provide experimental evidence that the approach seems to be able to infer meaningful translation probabilities for phrase pairs not seen in the training data, or even predict a list of the most likely translations given a source phrase. The approach can be used to rescore n-best lists, but we also discuss an integration into the Moses decoder. A preliminary evaluation on the English/French IWSLT task achieved improvements in the BLEU score and a human analysis showed that the new model often chooses semantically better translations. Several extensions of this work are discussed.
false
[]
[]
null
null
null
This work was partially financed by the French government (COSMAT, ANR-09-CORD-004), the European Commission (MATECAT, ICT-2011.4.2 -287688) and the DARPA BOLT project.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
plank-etal-2016-multilingual
https://aclanthology.org/P16-2067.pdf
Multilingual Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Models and Auxiliary Loss
Bidirectional long short-term memory (bi-LSTM) networks have recently proven successful for various NLP sequence modeling tasks, but little is known about their reliance on input representations, target languages, data set size, and label noise. We address these issues and evaluate bi-LSTMs with word, character, and unicode byte embeddings for POS tagging. We compare bi-LSTMs to traditional POS taggers across languages and data sizes. We also present a novel bi-LSTM model, which combines the POS tagging loss function with an auxiliary loss function that accounts for rare words. The model obtains state-of-the-art performance across 22 languages, and works especially well for morphologically complex languages. Our analysis suggests that bi-LSTMs are less sensitive to training data size and label corruptions (at small noise levels) than previously assumed.
false
[]
[]
null
null
null
We thank the anonymous reviewers for their feedback. AS is funded by the ERC Starting Grant LOWLANDS No. 313695. YG is supported by The Israeli Science Foundation (grant number 1555/15) and a Google Research Award.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jiang-etal-2021-lnn
https://aclanthology.org/2021.acl-long.64.pdf
LNN-EL: A Neuro-Symbolic Approach to Short-text Entity Linking
Entity linking (EL), the task of disambiguating mentions in text by linking them to entities in a knowledge graph, is crucial for text understanding, question answering or conversational systems. Entity linking on short text (e.g., single sentence or question) poses particular challenges due to limited context. While prior approaches use either heuristics or black-box neural methods, here we propose LNN-EL, a neuro-symbolic approach that combines the advantages of using interpretable rules based on first-order logic with the performance of neural learning. Even though constrained to using rules, LNN-EL performs competitively against SotA black-box neural approaches, with the added benefits of extensibility and transferability. In particular, we show that we can easily blend existing rule templates given by a human expert, with multiple types of features (priors, BERT encodings, box embeddings, etc), and even scores resulting from previous EL methods, thus improving on such methods. For instance, on the LC-QuAD-1.0 dataset, we show more than 4% increase in F1 score over previous SotA. Finally, we show that the inductive bias offered by using logic results in learned rules that transfer well across datasets, even without fine-tuning, while maintaining high accuracy.
false
[]
[]
null
null
null
We thank Ibrahim Abdelaziz, Pavan Kapanipathi, Srinivas Ravishankar, Berthold Reinwald, Salim Roukos and anonymous reviewers for their valuable inputs and feedback.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jimenez-lopez-becerra-bonache-2016-machine
https://aclanthology.org/W16-4101.pdf
Could Machine Learning Shed Light on Natural Language Complexity?
In this paper, we propose to use a subfield of machine learning, grammatical inference, to measure linguistic complexity from a developmental point of view. We focus on relative complexity by considering a child learner in the process of first language acquisition. The relevance of grammatical inference models for measuring linguistic complexity from a developmental point of view is based on the fact that algorithms proposed in this area can be considered computational models for studying first language acquisition. Even though it would be possible to use different techniques from the field of machine learning as computational models for dealing with linguistic complexity, since in any model we have algorithms that can learn from data, we claim that grammatical inference models offer some advantages over other tools.
false
[]
[]
null
null
null
This research has been supported by the Ministerio de Economía y Competitividad under the project number FFI2015-69978-P (MINECO/FEDER) of the Programa Estatal de Fomento de la Investigación Científica y Técnica de Excelencia, Subprograma Estatal de Generación de Conocimiento.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
song-etal-2010-active
https://aclanthology.org/W10-4121.pdf
Active Learning Based Corpus Annotation
Opinion Mining aims to automatically acquire useful opinioned information and knowledge from subjective texts. Research on Chinese Opinion Mining requires the support of an annotated corpus of Chinese opinioned-subjective texts. To facilitate the work of corpus annotators, this paper implements an active learning based annotation tool for Chinese opinioned elements, which can automatically identify the topic, sentiment, and opinion holder in a sentence.
false
[]
[]
null
null
null
The author of this paper would like to thank Information Retrieval Lab, Harbin Institute of Technology for providing the tool (LTP) used in experiments. This research was supported by National Natural Science Foundation of China Grant No.60773087.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
rello-basterrechea-2010-automatic
https://aclanthology.org/W10-0301.pdf
Automatic conjugation and identification of regular and irregular verb neologisms in Spanish
In this paper, a novel system for the automatic identification and conjugation of Spanish verb neologisms is presented. The paper describes a rule-based algorithm consisting of six steps which are taken to determine whether a new verb is regular or not, and to establish the rules that the verb should follow in its conjugation. The method was evaluated on 4,307 new verbs and its performance found to be satisfactory both for irregular and regular neologisms. The algorithm also contains extra rules to cater for verb neologisms in Spanish that do not exist as yet, but are inferred to be possible in light of existing cases of new verb creation in Spanish.
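As a flavor of the rule-based branch for regular verbs, here is a hedged toy sketch: it covers only the standard regular -ar/-er/-ir present-tense paradigm and none of the paper's irregularity rules; the function name and example are illustrative.

```python
# Toy sketch of the regular branch: classify a verb neologism by its
# infinitive ending and produce a regular present-tense paradigm.
# The paper's irregularity detection uses many more rules than this.
ENDINGS = {
    "ar": ["o", "as", "a", "amos", "áis", "an"],
    "er": ["o", "es", "e", "emos", "éis", "en"],
    "ir": ["o", "es", "e", "imos", "ís", "en"],
}

def conjugate_present(infinitive):
    stem, suffix = infinitive[:-2], infinitive[-2:]
    if suffix not in ENDINGS:
        raise ValueError("not an infinitive ending in -ar/-er/-ir")
    return [stem + ending for ending in ENDINGS[suffix]]

print(conjugate_present("tuitear"))  # a real neologism: 'to tweet'
```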
false
[]
[]
null
null
null
We would like to express or gratitude to the Molino de Ideas s.a. engineering team who have successfully implemented the method, specially to Daniel Ayuso de Santos and Alejandro de Pablos López.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
saint-dizier-2016-argument
https://aclanthology.org/L16-1156.pdf
Argument Mining: the Bottleneck of Knowledge and Language Resources
Given a controversial issue, argument mining from natural language texts (newspapers, and any form of text on the Internet) is extremely challenging: domain knowledge is often required together with appropriate forms of inferences to identify arguments. This contribution explores the types of knowledge that are required and how they can be paired with reasoning schemes, language processing and language resources to accurately mine arguments. We show via corpus analysis that the Generative Lexicon, enhanced in different manners and viewed as both a lexicon and a domain knowledge representation, is a relevant approach. In this paper, corpus annotation for argument mining is first developed; then we show how the generative lexicon approach must be adapted and how it can be paired with language processing patterns to extract and specify the nature of arguments. Our approach to argument mining is thus knowledge driven.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
pasca-2015-interpreting
https://aclanthology.org/N15-1037.pdf
Interpreting Compound Noun Phrases Using Web Search Queries
A weakly-supervised method is applied to anonymized queries to extract lexical interpretations of compound noun phrases (e.g., "fortune 500 companies"). The interpretations explain the subsuming role ("listed in") that modifiers (fortune 500) play relative to heads (companies) within the noun phrases. Experimental results over evaluation sets of noun phrases from multiple sources demonstrate that interpretations extracted from queries have encouraging coverage and precision. The top interpretation extracted is deemed relevant for more than 70% of the noun phrases.
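A hedged sketch of the query-pattern intuition, under the assumption that interpretations surface in queries of the form "HEAD <linking phrase> MOD"; the query log, pattern, and function below are invented for illustration and are far simpler than the paper's weakly-supervised method.

```python
# Toy sketch: for a compound "MOD HEAD", look for queries shaped like
# "HEAD <link words> MOD" and keep the connecting phrase as a candidate
# interpretation. The query log here is invented.
import re
from collections import Counter

queries = [
    "companies listed in fortune 500",
    "companies on the fortune 500",
    "companies listed in the fortune 500 ranking",
    "cookies made with chocolate",
]

def interpretations(modifier, head, query_log):
    pat = re.compile(rf"^{head}\s+(.+?)\s+(?:the\s+)?{re.escape(modifier)}\b")
    counts = Counter()
    for q in query_log:
        m = pat.match(q)
        if m:
            counts[m.group(1)] += 1
    return counts.most_common()

print(interpretations("fortune 500", "companies", queries))
# -> [('listed in', 2), ('on', 1)]
```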
false
[]
[]
null
null
null
The paper benefits from comments from Jutta Degener, Mihai Surdeanu and Susanne Riehemann. Data extracted by Haixun Wang and Jian Li is the source of the IsA vocabulary of noun phrases used in the evaluation.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
haffari-etal-2011-ensemble
https://aclanthology.org/P11-2125.pdf
An Ensemble Model that Combines Syntactic and Semantic Clustering for Discriminative Dependency Parsing
We combine multiple word representations based on semantic clusters extracted from the (Brown et al., 1992) algorithm and syntactic clusters obtained from the Berkeley parser (Petrov et al., 2006) in order to improve discriminative dependency parsing in the MST-Parser framework (McDonald et al., 2005). We also provide an ensemble method for combining diverse cluster-based models. The two contributions together significantly improves unlabeled dependency accuracy from 90.82% to 92.13%.
false
[]
[]
null
null
null
This research was partially supported by NSERC, Canada (RGPIN: 264905). We would like to thank Terry Koo for his help with the cluster-based features for dependency parsing and Ryan McDonald for the MSTParser source code which we modified and used for the experiments in this paper.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chung-2005-market
https://aclanthology.org/Y05-1007.pdf
MARKET Metaphors: Chinese, English and Malay
In this paper, MARKET metaphors used by different communities (Chinese, Malay and English) are laid out based on the frequency counts of these metaphors and their occurrences in different syntactic positions. The results show that certain types of metaphors have preferences for different syntactic positions for 'market.' For instance, MARKET IS A PERSON in all three languages prefers to place 'market' in the subject position. In addition to this finding, the choice of metaphor types by different speech communities may also reflect their perspectives regarding their country's economy. This is evidenced by the fewer instances of MARKET IS COMPETITION in the English data. The instances that describe how the market falls (plunges and crashes) may reflect the speakers' concerns with the maintenance of their power in the market rather than the competitiveness of their market. Therefore, through using quantitative data, this paper is able to infer the economic status of these speech communities. This can be done not only through analyzing the semantic meanings of the metaphors but also their interface with syntax.
false
[]
[]
null
null
null
null
2005
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gliwa-etal-2019-samsum
https://aclanthology.org/D19-5409.pdf
SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization
This paper introduces the SAMSum Corpus, a new dataset with abstractive dialogue summaries. We investigate the challenges it poses for automated summarization by testing several models and comparing their results with those obtained on a corpus of news articles. We show that model-generated summaries of dialogues achieve higher ROUGE scores than the model-generated summaries of news, in contrast with human evaluators' judgement. This suggests that the challenging task of abstractive dialogue summarization requires dedicated models and non-standard quality measures. To our knowledge, our study is the first attempt to introduce a high-quality chat-dialogues corpus, manually annotated with abstractive summarizations, which can be used by the research community for further studies.
false
[]
[]
null
null
null
We would like to express our sincere thanks to Tunia Błachno, Oliwia Ebebenge, Monika Jędras and Małgorzata Krawentek for their huge contribution to the corpus collection -without their ideas, management of the linguistic task and verification of examples we would not be able to create this paper. We are also grateful for the reviewers' helpful comments and suggestions.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
klafka-ettinger-2020-spying
https://aclanthology.org/2020.acl-main.434.pdf
Spying on Your Neighbors: Fine-grained Probing of Contextual Embeddings for Information about Surrounding Words
Although models using contextual word embeddings have achieved state-of-the-art results on a host of NLP tasks, little is known about exactly what information these embeddings encode about the context words that they are understood to reflect. To address this question, we introduce a suite of probing tasks that enable fine-grained testing of contextual embeddings for encoding of information about surrounding words. We apply these tasks to examine the popular BERT, ELMo and GPT contextual encoders, and find that each of our tested information types is indeed encoded as contextual information across tokens, often with near-perfect recoverability, but the encoders vary in which features they distribute to which tokens, how nuanced their distributions are, and how robust the encoding of each feature is to distance. We discuss implications of these results for how different types of models break down and prioritize word-level context information when constructing token embeddings.
false
[]
[]
null
null
null
We would like to thank Itamar Francez and Sam Wiseman for helpful discussion, and anonymous reviewers for their valuable feedback. This material is based upon work supported by the National Science Foundation under Award No. 1941160.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
huang-etal-2022-distilling
https://aclanthology.org/2022.fever-1.3.pdf
Distilling Salient Reviews with Zero Labels
Many people read online reviews to learn about real-world entities of their interest. However, the majority of reviews only describe general experiences and opinions of the customers, and may not reveal facts that are specific to the entity being reviewed. In this work, we focus on a novel task of mining, from a review corpus, sentences that are unique for each entity. We refer to this task as Salient Fact Extraction. Salient facts are extremely scarce due to their very nature. Consequently, collecting labeled examples for training supervised models is tedious and cost-prohibitive. To alleviate this scarcity problem, we develop an unsupervised method, ZL-Distiller, which leverages contextual language representations of the reviews and their distributional patterns to identify salient sentences about entities. Our experiments on multiple domains (hotels, products, and restaurants) show that ZL-Distiller achieves state-of-the-art performance and further boosts the performance of other supervised/unsupervised algorithms for the task. Furthermore, we show that salient sentences mined by ZL-Distiller provide unique and detailed information about entities, which benefits downstream NLP applications including question answering and summarization.
false
[]
[]
null
null
null
null
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lowe-etal-1994-language
https://aclanthology.org/H94-1087.pdf
Language Identification via Large Vocabulary Speaker Independent Continuous Speech Recognition
The goal of this study is to evaluate the potential for using large vocabulary continuous speech recognition as an engine for automatically classifying utterances according to the language being spoken. The problem of language identification is often thought of as being separate from the problem of speech recognition. But in this paper, as in Dragon's earlier work on topic and speaker identification, we explore a unifying approach to all three message classification problems based on the underlying stochastic process which gives rise to speech. We discuss the theoretical framework upon which our message classification systems are built and report on a series of experiments in which this theory is tested, using large vocabulary continuous speech recognition to distinguish English from Spanish.
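The paper scores speech with per-language recognizers and picks the language whose model assigns the highest likelihood. As a toy text analogue of that decision rule (not the paper's ASR-based system), one can compare per-language character-bigram log-likelihoods; the training strings below are invented stand-ins.

```python
# Toy analogue of likelihood-based language ID: score an input with one
# smoothed character-bigram model per language and take the argmax.
import math
from collections import Counter

def train_bigram(text):
    text = f"^{text}$"
    return Counter(zip(text, text[1:])), Counter(text[:-1])

def logprob(model, text, alpha=1.0, charset=64):
    bigrams, unigrams = model
    text = f"^{text}$"
    return sum(
        math.log((bigrams[(a, b)] + alpha) / (unigrams[a] + alpha * charset))
        for a, b in zip(text, text[1:])
    )

models = {
    "en": train_bigram("the quick brown fox jumps over the lazy dog"),
    "es": train_bigram("el rapido zorro marron salta sobre el perro perezoso"),
}

utterance = "el zorro es rapido"
best = max(models, key=lambda lang: logprob(models[lang], utterance))
print(best)  # prints the most likely language (here 'es')
```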
false
[]
[]
null
null
null
null
1994
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
carroll-etal-2000-engineering
https://aclanthology.org/W00-2007.pdf
Engineering a Wide-Coverage Lexicalized Grammar
We discuss a number of practical issues that have arisen in the development of a wide-coverage lexicalized grammar for English. In particular, we consider the way in which the design of the grammar and of its encoding was influenced by issues relating to the size of the grammar.
false
[]
[]
null
null
null
null
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
penkale-2013-tailor
https://aclanthology.org/2013.tc-1.13.pdf
Tailor-made quality-controlled translation
Traditional 'one-size-fits-all' models are failing to meet businesses' requirements. To support the growing demand for cost-effective translation, fine-grained control of quality is required, enabling fit-for-purpose content to be delivered at predictable quality and cost levels. This paper argues for customisable levels of quality, detailing the variables which can be altered to achieve a certain level of quality, and showing how this model can be implemented within Lingo24's Coach translation platform.
false
[]
[]
null
null
null
null
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
patrick-li-2009-cascade
https://aclanthology.org/U09-1014.pdf
A Cascade Approach to Extracting Medication Events
Information Extraction from the electronic clinical record is a comparatively new topic for computational linguists. In order to utilize the records to improve the efficiency and quality of health care, the knowledge content should be automatically encoded; however, this poses a number of challenges for Natural Language Processing (NLP). In this paper, we present a cascade approach to discover medication-related information (MEDICATION, DOSAGE, MODE, FREQUENCY, DURATION, REASON, and CONTEXT) from narrative patient records. The prototype of this system was used to participate in the i2b2 2009 medication extraction challenge. The results show better than 90% accuracy on 5 out of 7 entities used in the study.
true
[]
[]
Good Health and Well-Being
null
null
We would like to acknowledge the contribution of Stephen Crawshaw, Yefeng Wang and other members in the Health Information Technologies Research Laboratory.Deidentified clinical records used in this research were provided by the i2b2 National Center for Biomedical Computing funded by U54LM008748 and were originally prepared for the Shared Tasks for Challenges in NLP for Clinical Data organized by Dr. Ozlem Uzuner, i2b2 and SUNY.
2009
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
brook-weiss-etal-2021-qa
https://aclanthology.org/2021.emnlp-main.778.pdf
QA-Align: Representing Cross-Text Content Overlap by Aligning Question-Answer Propositions
Multi-text applications, such as multi-document summarization, are typically required to model redundancies across related texts. Current methods confronting consolidation struggle to fuse overlapping information. In order to explicitly represent content overlap, we propose to align predicate-argument relations across texts, providing a potential scaffold for information consolidation. We go beyond clustering coreferring mentions, and instead model overlap with respect to redundancy at a propositional level, rather than merely detecting shared referents. Our setting exploits QA-SRL, utilizing question-answer pairs to capture predicate-argument relations, facilitating laymen annotation of cross-text alignments. We employ crowd-workers for constructing a dataset of QA-based alignments, and present a baseline QA alignment model trained over our dataset. Analyses show that our new task is semantically challenging, capturing content overlap beyond lexical similarity and complements cross-document coreference with proposition-level links, offering potential use for downstream tasks.
false
[]
[]
null
null
null
We would like to thank the anonymous reviewers for their thorough and insightful comments. The work described herein was supported in part by grants from Intel Labs, Facebook, and the Israel Science Foundation grant 1951/17.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jappinen-etal-1988-locally
https://aclanthology.org/C88-1056.pdf
Locally Governed Trees and Dependency Parsing
This paper describes the notion of locally governed trees as a model of structurally restricted dependency structures of sentences. An abstract machine and its supporting software for the building of locally governed trees is introduced. The rest of the paper discusses how unambiguous, well-formed locally governed trees can be parsed in linear time when certain structural constraints are in force.
false
[]
[]
null
null
null
null
1988
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhang-etal-2016-jate
https://aclanthology.org/L16-1359.pdf
JATE 2.0: Java Automatic Term Extraction with Apache Solr
Automatic Term Extraction (ATE) or Recognition (ATR) is a fundamental processing step preceding many complex knowledge engineering tasks. However, few methods have been implemented as public tools and in particular, available as open-source freeware. Further, little effort is made to develop an adaptable and scalable framework that enables customization, development, and comparison of algorithms under a uniform environment. This paper introduces JATE 2.0, a complete remake of the free Java Automatic Term Extraction Toolkit (Zhang et al., 2008) delivering new features including: (1) highly modular, adaptable and scalable ATE thanks to integration with Apache Solr, the open source free-text indexing and search platform; (2) an extended collection of state-of-the-art algorithms. We carry out experiments on two well-known benchmarking datasets and compare the algorithms along the dimensions of effectiveness (precision) and efficiency (speed and memory consumption). To the best of our knowledge, this is by far the only free ATE library offering a flexible architecture and the most comprehensive collection of algorithms.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
salaberri-etal-2015-brol
https://aclanthology.org/R15-1072.pdf
bRol: The Parser of Syntactic and Semantic Dependencies for Basque
This paper presents bRol, the first fully automatic system to be developed for the parsing of syntactic and semantic dependencies in Basque. The parser has been built according to the settings established for the CoNLL-2009 Shared Task (Hajič et al., 2009); therefore, bRol can be thought of as a standard parser with scores comparable to the ones reported in the shared task. A second-order graph-based MATE parser has been used as the syntactic dependency parser. The semantic model, on the other hand, uses the traditional four-stage SRL pipeline. The system has a labeled attachment score of 80.51%, a labeled semantic F 1 of 75.10, and a labeled macro F 1 of 77.80.
false
[]
[]
null
null
null
Haritz Salaberri holds a PhD grant from the University of the Basque Country. In addition, this work has been supported by the EXTRECM project (Grant No. TIN2013-46616-C2-1-R) and IXA Group, research group of type A (2010-2015)(IT34410).
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
darwish-etal-2017-arabic
https://aclanthology.org/W17-1302.pdf
Arabic Diacritization: Stats, Rules, and Hacks
In this paper, we present a new and fast state-of-the-art Arabic diacritizer that guesses the diacritics of words and then their case endings. We employ a Viterbi decoder at word-level with back-off to stem, morphological patterns, and transliteration and sequence labeling based diacritization of named entities. For case endings, we use Support Vector Machine (SVM) based ranking coupled with morphological patterns and linguistic rules to properly guess case endings. We achieve a low word level diacritization error of 3.29% and 12.77% without and with case endings respectively on a new multi-genre free of copyright test set. We are making the diacritizer available for free for research purposes.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hovy-2015-demographic
https://aclanthology.org/P15-1073.pdf
Demographic Factors Improve Classification Performance
Extra-linguistic factors influence language use, and are accounted for by speakers and listeners. Most natural language processing (NLP) tasks to date, however, treat language as uniform. This assumption can harm performance. We investigate the effect of including demographic information on performance in a variety of text-classification tasks. We find that by including age or gender information, we consistently and significantly improve performance over demographic-agnostic models. These results hold across three text-classification tasks in five languages.
false
[]
[]
null
null
null
Thanks toŽeljko Agić, David Bamman, Jacob Eisenstein, Stephan Gouws, Anders Johannsen, Barbara Plank, Anders Søgaard, and Svitlana Volkova for their invaluable feedback, as well as to the anonymous reviewers, whose comments helped improve the paper. The author was supported under ERC Starting Grant LOWLANDS No. 313695.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
acl-1987-association
https://aclanthology.org/P87-1000.pdf
25th Annual Meeting of the Association for Computational Linguistics
The Twenty-Fifth Annual Meeting of the Association for Computational Linguistics offers the membership a chance to acknowledge and benefit from the wide range of developments in computational linguistics in the past several years. The papers in the program reflect the growing interaction between computational linguists trained in information processing approaches and those trained in linguistic disciplines. At the same time the program's papers report on new developments in many areas of computational linguistics that are now the forefront of research in our field. Because of the exponentially increasing number of papers submitted this year, the committee chose to reduce the number of panels and invited talks in favor of accepting additional papers.
false
[]
[]
null
null
null
null
1987
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
klein-nabi-2021-attention-based
https://aclanthology.org/2021.findings-emnlp.208.pdf
Attention-based Contrastive Learning for Winograd Schemas
Self-supervised learning has recently attracted considerable attention in the NLP community for its ability to learn discriminative features using a contrastive objective (Qu et al., 2020; Klein and Nabi, 2020). This paper investigates whether contrastive learning can be extended to Transformer attention to tackle the Winograd Schema Challenge. To this end, we propose a novel self-supervised framework, leveraging a contrastive loss directly at the level of self-attention. Experimental analysis of our attention-based models on multiple datasets demonstrates superior commonsense reasoning capabilities. The proposed approach outperforms all comparable unsupervised approaches while occasionally surpassing supervised ones.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
pardo-etal-2010-computational
https://aclanthology.org/W10-1601.pdf
Computational Linguistics in Brazil: An Overview
In this paper we give an overview of Computational Linguistics / Natural Language Processing in Brazil, describing the general research scenario, the main research groups, existing events and journals, and the perceived challenges, among other relevant information. We also identify opportunities for collaboration.
false
[]
[]
null
null
null
The authors are grateful to SBC, CEPLN, FAPESP, and CAPES for supporting this work and the realization of STIL 2009, where part of the data shown in this paper was presented.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhou-huang-2019-towards
https://aclanthology.org/W19-8661.pdf
Towards Generating Math Word Problems from Equations and Topics
A math word problem is a narrative with a specific topic that provides clues to the correct equation with numerical quantities and variables therein. In this paper, we focus on the task of generating math word problems. Previous works are mainly template-based with pre-defined rules. We propose a novel neural network model to generate math word problems from the given equations and topics. First, we design a fusion mechanism to incorporate the information of both equations and topics. Second, an entity-enforced loss is introduced to ensure the relevance between the generated math problem and the equation. Automatic evaluation results show that the proposed model significantly outperforms the baseline models. In human evaluations, the math word problems generated by our model are rated as being more relevant (in terms of solvability of the given equations and relevance to topics) and natural (i.e., grammaticality, fluency) than the baseline models.
true
[]
[]
Quality Education
null
null
We would like to thank the annotators for their efforts in the evaluation process. Thanks to the anonymous reviewers for their helpful comments and suggestions.
2019
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
green-2018-proposed
https://aclanthology.org/W18-5213.pdf
Proposed Method for Annotation of Scientific Arguments in Terms of Semantic Relations and Argument Schemes
This paper presents a proposed method for annotation of scientific arguments in biological/biomedical journal articles. Semantic entities and relations are used to represent the propositional content of arguments in instances of argument schemes. We describe an experiment in which we encoded the arguments in a journal article to identify issues in this approach. Our catalogue of argument schemes and a copy of the annotated article are now publicly available.
true
[]
[]
Industry, Innovation and Infrastructure
null
null
The analysis of the CRAFT article was done with the help of Michael Branon and Bishwa Giri, who were supported by a UNCG 2016 Summer Faculty Excellence Research Grant.
2018
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
sui-etal-2000-information
https://aclanthology.org/P00-1060.pdf
An Information-Theory-Based Feature Type Analysis for the Modeling of Statistical Parsing
The paper proposes an information-theorybased method for feature types analysis in probabilistic evaluation modelling for statistical parsing. The basic idea is that we use entropy and conditional entropy to measure whether a feature type grasps some of the information for syntactic structure prediction. Our experiment quantitatively analyzes several feature types' power for syntactic structure prediction and draws a series of interesting conclusions.
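The measurement at the heart of the method can be sketched directly: estimate H(Y) and H(Y|X) from (feature, prediction) counts and report the reduction in uncertainty. The toy counts below are invented; only the entropy arithmetic reflects the paper.

```python
# Hedged sketch: how much does feature type X reduce uncertainty about a
# prediction target Y? Measured as information gain H(Y) - H(Y|X).
import math
from collections import Counter

pairs = [("NP", "subj"), ("NP", "obj"), ("NP", "subj"),
         ("VP", "pred"), ("VP", "pred"), ("PP", "mod")]  # toy (X, Y) data

def entropy(counts):
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

px = Counter(x for x, _ in pairs)
py = Counter(y for _, y in pairs)
n = len(pairs)

h_y = entropy(py)
h_y_given_x = sum(
    (px[x] / n) * entropy(Counter(y for xx, y in pairs if xx == x))
    for x in px
)
print(f"H(Y) = {h_y:.3f} bits, H(Y|X) = {h_y_given_x:.3f} bits, "
      f"gain = {h_y - h_y_given_x:.3f} bits")
```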
false
[]
[]
null
null
null
null
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
takeshita-etal-2020-existing
https://aclanthology.org/2020.gebnlp-1.5.pdf
Can Existing Methods Debias Languages Other than English? First Attempt to Analyze and Mitigate Japanese Word Embeddings
It is known that word embeddings exhibit biases inherited from the corpus, and those biases reflect social stereotypes. Recently, many studies have been conducted to analyze and mitigate biases in word embeddings. Unsupervised Bias Enumeration (UBE) (Swinger et al., 2019) is one approach to analyzing biases for English, and Hard Debias (Bolukbasi et al., 2016) is the common technique to mitigate gender bias. These methods focused on English or, to a smaller extent, on Indo-European languages. However, it is not clear whether these methods can be generalized to other languages. In this paper, we apply these analysis and mitigation methods, UBE and Hard Debias, to Japanese word embeddings. Additionally, we examine whether these methods can be used for Japanese. We experimentally show that UBE and Hard Debias cannot be sufficiently adapted to Japanese embeddings.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
golding-schabes-1996-combining
https://aclanthology.org/P96-1010.pdf
Combining Trigram-Based and Feature-Based Methods for Context-Sensitive Spelling Correction
This paper addresses the problem of correcting spelling errors that result in valid, though unintended words (such as peace and piece, or quiet and quite) and also the problem of correcting particular word usage errors (such as amount and number, or among and between). Such corrections require contextual information and are not handled by conventional spelling programs such as Unix spell. First, we introduce a method called Trigrams that uses part-of-speech trigrams to encode the context. This method uses a small number of parameters compared to previous methods based on word trigrams. However, it is effectively unable to distinguish among words that have the same part of speech. For this case, an alternative feature-based method called Bayes performs better; but Bayes is less effective than Trigrams when the distinction among words depends on syntactic constraints. A hybrid method called Tribayes is then introduced that combines the best of the previous two methods. The improvement in performance of Tribayes over its components is verified experimentally. Tribayes is also compared with the grammar checker in Microsoft Word, and is found to have substantially higher performance.
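A much-reduced sketch of the Trigrams method: score each confusion-set candidate by the POS trigram it would create and keep the best one. The tag lookup and trigram scores are toy stand-ins for a real tagger and trained trigram model; the first example also shows the same-POS tie that motivates the Bayes component.

```python
# Toy sketch of the Trigrams idea. TAG and TRIGRAM_SCORE are invented
# stand-ins for a POS tagger and a trained POS trigram model.
TAG = {"a": "DT", "moment": "NN", "of": "IN", "peace": "NN", "piece": "NN",
       "quiet": "JJ", "quite": "RB", "very": "RB", "room": "NN"}

TRIGRAM_SCORE = {("IN", "NN", "."): -1.0, ("IN", "RB", "."): -4.0,
                 ("DT", "JJ", "NN"): -1.0, ("DT", "RB", "NN"): -4.0}

def choose(left, candidates, right):
    def score(word):
        tags = tuple(TAG.get(t, ".") for t in (left, word, right))
        return TRIGRAM_SCORE.get(tags, -10.0)
    return max(candidates, key=score)

# same POS -> trigram model cannot decide (the paper's motivation for Bayes)
print(choose("of", ["peace", "piece"], "."))
# different POS -> 'quiet' (JJ) fits the DT _ NN context
print(choose("a", ["quiet", "quite"], "room"))
```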
false
[]
[]
null
null
null
null
1996
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
shnarch-etal-2013-plis
https://aclanthology.org/P13-4017.pdf
PLIS: a Probabilistic Lexical Inference System
This paper presents PLIS, an open source Probabilistic Lexical Inference System which combines two functionalities: (i) a tool for integrating lexical inference knowledge from diverse resources, and (ii) a framework for scoring textual inferences based on the integrated knowledge. We provide PLIS with two probabilistic implementations of this framework. PLIS is available for download and developers of text processing applications can use it as an off-the-shelf component for injecting lexical knowledge into their applications. PLIS is easily configurable, components can be extended or replaced with user generated ones to enable system customization and further research. PLIS includes an online interactive viewer, which is a powerful tool for investigating lexical inference processes.
false
[]
[]
null
null
null
The authors thank Eden Erez for his help with the interactive viewer and Miquel Esplà Gomis for the bilingual dictionaries. This work was partially supported by the European Community's 7 th Framework Programme (FP7/2007-2013) under grant agreement no. 287923 (EXCITEMENT) and the Israel Science Foundation grant 880/12.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sogaard-johannsen-2012-robust
https://aclanthology.org/C12-2114.pdf
Robust Learning in Random Subspaces: Equipping NLP for OOV Effects
Inspired by work on robust optimization we introduce a subspace method for learning linear classifiers for natural language processing that are robust to out-of-vocabulary effects. The method is applicable in live-stream settings where new instances may be sampled from different and possibly also previously unseen domains. In text classification and part-of-speech (POS) tagging, robust perceptrons and robust stochastic gradient descent (SGD) with hinge loss achieve average error reductions of up to 18% when evaluated on out-of-domain data.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dienes-dubey-2003-deep
https://aclanthology.org/P03-1055.pdf
Deep Syntactic Processing by Combining Shallow Methods
We present a novel approach for finding discontinuities that outperforms previously published results on this task. Rather than using a deeper grammar formalism, our system combines a simple unlexicalized PCFG parser with a shallow pre-processor. This pre-processor, which we call a trace tagger, does surprisingly well at detecting where discontinuities can occur without using phrase structure information.
false
[]
[]
null
null
null
The authors would like to thank Jason Baldridge, Matthew Crocker, Geert-Jan Kruijff, Miles Osborne and the anonymous reviewers for many helpful comments.
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
strzalkowski-scheyen-1993-evaluation
https://aclanthology.org/1993.iwpt-1.23.pdf
Evaluation of TTP Parser: A Preliminary Report
TTP (Tagged Text Parser) is a fast and robust natural language parser specifically designed to process vast quantities of unrestricted text. TTP can analyze written text at the speed of approximately 0.3 sec/sentence, or 73 words per second. An important novel feature of the TTP parser is that it is equipped with a skip-and-fit recovery mechanism that allows for fast closing of more difficult sub-constituents after a preset amount of time has elapsed without producing a parse. Although a complete analysis is attempted for each sentence, the parser may occasionally ignore fragments of input to resume "normal" processing after skipping a few words. These fragments are later analyzed separately and attached as incomplete constituents to the main parse tree. TTP has recently been evaluated against several leading parsers. While no formal numbers were released (a formal evaluation is planned later this year), TTP has performed surprisingly well. The main argument of this paper is that TTP can provide a substantial gain in parsing speed while giving up relatively little in terms of the quality of the output it produces. This property allows TTP to be used effectively in parsing large volumes of text.
false
[]
[]
null
null
null
null
1993
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
srikumar-etal-2008-extraction
https://aclanthology.org/P08-1117.pdf
Extraction of Entailed Semantic Relations Through Syntax-Based Comma Resolution
This paper studies textual inference by investigating comma structures, which are highly frequent elements whose major role in the extraction of semantic relations has not been hitherto recognized. We introduce the problem of comma resolution, defined as understanding the role of commas and extracting the relations they imply. We show the importance of the problem using examples from Textual Entailment tasks, and present A Sentence Transformation Rule Learner (ASTRL), a machine learning algorithm that uses a syntactic analysis of the sentence to learn sentence transformation rules that can then be used to extract relations. We have manually annotated a corpus identifying comma structures and relations they entail and experimented with both gold standard parses and parses created by a leading statistical parser, obtaining F-scores of 80.2% and 70.4% respectively.
false
[]
[]
null
null
null
The UIUC authors were supported by NSF grant ITR IIS-0428472, DARPA funding under the Bootstrap Learning Program and a grant from Boeing.
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gyawali-etal-2013-native
https://aclanthology.org/W13-1729.pdf
Native Language Identification: a Simple n-gram Based Approach
This paper describes our approaches to Native Language Identification (NLI) for the NLI shared task 2013. NLI, a subarea of author profiling, focuses on identifying the first language of an author given a text in his second language. Researchers have reported several sets of features that have achieved relatively good performance in this task. The types of features used in such works are: lexical, syntactic and stylistic features, dependency parsers, psycholinguistic features and grammatical errors. In our approaches, we selected lexical and syntactic features based on n-grams of characters, words, Penn TreeBank (PTB) and Universal Parts Of Speech (POS) tagsets, and perplexity values of character n-grams to build four different models. We also combine all the four models using an ensemble based approach to get the final result. We evaluated our approach over a set of 11 native languages, reaching 75% accuracy.
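One of the four models (character n-grams with a linear classifier) admits a compact sketch with scikit-learn; the toy essays, labels, and n-gram range below are illustrative assumptions, not the shared-task configuration.

```python
# Hedged sketch: character n-gram features + linear SVM for native
# language identification. Essays and labels are toy stand-ins for the
# real TOEFL data over 11 native languages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

essays = ["I am agree with this statement because ...",
          "In my country is very common to think ...",
          "I agree with the statement, although ...",
          "Is important that students learn ..."]
labels = ["ES", "ES", "EN", "ES"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(),
)
model.fit(essays, labels)
print(model.predict(["In my opinion is better to study ..."]))
```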
false
[]
[]
null
null
null
We would like to thank the organizers of NLI shared task 2013. We would also like to thank CONACyT for its partial support of this work under scholarship 310473.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
howell-etal-2017-inferring
https://aclanthology.org/W17-0110.pdf
Inferring Case Systems from IGT: Enriching the Enrichment
In this paper, we apply two methodologies of data enrichment to predict the case systems of languages from a diverse and complex data set. The methodologies are based on those of Bender et al. (2013), but we extend them to work with a new data format and apply them to a new dataset. In doing so, we explore the effects of noise and inconsistency on the proposed algorithms. Our analysis reveals assumptions in the previous work that do not hold up in less controlled data sets.
false
[]
[]
null
null
null
This material is based upon work supported by the National Science Foundation under Grant No. BCS-1561833.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
castro-castro-etal-2015-authorship
https://aclanthology.org/R15-1012.pdf
Authorship Verification, Average Similarity Analysis
Authorship analysis is an important task for different text applications, for example in the field of digital forensic text analysis. Hence, we propose an authorship analysis method that compares the average similarity of a text of unknown authorship with all the texts of an author. Using this idea, a text that was not written by an author would not exceed the average similarity with known texts, and a text of unknown authorship would be considered as written by the author only if it exceeds the average similarity obtained between texts written by him. The experiments were carried out using the data provided in the PAN 2014 competition for Spanish articles for the task of authorship verification. We ran experiments using different similarity functions and 17 linguistic features. We analyze the results obtained with each function-feature pair against the baseline of the competition. Additionally, we introduce a text filtering phase that deletes all the sample texts of an author that are more similar to the samples of another author, with the idea of reducing confusion or non-representative text, and finally we analyze new experiments to compare the results with those obtained without filtering.
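A minimal sketch of the average-similarity criterion, assuming a bag-of-words representation and cosine similarity (the paper experiments with several similarity functions and 17 features); texts and the decision rule's details are illustrative.

```python
# Hedged sketch: attribute an unknown text to the author only if its mean
# similarity to the author's known texts exceeds the mean pairwise
# similarity among those known texts.
from collections import Counter
from itertools import combinations
import math

def vec(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def verify(known_texts, unknown_text):
    known = [vec(t) for t in known_texts]
    unknown = vec(unknown_text)
    n_pairs = max(1, len(known) * (len(known) - 1) // 2)
    baseline = sum(cosine(a, b) for a, b in combinations(known, 2)) / n_pairs
    score = sum(cosine(k, unknown) for k in known) / len(known)
    return score >= baseline, score, baseline

print(verify(["the old sea and the sun", "the sea was grey and old"],
             "the old sun over the grey sea"))
```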
false
[]
[]
null
null
null
This research has been partially funded by the Spanish Ministry of Science and Innovation (TIN2012-38536-C03-03)
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nan-etal-2021-entity
https://aclanthology.org/2021.eacl-main.235.pdf
Entity-level Factual Consistency of Abstractive Text Summarization
A key challenge for abstractive summarization is ensuring factual consistency of the generated summary with respect to the original document. For example, state-of-the-art models trained on existing datasets exhibit entity hallucination, generating names of entities that are not present in the source document. We propose a set of new metrics to quantify the entity-level factual consistency of generated summaries and we show that the entity hallucination problem can be alleviated by simply filtering the training data. In addition, we propose adding a summary-worthy entity classification task to the training process, as well as a joint entity and summary generation approach, which yield further improvements in entity-level metrics.
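A hedged sketch of one entity-level metric in this spirit: the precision of summary entities against the source. A real implementation would use an NER model; the capitalized-token heuristic below is a self-contained stand-in, and the example texts are invented.

```python
# Hedged sketch: what fraction of the summary's named entities also appear
# in the source? A toy capitalized-token heuristic stands in for real NER.
def toy_entities(text):
    return {tok.strip(".,") for tok in text.split() if tok[:1].isupper()}

def entity_precision(source, summary):
    src, summ = toy_entities(source), toy_entities(summary)
    if not summ:
        return 1.0
    return len(summ & src) / len(summ)

source = "Apple said Tim Cook will visit Berlin in May ."
summary = "Tim Cook will visit Paris ."
print(entity_precision(source, summary))  # 2/3: 'Paris' is hallucinated
```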
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
joty-etal-2010-exploiting
https://aclanthology.org/D10-1038.pdf
Exploiting Conversation Structure in Unsupervised Topic Segmentation for Emails
This work concerns automatic topic segmentation of email conversations. We present a corpus of email threads manually annotated with topics, and evaluate annotator reliability. To our knowledge, this is the first such email corpus. We show how the existing topic segmentation models (i.e., Lexical Chain Segmenter (LCSeg) and Latent Dirichlet Allocation (LDA)) which are solely based on lexical information, can be applied to emails. By pointing out where these methods fail and what any desired model should consider, we propose two novel extensions of the models that not only use lexical information but also exploit finer level conversation structure in a principled way. Empirical evaluation shows that LCSeg is a better model than LDA for segmenting an email thread into topical clusters and incorporating conversation structure into these models improves the performance significantly.
false
[]
[]
null
null
null
We are grateful to the 6 pilot annotators, 3 test annotators and to the 3 anonymous reviewers for their helpful comments. This work was supported in part by NSERC PGS award, NSERC BIN project, NSERC discovery grant and Institute for Computing, Information and Cognitive Systems (ICICS) at UBC.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
qian-etal-2016-modal
https://aclanthology.org/2016.lilt-14.2.pdf
Modal Subordination in Type Theoretic Dynamic Logic
Classical theories of discourse semantics, such as Discourse Representation Theory (DRT) and Dynamic Predicate Logic (DPL), predict that an indefinite noun phrase cannot serve as antecedent for an anaphor if the noun phrase is, but the anaphor is not, in the scope of a modal expression. However, this prediction meets with counterexamples. The phenomenon of modal subordination is one of them. In general, modal subordination is concerned with more than two modalities, where the modality in subsequent sentences is interpreted in a context 'subordinate' to the one created by the first modal expression. In other words, subsequent sentences are interpreted as being conditional on the scenario introduced in the first sentence. One consequence is that the anaphoric potential of indefinites may extend beyond the standard limits of accessibility constraints. This paper aims to give a formal interpretation of modal subordination. The theoretical backbone of the current work is Type Theoretic Dynamic Logic (TTDL), which is a Montagovian account of discourse semantics. Different from other dynamic theories, TTDL was built on classical mathematical and logical tools, such as λ-calculus and Church's theory of types. Hence it is completely compositional and does not suffer from the destructive assignment problem. We will review the basic setup of TTDL and then present Kratzer's theory on natural language modality. After that, by integrating the notion of con
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sato-nakagawa-2007-bayesian
https://aclanthology.org/D07-1044.pdf
Bayesian Document Generative Model with Explicit Multiple Topics
In this paper, we propose a novel probabilistic generative model to deal with explicit multiple-topic documents: the Parametric Dirichlet Mixture Model (PDMM). PDMM is an expansion of an existing probabilistic generative model, the Parametric Mixture Model (PMM), by hierarchical Bayes modeling. PMM models multiple-topic documents by mixing model parameters of each single topic with an equal mixture ratio. PDMM models multiple-topic documents by mixing model parameters of each single topic with a mixture ratio following a Dirichlet distribution. We evaluate PDMM and PMM by comparing F-measures using the MEDLINE corpus. The evaluation showed that PDMM is more effective than PMM.
false
[]
[]
null
null
null
This research was funded in part by MEXT Grant-in-Aid for Scientific Research on Priority Areas "i-explosion" in Japan.
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
van-noord-2004-error
https://aclanthology.org/P04-1057.pdf
Error Mining for Wide-Coverage Grammar Engineering
Parsing systems which rely on hand-coded linguistic descriptions can only perform adequately in as far as these descriptions are correct and complete. The paper describes an error mining technique to discover problems in hand-coded linguistic descriptions for parsing such as grammars and lexicons. By analysing parse results for very large unannotated corpora, the technique discovers missing, incorrect or incomplete linguistic descriptions. The technique uses the frequency of n-grams of words for arbitrary values of n. It is shown how a new combination of suffix arrays and perfect hash finite automata allows an efficient implementation.
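The core statistic admits a compact sketch: for each word n-gram, divide its frequency in unparsable sentences by its overall frequency and rank by this suspicion. Sentences and parse outcomes below are toy data; the paper's suffix-array machinery for arbitrary n is omitted.

```python
# Hedged sketch of the error-mining statistic: suspicion of an n-gram =
# frequency among unparsable sentences / overall frequency.
from collections import Counter

parsed_ok = ["the cat sleeps", "the dog barks", "a cat sleeps"]
parse_failed = ["the cat cat sleeps", "dog the barks cat cat"]

def ngrams(sent, n=2):
    toks = sent.split()
    return [" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)]

fail = Counter(g for s in parse_failed for g in ngrams(s))
total = fail + Counter(g for s in parsed_ok for g in ngrams(s))

suspicion = {g: fail[g] / total[g] for g in total}
for g, score in sorted(suspicion.items(), key=lambda kv: -kv[1])[:3]:
    print(f"{score:.2f}  {g}  (seen {total[g]}x)")
```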
false
[]
[]
null
null
null
This research was supported by the PIONIER project Algorithms for Linguistic Processing funded by NWO.
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
fourla-yannoutsou-1998-implementing
https://link.springer.com/chapter/10.1007/3-540-49478-2_27.pdf
Implementing MT in the Greek public sector
null
false
[]
[]
null
null
null
null
1998
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chen-moschitti-2018-learning
https://aclanthology.org/C18-1185.pdf
Learning to Progressively Recognize New Named Entities with Sequence to Sequence Models
In this paper, we propose to use a sequence to sequence model for Named Entity Recognition (NER) and we explore the effectiveness of such a model in a progressive NER setting, a Transfer Learning (TL) setting. We train an initial model on source data and transfer it to a model that can recognize new NE categories in the target data during a subsequent step, when the source data is no longer available. Our solution consists in: (i) reshaping and re-parametrizing the output layer of the first learned model to enable the recognition of new NEs; (ii) leaving the rest of the architecture unchanged, such that it is initialized with parameters transferred from the initial model; and (iii) fine-tuning the network on the target data. Most importantly, we design a new NER approach based on sequence to sequence (Seq2Seq) models, which can intuitively work better in our progressive setting. We compare our approach with a Bidirectional LSTM, which is a strong neural NER model. Our experiments show that the Seq2Seq model performs very well in the standard NER setting and is more robust in the progressive setting. Our approach can recognize previously unseen NE categories while preserving the knowledge of the seen data.
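Step (i) of the solution, reshaping and re-parametrizing the output layer, can be sketched in a few lines of numpy: keep the transferred rows for old labels and stack freshly initialized rows for the new categories. Dimensions, label sets, and initialization scale are illustrative assumptions.

```python
# Hedged sketch of step (i): grow the tagger's output layer so it can emit
# new entity labels while keeping the transferred weights for old ones.
import numpy as np

rng = np.random.default_rng(0)
hidden = 8
old_labels = ["O", "B-PER", "I-PER"]
new_labels = ["B-LOC", "I-LOC"]

W_old = rng.normal(size=(len(old_labels), hidden))   # transferred parameters
b_old = np.zeros(len(old_labels))

# reshape: stack freshly initialized rows for the new categories
W = np.vstack([W_old, rng.normal(scale=0.01, size=(len(new_labels), hidden))])
b = np.concatenate([b_old, np.zeros(len(new_labels))])

labels = old_labels + new_labels
h = rng.normal(size=hidden)                          # a stand-in encoder state
print(labels[int(np.argmax(W @ h + b))])             # fine-tuning would follow
```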
false
[]
[]
null
null
null
This research was partially supported by Almawave S.r.l. We would like to thank Giuseppe Castellucci, Andrea Favalli, and Raniero Romagnoli for inspiring this work with useful discussions on neural models for applications to real-world problems in the industrial world.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hua-etal-2021-dyploc
https://aclanthology.org/2021.acl-long.501.pdf
DYPLOC: Dynamic Planning of Content Using Mixed Language Models for Text Generation
We study the task of long-form opinion text generation, which faces at least two distinct challenges. First, existing neural generation models fall short of coherence, thus requiring efficient content planning. Second, diverse types of information are needed to guide the generator to cover both subjective and objective content. To this end, we propose DYPLOC, a generation framework that conducts dynamic planning of content while generating the output based on a novel design of mixed language models. To enrich the generation with diverse content, we further propose to use large pre-trained models to predict relevant concepts and to generate claims. We experiment with two challenging tasks on newly collected datasets: (1) argument generation with Reddit ChangeMyView, and (2) writing articles using New York Times' Opinion section. Automatic evaluation shows that our model significantly outperforms competitive comparisons. Human judges further confirm that our generations are more coherent with richer content.
false
[]
[]
null
null
null
This research is supported in part by National Science Foundation through Grant IIS-1813341. We thank three anonymous reviewers for their valuable suggestions on various aspects of this work.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
song-etal-2017-learning
https://aclanthology.org/K17-1016.pdf
Learning Word Representations with Regularization from Prior Knowledge
Conventional word embeddings are trained with specific criteria (e.g., based on language modeling or co-occurrence) inside a single information source, disregarding the opportunity for further calibration using external knowledge. This paper presents a unified framework that leverages pre-learned or external priors, in the form of a regularizer, for enhancing conventional language model-based embedding learning. We consider two types of regularizers. The first type is derived from topic distribution by running latent Dirichlet allocation on unlabeled data. The second type is based on dictionaries that are created with human annotation efforts. To effectively learn with the regularizers, we propose a novel data structure, trajectory softmax, in this paper. The resulting embeddings are evaluated by word similarity and sentiment classification. Experimental results show that our learning framework with regularization from prior knowledge improves embedding quality across multiple datasets, compared to a diverse collection of baseline methods.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
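A minimal sketch of the regularized-embedding idea from the abstract above: a penalty pulls together embeddings of word pairs that an external prior (a topic model or a dictionary) marks as related. The pair list, weight, and dimensions are invented; trajectory softmax itself is not reproduced here.

```python
import torch

def regularized_loss(lm_loss, emb, prior_pairs, lam=0.1):
    """emb: (vocab, dim) embedding matrix; prior_pairs: (i, j) word-id pairs the
    external prior says should be close; lam trades off LM objective vs. prior."""
    reg = torch.stack([(emb[i] - emb[j]).pow(2).sum() for i, j in prior_pairs]).mean()
    return lm_loss + lam * reg

emb = torch.nn.Parameter(torch.randn(1000, 100))
loss = regularized_loss(torch.tensor(2.3), emb, [(3, 17), (42, 7)])
loss.backward()  # gradients flow into the embeddings through the regularizer
```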
galley-etal-2015-deltableu
https://aclanthology.org/P15-2073.pdf
deltaBLEU: A Discriminative Metric for Generation Tasks with Intrinsically Diverse Targets
We introduce Discriminative BLEU (∆BLEU), a novel metric for intrinsic evaluation of generated text in tasks that admit a diverse range of possible outputs. Reference strings are scored for quality by human raters on a scale of [−1, +1] to weight multi-reference BLEU. In tasks involving generation of conversational responses, ∆BLEU correlates reasonably well with human judgments and outperforms sentence-level and IBM BLEU in terms of both Spearman's ρ and Kendall's τ. (A simplified sketch of the weighting follows this record.)
false
[]
[]
null
null
null
We thank the anonymous reviewers, Jian-Yun Nie, and Alan Ritter for their helpful comments and suggestions.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
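A simplified sketch of the ∆BLEU weighting idea described above: matched n-grams are credited with the human rating of the best reference, so poorly rated references contribute little or even negative credit. This illustrates the scoring principle only, not the released metric (no brevity penalty, a single n-gram order).

```python
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def delta_precision(hyp, refs, n=1):
    """Weighted n-gram precision in the spirit of ∆BLEU (simplified).
    refs: list of (token_list, human_weight) with weights in [-1, +1]."""
    hyp_counts = ngrams(hyp, n)
    ref_counts = [(ngrams(r, n), w) for r, w in refs]
    num = sum(max((w * min(c, rc[g]) for rc, w in ref_counts), default=0.0)
              for g, c in hyp_counts.items())
    return num / max(sum(hyp_counts.values()), 1)

refs = [("i totally agree with you".split(), 0.9),
        ("that is wrong".split(), -0.5)]
print(delta_precision("i agree with you".split(), refs, n=1))  # 0.9
```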
zhekova-kubler-2010-ubiu
https://aclanthology.org/S10-1019.pdf
UBIU: A Language-Independent System for Coreference Resolution
We present UBIU, a language-independent system for detecting full coreference chains composed of named entities, pronouns, and full noun phrases. It makes use of memory-based learning and a feature model following Rahman and Ng (2009). UBIU is evaluated on the task "Coreference Resolution in Multiple Languages" (SemEval Task 1; Recasens et al., 2010) in the context of the 5th International Workshop on Semantic Evaluation. (A toy memory-based mention-pair sketch follows this record.)
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
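An illustrative sketch of the memory-based learning at the core of UBIU-style systems described above: mention pairs are encoded as feature vectors and classified as coreferent or not by a k-nearest-neighbor learner. The features and training rows are invented.

```python
from sklearn.neighbors import KNeighborsClassifier

# Each row: [string_match, distance_in_sentences, same_gender, both_pronouns]
X_train = [[1, 0, 1, 0], [0, 3, 0, 0], [1, 1, 1, 1], [0, 5, 1, 0]]
y_train = [1, 0, 1, 0]  # 1 = coreferent mention pair, 0 = not

knn = KNeighborsClassifier(n_neighbors=3)  # memory-based: store instances, vote
knn.fit(X_train, y_train)
print(knn.predict([[1, 2, 1, 0]]))  # classify a new mention pair
```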
chen-etal-2019-facebook
https://aclanthology.org/D19-5213.pdf
Facebook AI's WAT19 Myanmar-English Translation Task Submission
This paper describes Facebook AI's submission to the WAT 2019 Myanmar-English translation task (Nakazawa et al., 2019). Our baseline systems are BPE-based transformer models. We explore methods to leverage monolingual data to improve generalization, including self-training, back-translation, and their combination. We further improve results by using noisy channel re-ranking and ensembling. We demonstrate that these techniques can significantly improve not only a system trained with additional monolingual data, but even the baseline system trained exclusively on the provided small parallel dataset. Our system ranks first in both directions according to human evaluation and BLEU, with a gain of over 8 BLEU points above the second-best system. (A minimal re-ranking sketch follows this record.)
false
[]
[]
null
null
null
The Authors wish to thank Sergey Edunov for sharing precious insights about his experience participating in WMT competitions and Htet Linn for feedback on how spacing is used in Burmese and for checking a handful of translations during early development.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
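A minimal sketch of the noisy-channel re-ranking mentioned in the abstract above: each n-best candidate is scored by a weighted sum of direct-model, channel-model (reverse direction), and language-model log-probabilities. The weights and scores here are made-up placeholders; in practice the weights are tuned on a dev set.

```python
def rerank(candidates, w_direct=1.0, w_channel=0.7, w_lm=0.3):
    """candidates: list of (y, logp_direct, logp_channel, logp_lm) tuples,
    i.e. log p(y|x), log p(x|y), and log p(y) for each hypothesis y."""
    def score(c):
        _, d, ch, lm = c
        return w_direct * d + w_channel * ch + w_lm * lm
    return max(candidates, key=score)[0]

nbest = [("hello world", -1.2, -2.0, -3.1),
         ("hi world",    -1.5, -1.1, -2.4)]
print(rerank(nbest))  # picks the candidate with the best combined score
```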
narsale-2010-jhu
https://aclanthology.org/W10-1746.pdf
JHU System Combination Scheme for WMT 2010
This paper describes the JHU system combination scheme that was used in the WMT 2010 submission. The incremental alignment scheme of (Karakos et al., 2008) was used for confusion network generation. The system order in the alignment of each sentence was learned using SVMs, following the work of (Karakos et al., 2010). Additionally, web-scale n-grams from the Google corpus were used to build language models that improved the quality of the combination output. Experiments in Spanish-English, French-English, German-English, and Czech-English language pairs were conducted, and the results show an improvement of approximately 1 BLEU point and 2 TER points over the best individual system. (A toy confusion-network decoding sketch follows this record.)
false
[]
[]
null
null
null
This work was partially supported by the DARPA GALE program Grant No HR0022-06-2-0001. I would like to thank all the participants of WMT 2010 for their system outputs. I would also like to thank Prof. Damianos Karakos for his guidance and support. Many thanks go to the Center for Language and Speech Processing at Johns Hopkins University for availability of their computer clusters.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
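A toy sketch of confusion-network decoding, the combination step described above: aligned system outputs form word slots, and the decoder picks the best-supported word per slot, skipping epsilon (deletion) arcs. The slots and system weights are invented; the real system learns alignment order with SVMs and adds LM scores.

```python
def decode_confusion_network(slots, sys_weights):
    """slots: list of dicts {word: [system_ids that voted for it]}."""
    out = []
    for slot in slots:
        best = max(slot, key=lambda w: sum(sys_weights[s] for s in slot[w]))
        if best != "<eps>":  # epsilon arc = this slot contributes no word
            out.append(best)
    return " ".join(out)

slots = [{"the": [0, 1], "a": [2]},
         {"cat": [0, 2], "hat": [1]},
         {"<eps>": [0, 1], "sat": [2]}]
print(decode_confusion_network(slots, sys_weights={0: 1.0, 1: 0.8, 2: 0.9}))  # "the cat"
```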
owoputi-etal-2013-improved
https://aclanthology.org/N13-1039.pdf
Improved Part-of-Speech Tagging for Online Conversational Text with Word Clusters
We consider the problem of part-of-speech tagging for informal, online conversational text. We systematically evaluate the use of large-scale unsupervised word clustering and new lexical features to improve tagging accuracy. With these features, our system achieves state-of-the-art tagging results on both Twitter and IRC POS tagging tasks; Twitter tagging is improved from 90% to 93% accuracy (more than 3% absolute). Qualitative analysis of these word clusters yields insights about NLP and linguistic phenomena in this genre. Additionally, we contribute the first POS annotation guidelines for such text and release a new dataset of English language tweets annotated using these guidelines. Tagging software, annotation guidelines, and large-scale word clusters are available at http://www.ark.cs.cmu.edu/TweetNLP. This paper describes release 0.3 of the "CMU Twitter Part-of-Speech Tagger" and annotated data. (An illustrative cluster-feature sketch follows this record.)
false
[]
[]
null
null
null
This research was supported in part by the National Science Foundation (IIS-0915187 and IIS-1054319).
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
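An illustrative sketch of the word-cluster features behind the tagger above: hierarchical (Brown-style) cluster bit strings are cut at several prefix lengths so that rare and unseen words can share statistics with frequent neighbors. The cluster paths here are made up, not the released clusters.

```python
# Hypothetical Brown-cluster bit-string paths (real ones come from clustering tweets).
brown = {"lol": "11110", "haha": "11111", "tomorrow": "01010"}

def cluster_features(word, prefixes=(2, 4)):
    path = brown.get(word.lower())
    if path is None:
        return []
    # coarse-to-fine prefixes: short prefixes group many words, long ones few
    return [f"cpath{p}={path[:p]}" for p in prefixes] + [f"cpath={path}"]

print(cluster_features("LOL"))  # ['cpath2=11', 'cpath4=1111', 'cpath=11110']
```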
kang-etal-2020-neural
https://aclanthology.org/2020.emnlp-main.493.pdf
Neural Mask Generator: Learning to Generate Adaptive Word Maskings for Language Model Adaptation
We propose a method to automatically generate domain- and task-adaptive maskings of the given text for self-supervised pre-training, such that we can effectively adapt the language model to a particular target task (e.g. question answering). Specifically, we present a novel reinforcement learning-based framework which learns the masking policy, such that using the generated masks for further pre-training of the target language model helps improve task performance on unseen texts. We use off-policy actor-critic with entropy regularization and experience replay for reinforcement learning, and propose a Transformer-based policy network that can consider the relative importance of words in a given text. We validate our Neural Mask Generator (NMG) on several question answering and text classification datasets using BERT and DistilBERT as the language models, on which it outperforms rule-based masking strategies by automatically learning optimal adaptive maskings. (A minimal policy-sampling sketch follows this record.)
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
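A minimal sketch of policy-driven masking as described in the abstract above: instead of masking tokens uniformly at random, positions are sampled from a learned importance distribution, and the choice stays stochastic so it can be trained with policy gradients. The scores below are random placeholders for a real policy network's outputs.

```python
import torch
import torch.nn.functional as F

def sample_mask(token_scores, mask_ratio=0.15):
    """token_scores: (seq_len,) unnormalized importance logits from the policy."""
    seq_len = token_scores.size(0)
    k = max(1, int(mask_ratio * seq_len))
    probs = F.softmax(token_scores, dim=0)
    idx = torch.multinomial(probs, k, replacement=False)  # stochastic sampling
    mask = torch.zeros(seq_len, dtype=torch.bool)
    mask[idx] = True
    return mask  # replace masked positions with [MASK] before further pre-training

print(sample_mask(torch.randn(20)))
```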
ligozat-2013-question
https://aclanthology.org/P13-2076.pdf
Question Classification Transfer
Question answering systems have been developed for many languages, but most resources were created for English, which can be a problem when developing a system in another language such as French. In particular, for question classification, no labeled question corpus is available for French, so this paper studies the possibility of using existing English corpora and transferring a classification by translating the questions and their labels. By translating the training corpus, we obtain results close to those of a monolingual setting. (A sketch of the translate-then-train route follows this record.)
false
[]
[]
null
null
null
null
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
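A sketch of the translate-the-training-set route studied above: English questions are machine-translated into French while their class labels carry over unchanged, and a French classifier is trained on the result. translate() is a stand-in for any MT system; the demo questions and labels are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def translate(text):
    """Placeholder MT: a real EN->FR engine would be plugged in here."""
    demo = {"Who wrote Hamlet?": "Qui a écrit Hamlet ?",
            "Where is Lyon?": "Où se trouve Lyon ?"}
    return demo[text]

english_data = [("Who wrote Hamlet?", "HUMAN"), ("Where is Lyon?", "LOCATION")]
french_texts = [translate(q) for q, _ in english_data]  # translated questions
labels = [y for _, y in english_data]                   # labels carry over as-is

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(french_texts, labels)  # a French classifier trained on translated data
```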
hozumi-etal-1993-integration
https://aclanthology.org/1993.iwpt-1.10.pdf
Integration of Morphological and Syntactic Analysis Based on LR Parsing Algorithm
Morphological analysis of Japanese is very different from that of English, because no spaces are placed between words; the analysis must therefore include word segmentation. However, ambiguities in segmentation are not always resolved with morphological information alone. This paper proposes a method to integrate morphological and syntactic analysis based on the LR parsing algorithm. An LR table derived from grammar rules is modified on the basis of connectabilities between adjacent words. The modified LR table reflects both morphological and syntactic constraints and enables efficient morphological and syntactic analysis. (A toy connectability-pruning sketch follows this record.)
false
[]
[]
null
null
null
null
1993
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
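A toy sketch of the integration idea above: candidate words proposed during LR parsing are pruned with a connectability table over adjacent-word categories, so segmentations that are morphologically possible but syntactically illegal are never explored. The categories and table entries are invented for illustration.

```python
# Hypothetical connectability table over adjacent-word categories.
connectable = {("noun", "particle"): True, ("particle", "verb"): True,
               ("noun", "verb"): False}

def allowed_shifts(prev_cat, candidate_words):
    """candidate_words: (surface, category) pairs starting at this position;
    keep only those that may legally connect to the previous word."""
    return [(w, c) for w, c in candidate_words
            if prev_cat is None or connectable.get((prev_cat, c), False)]

cands = [("hon", "noun"), ("wo", "particle"), ("yomu", "verb")]
print(allowed_shifts("noun", cands))  # only the particle 'wo' may follow a noun
```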
maruf-etal-2021-explaining
https://aclanthology.org/2021.inlg-1.12.pdf
Explaining Decision-Tree Predictions by Addressing Potential Conflicts between Predictions and Plausible Expectations
We offer an approach to explain Decision Tree (DT) predictions by addressing potential conflicts between aspects of these predictions and plausible expectations licensed by background information. We define four types of conflicts, operationalize their identification, and specify explanatory schemas that address them. Our human evaluation focused on the effect of explanations on users' understanding of a DT's reasoning and their willingness to act on its predictions. The results show that (1) explanations that address potential conflicts are considered at least as good as baseline explanations that just follow a DT path; and (2) the conflict-based explanations are deemed especially valuable when users' expectations disagree with the DT's predictions. (An illustrative conflict-detection sketch follows this record.)
false
[]
[]
null
null
null
This research was supported in part by grant DP190100006 from the Australian Research Council. We thank Marko Bohanec, one of the creators of the Nursery dataset, for helping us understand the features and their values. We also thank the anonymous reviewers for their helpful comments.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
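An illustrative sketch of conflict detection as described above: a decision-tree path is compared against plausible expectations about feature directions, and disagreements are collected for the explanation to address. Only one conflict type is shown; the features and expectations are invented.

```python
def find_conflicts(path_conditions, expectations):
    """path_conditions: (feature, op, threshold) tests along the DT path.
    expectations: feature -> 'higher_is_better' or 'lower_is_better'."""
    conflicts = []
    for feat, op, thr in path_conditions:
        exp = expectations.get(feat)
        # conflict: the path caps a feature the user expects to help when high
        if exp == "higher_is_better" and op == "<=":
            conflicts.append((feat, f"path requires {feat} <= {thr}, "
                                    "but higher values were expected to help"))
    return conflicts

path = [("income", "<=", 30000), ("age", ">", 25)]
print(find_conflicts(path, {"income": "higher_is_better"}))
```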