Schema (field: type, observed value statistics):

ID: string (lengths 11 to 54)
url: string (lengths 33 to 64)
title: string (lengths 11 to 184)
abstract: string (lengths 17 to 3.87k)
label_nlp4sg: bool (2 classes)
task: sequence
method: sequence
goal1: string (9 distinct values)
goal2: string (9 distinct values)
goal3: string (1 distinct value)
acknowledgments: string (lengths 28 to 1.28k)
year: string (length 4)
sdg1: bool (1 class)
sdg2: bool (1 class)
sdg3: bool (2 classes)
sdg4: bool (2 classes)
sdg5: bool (2 classes)
sdg6: bool (1 class)
sdg7: bool (1 class)
sdg8: bool (2 classes)
sdg9: bool (2 classes)
sdg10: bool (2 classes)
sdg11: bool (2 classes)
sdg12: bool (1 class)
sdg13: bool (2 classes)
sdg14: bool (1 class)
sdg15: bool (1 class)
sdg16: bool (2 classes)
sdg17: bool (2 classes)
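The schema above can be mirrored in a small record type for downstream filtering. This is a hypothetical sketch: the `PaperRecord` dataclass and the `nlp4sg_positives` helper are our own names, not part of the dataset, and the 17 `sdg*` columns are collapsed here into a single boolean list for convenience.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PaperRecord:
    # Field names mirror the dataset schema; optional string fields
    # (abstract, goals, acknowledgments) are null in many records.
    ID: str
    url: str
    title: str
    abstract: Optional[str]
    label_nlp4sg: bool
    task: List[str]
    method: List[str]
    goal1: Optional[str]
    goal2: Optional[str]
    goal3: Optional[str]
    acknowledgments: Optional[str]
    year: str
    sdg: List[bool]  # sdg1..sdg17, collapsed into one 17-element list

def nlp4sg_positives(records: List[PaperRecord]) -> List[PaperRecord]:
    """Keep only records labelled as NLP-for-social-good."""
    return [r for r in records if r.label_nlp4sg]
```

For example, the park-etal-2021-blames record below would survive this filter (label_nlp4sg is true, goal1 is "Peace, Justice and Strong Institutions", and sdg16 is true), while the bulk of the records, which are labelled false, would not.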
rinsche-2004-ltc
https://aclanthology.org/2004.eamt-1.18
LTC Communicator -- a web-based e-communication tool
Software vendors operating in international markets face two problems: first, products must be localised to meet the requirements of each target country; then there is the need to support diverse customers, where end-users may not speak the same language as the helpdesk. Localisation (new versions of screens, help text and documentation), while not cheap, is relatively well understood, with many companies providing expertise and tools. The problem of multilingual user support is much more complex, with few off-the-shelf solutions available. LTC-Communicator, a software product from the Language Technology Centre Ltd, offers an innovative and cost-effective response to this growing need.
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mehri-eskenazi-2021-gensf
https://aclanthology.org/2021.sigdial-1.51
GenSF: Simultaneous Adaptation of Generative Pre-trained Models and Slot Filling
In transfer learning, it is imperative to achieve strong alignment between a pre-trained model and a downstream task. Prior work has done this by proposing task-specific pre-training objectives, which sacrifices the inherent scalability of the transfer learning paradigm. We instead achieve strong alignment by simultaneously modifying both the pre-trained model and the formulation of the downstream task, which is more efficient and preserves the scalability of transfer learning. We present GENSF (Generative Slot Filling), which leverages a generative pre-trained open-domain dialog model for slot filling. GENSF (1) adapts the pre-trained model by incorporating inductive biases about the task and (2) adapts the downstream task by reformulating slot filling to better leverage the pre-trained model's capabilities. GENSF achieves state-of-the-art results on two slot filling datasets with strong gains in few-shot and zero-shot settings. We achieve a 9 F1 score improvement in zero-shot slot filling. This highlights the value of strong alignment between the pre-trained model and the downstream task.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ustun-etal-2019-cross
https://aclanthology.org/R19-1140
Cross-Lingual Word Embeddings for Morphologically Rich Languages
Cross-lingual word embedding models learn a shared vector space for two or more languages so that words with similar meaning are represented by similar vectors regardless of their language. Although the existing models achieve high performance on pairs of morphologically simple languages, they perform very poorly on morphologically rich languages such as Turkish and Finnish. In this paper, we propose a morpheme-based model in order to increase the performance of cross-lingual word embeddings on morphologically rich languages. Our model includes a simple extension which enables us to exploit morphemes for cross-lingual mapping. We applied our model for the Turkish-Finnish language pair on the bilingual word translation task. Results show that our model outperforms the baseline models by 2% in the nearest neighbour ranking.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
antoine-1996-parsing
https://aclanthology.org/C96-1010
Parsing spoken language without syntax
Parsing spontaneous speech is a difficult task because of the ungrammatical nature of most spoken utterances. To overpass this problem, we propose in this paper to handle the spoken language without considering syntax. We describe thus a microsemantic parser which is uniquely based on an associative network of semantic priming. Experimental results on spontaneous speech show that this parser stands for a robust alternative to standard ones.
false
[]
[]
null
null
null
null
1996
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chiang-etal-1995-robust
https://aclanthology.org/J95-3002
Robust Learning, Smoothing, and Parameter Tying on Syntactic Ambiguity Resolution
Statistical approaches to natural language processing generally obtain the parameters by using the maximum likelihood estimation (MLE) method. The MLE approaches, however, may fail to achieve good performance in difficult tasks, because the discrimination and robustness issues are not taken into consideration in the estimation processes. Motivated by that concern, a discrimination- and robustness-oriented learning algorithm is proposed in this paper for minimizing the error rate. In evaluating the robust learning procedure on a corpus of 1,000 sentences, 64.3% of the sentences are assigned their correct syntactic structures, while only 53.1% accuracy rate is obtained with the MLE approach. In addition, parameters are usually estimated poorly when the training data is sparse. Smoothing the parameters is thus important in the estimation process. Accordingly, we use a hybrid approach combining the robust learning procedure with the smoothing method. The accuracy rate of 69.8% is attained by using this approach. Finally, a parameter tying scheme is proposed to tie those highly correlated but unreliably estimated parameters together so that the parameters can be better trained in the learning process. With this tying scheme, the number of parameters is reduced by a factor of 2,000 (from 8.7 x 10^8 to 4.2 x 10^5), and the accuracy rate for parse tree selection is improved up to 70.3% when the robust learning procedure is applied on the tied parameters.
false
[]
[]
null
null
null
This research is supported by the R.O.C. National Science Council under NSC 82-0408-E-007-059 project. We would like to thank the Behavior Design Corporation (BDC) for providing us with the parsed corpus. Jing-Shin Chang has given valuable suggestions for writing this paper, in particular for the comparison with Briscoe and Carroll's approach. Also, four anonymous reviewers' comments on earlier drafts were very helpful to us in preparing the final version.
1995
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gerber-etal-2010-open
https://aclanthology.org/W10-0906
Open-domain Commonsense Reasoning Using Discourse Relations from a Corpus of Weblog Stories
We present a method of extracting open-domain commonsense knowledge by applying discourse parsing to a large corpus of personal stories written by Internet authors. We demonstrate the use of a linear-time, joint syntax/discourse dependency parser for this purpose, and we show how the extracted discourse relations can be used to generate open-domain textual inferences. Our evaluations of the discourse parser and inference models show some success, but also identify a number of interesting directions for future work.
false
[]
[]
null
null
null
The authors would like to thank the anonymous reviewers for their helpful comments and suggestions. The project or effort described here has been sponsored by the U.S. Army Research, Development, and Engineering Command (RDECOM). Statements and opinions expressed do not necessarily reflect the position or the policy of the United States Government, and no official endorsement should be inferred.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
martin-1991-conventional
https://aclanthology.org/W91-0206
Conventional Metaphor and the Lexicon
Metaphor and other forms of non-literal language are essential parts of language which have direct bearing on theories of lexical semantics. Neither narrow theories of lexical semantics, nor theories relying solely on world knowledge are sufficient to account for our ability to generate and interpret non-literal language. This paper presents an emerging approach that may provide such an account. This approach is based on systematic representations that capture non-literal language conventions, and mechanisms that can dynamically understand and learn new uses as they are encountered.
false
[]
[]
null
null
null
null
1991
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
park-etal-2021-blames
https://aclanthology.org/2021.findings-acl.358
Who Blames or Endorses Whom? Entity-to-Entity Directed Sentiment Extraction in News Text
Understanding who blames or supports whom in news text is a critical research question in computational social science. Traditional methods and datasets for sentiment analysis are, however, not suitable for the domain of political text as they do not consider the direction of sentiments expressed between entities. In this paper, we propose a novel NLP task of identifying directed sentiment relationship between political entities from a given news document, which we call directed sentiment extraction. From a million-scale news corpus, we construct a dataset of news sentences where sentiment relations of political entities are manually annotated. We present a simple but effective approach for utilizing a pretrained transformer, which infers the target class by predicting multiple question-answering tasks and combining the outcomes. We demonstrate the utility of our proposed method for social science research questions by analyzing positive and negative opinions between political entities in two major events: 2016 U.S. presidential election and COVID-19. The newly proposed problem, data, and method will facilitate future studies on interdisciplinary NLP methods and applications. (This work was done while the first author was a postdoctoral researcher at UCLA.)
true
[]
[]
Peace, Justice and Strong Institutions
null
null
We thank anonymous reviewers for their valuable comments. This work was supported by NSF SBE/SMA #1831848 "RIDIR: Integrated Communication Database and Computational Tools".
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
lee-etal-1990-logic
https://aclanthology.org/O90-1003
A Logic-based Temporal Knowledge Representation in Mandarin Chinese
null
false
[]
[]
null
null
null
null
1990
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hellman-etal-2020-multiple
https://aclanthology.org/2020.bea-1.3
Multiple Instance Learning for Content Feedback Localization without Annotation
Automated Essay Scoring (AES) can be used to automatically generate holistic scores with reliability comparable to human scoring. In addition, AES systems can provide formative feedback to learners, typically at the essay level. In contrast, we are interested in providing feedback specialized to the content of the essay, and specifically for the content areas required by the rubric. A key objective is that the feedback should be localized alongside the relevant essay text. An important step in this process is determining where in the essay the rubric designated points and topics are discussed. A natural approach to this task is to train a classifier using manually annotated data; however, collecting such data is extremely resource intensive. Instead, we propose a method to predict these annotation spans without requiring any labeled annotation data. Our approach is to consider AES as a Multiple Instance Learning (MIL) task. We show that such models can both predict content scores and localize content by leveraging their sentence-level score predictions. This capability arises despite never having access to annotation training data. Implications are discussed for improving formative feedback and explainable AES models.
false
[]
[]
null
null
null
We would like to thank Alok Baikadi, Julio Bradford, Jill Budden, Amy Burkhardt, Dave Farnham, Andrew Gorman and Jorge Roccatagliata for their efforts in collecting the annotated dataset used in this work.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
stanojevic-simaan-2014-beer
https://aclanthology.org/W14-3354
BEER: BEtter Evaluation as Ranking
We present the UvA-ILLC submission of the BEER metric to WMT 14 metrics task. BEER is a sentence level metric that can incorporate a large number of features combined in a linear model. Novel contributions are (1) efficient tuning of a large number of features for maximizing correlation with human system ranking, and (2) novel features that give smoother sentence level scores.
false
[]
[]
null
null
null
This work is supported by STW grant nr. 12271 and NWO VICI grant nr. 277-89-002.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ruangrajitpakorn-etal-2009-syntactic
https://aclanthology.org/W09-3414
A Syntactic Resource for Thai: CG Treebank
This paper presents Thai syntactic resource: Thai CG treebank, a categorial approach of language resources. Since there are very few Thai syntactic resources, we designed to create treebank based on CG formalism. Thai corpus was parsed with existing CG syntactic dictionary and LALR parser. The correct parsed trees were collected as preliminary CG treebank. It consists of 50,346 trees from 27,239 utterances. Trees can be split into three grammatical types. There are 12,876 sentential trees, 13,728 noun phrasal trees, and 18,342 verb phrasal trees. There are 17,847 utterances that obtain one tree, and an average tree per an utterance is 1.85.
false
[]
[]
null
null
null
null
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ruppenhofer-etal-2020-fine
https://aclanthology.org/2020.lrec-1.566
Fine-grained Named Entity Annotations for German Biographic Interviews
We present a fine-grained NER annotations scheme with 30 labels and apply it to German data. Building on the OntoNotes 5.0 NER inventory, our scheme is adapted for a corpus of transcripts of biographic interviews by adding categories for AGE and LAN(guage) and also adding label classes for various numeric and temporal expressions. Applying the scheme to the spoken data as well as a collection of teaser tweets from newspaper sites, we can confirm its generality for both domains, also achieving good inter-annotator agreement. We also show empirically how our inventory relates to the well-established 4-category NER inventory by re-annotating a subset of the GermEval 2014 NER coarse-grained dataset with our fine label inventory. Finally, we use a BERT-based system to establish some baselines for NER tagging on our two new datasets. Global results in in-domain testing are quite high on the two datasets, near what was achieved for the coarse inventory on the CoNLLL2003 data. Cross-domain testing produces much lower results due to the severe domain differences.
false
[]
[]
null
null
null
We would like to thank Hanna Strub for her support in performing the annotations.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tanaka-1995-edr
https://aclanthology.org/1995.mtsummit-1.17
The EDR Electronic Dictionary as information infrastructure
null
false
[]
[]
null
null
null
null
1995
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
devault-etal-2009-finish
https://aclanthology.org/W09-3902
Can I Finish? Learning When to Respond to Incremental Interpretation Results in Interactive Dialogue
We investigate novel approaches to responsive overlap behaviors in dialogue systems, opening possibilities for systems to interrupt, acknowledge or complete a user's utterance while it is still in progress. Our specific contributions are a method for determining when a system has reached a point of maximal understanding of an ongoing user utterance, and a prototype implementation that shows how systems can use this ability to strategically initiate system completions of user utterances. More broadly, this framework facilitates the implementation of a range of overlap behaviors that are common in human dialogue, but have been largely absent in dialogue systems.
false
[]
[]
null
null
null
The project or effort described here has been sponsored by the U.S. Army Research, Development, and Engineering Command (RDECOM). Statements and opinions expressed do not necessarily reflect the position or the policy of the United States Government, and no official endorsement should be inferred. We would also like to thank Anton Leuski for facilitating the use of incremental speech results, and David Schlangen and the ICT dialogue group, for helpful discussions.
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lembersky-etal-2012-adapting
https://aclanthology.org/E12-1026
Adapting Translation Models to Translationese Improves SMT
Translation models used for statistical machine translation are compiled from parallel corpora; such corpora are manually translated, but the direction of translation is usually unknown, and is consequently ignored. However, much research in Translation Studies indicates that the direction of translation matters, as translated language (translationese) has many unique properties. Specifically, phrase tables constructed from parallel corpora translated in the same direction as the translation task perform better than ones constructed from corpora translated in the opposite direction. We reconfirm that this is indeed the case, but emphasize the importance of using also texts translated in the 'wrong' direction. We take advantage of information pertaining to the direction of translation in constructing phrase tables, by adapting the translation model to the special properties of translationese. We define entropy-based measures that estimate the correspondence of target-language phrases to translationese, thereby eliminating the need to annotate the parallel corpus with information pertaining to the direction of translation. We show that incorporating these measures as features in the phrase tables of statistical machine translation systems results in consistent, statistically significant improvement in the quality of the translation.
false
[]
[]
null
null
null
We are grateful to Cyril Goutte, George Foster and Pierre Isabelle for providing us with an annotated version of the Hansard corpus. This research was supported by the Israel Science Foundation (grant No. 137/06) and by a grant from the Israeli Ministry of Science and Technology.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yencken-baldwin-2008-measuring
https://aclanthology.org/C08-1131
Measuring and Predicting Orthographic Associations: Modelling the Similarity of Japanese Kanji
As human beings, our mental processes for recognising linguistic symbols generate perceptual neighbourhoods around such symbols where confusion errors occur. Such neighbourhoods also provide us with conscious mental associations between symbols. This paper formalises orthographic models for similarity of Japanese kanji, and provides a proof-of-concept dictionary extension leveraging the mental associations provided by orthographic proximity.
false
[]
[]
null
null
null
null
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
liu-etal-2019-knowledge-augmented
https://aclanthology.org/N19-1117
Knowledge-Augmented Language Model and Its Application to Unsupervised Named-Entity Recognition
Traditional language models are unable to efficiently model entity names observed in text. All but the most popular named entities appear infrequently in text providing insufficient context. Recent efforts have recognized that context can be generalized between entity names that share the same type (e.g., person or location) and have equipped language models with access to an external knowledge base (KB). Our Knowledge-Augmented Language Model (KALM) continues this line of work by augmenting a traditional model with a KB. Unlike previous methods, however, we train with an end-to-end predictive objective optimizing the perplexity of text. We do not require any additional information such as named entity tags. In addition to improving language modeling performance, KALM learns to recognize named entities in an entirely unsupervised way by using entity type information latent in the model. On a Named Entity Recognition (NER) task, KALM achieves performance comparable with state-of-the-art supervised models. Our work demonstrates that named entities (and possibly other types of world knowledge) can be modeled successfully using predictive learning and training on large corpora of text without any additional information.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
amigo-etal-2004-empirical
https://aclanthology.org/P04-1027
An Empirical Study of Information Synthesis Task
This paper describes an empirical study of the "Information Synthesis" task, defined as the process of (given a complex information need) extracting, organizing and interrelating the pieces of information contained in a set of relevant documents, in order to obtain a comprehensive, non redundant report that satisfies the information need. Two main results are presented: a) the creation of an Information Synthesis testbed with 72 reports manually generated by nine subjects for eight complex topics with 100 relevant documents each; and b) an empirical comparison of similarity metrics between reports, under the hypothesis that the best metric is the one that best distinguishes between manual and automatically generated reports. A metric based on key concepts overlap gives better results than metrics based on n-gram overlap (such as ROUGE) or sentence overlap.
false
[]
[]
null
null
null
This research has been partially supported by a grant of the Spanish Government, project HERMES (TIC-2000-0335-C03-01). We are indebted to E. Hovy for his comments on an earlier version of this paper, and C. Y. Lin for his assistance with the ROUGE measure. Thanks also to our volunteers for their valuable cooperation.
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
meng-etal-2022-rewire
https://aclanthology.org/2022.acl-long.329
Rewire-then-Probe: A Contrastive Recipe for Probing Biomedical Knowledge of Pre-trained Language Models
Knowledge probing is crucial for understanding the knowledge transfer mechanism behind the pre-trained language models (PLMs). Despite the growing progress of probing knowledge for PLMs in the general domain, specialised areas such as biomedical domain are vastly under-explored. To facilitate this, we release a well-curated biomedical knowledge probing benchmark, MedLAMA, constructed based on the Unified Medical Language System (UMLS) Metathesaurus. We test a wide spectrum of state-of-the-art PLMs and probing approaches on our benchmark, reaching at most 3% of acc@10. While highlighting various sources of domain-specific challenges that amount to this underwhelming performance, we illustrate that the underlying PLMs have a higher potential for probing tasks. To achieve this, we propose Contrastive-Probe, a novel self-supervised contrastive probing approach, that adjusts the underlying PLMs without using any probing data. While Contrastive-Probe pushes the acc@10 to 24%, the performance gap remains notable. Our human expert evaluation suggests that the probing performance of our Contrastive-Probe is underestimated as UMLS does not comprehensively cover all existing factual knowledge. We hope MedLAMA and Contrastive-Probe facilitate further developments of more suited probing techniques for this domain.
true
[]
[]
Good Health and Well-Being
null
null
Nigel Collier and Zaiqiao Meng kindly acknowledges grant-in-aid support from the UK ESRC for project EPI-AI (ES/T012277/1).
2022
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
vincze-2014-uncertainty
https://aclanthology.org/C14-1174
Uncertainty Detection in Hungarian Texts
Uncertainty detection is essential for many NLP applications. For instance, in information retrieval, it is of primary importance to distinguish among factual, negated and uncertain information. Current research on uncertainty detection has mostly focused on the English language, in contrast, here we present the first machine learning algorithm that aims at identifying linguistic markers of uncertainty in Hungarian texts from two domains: Wikipedia and news media. The system is based on sequence labeling and makes use of a rich feature set including orthographic, lexical, morphological, syntactic and semantic features as well. Having access to annotated data from two domains, we also focus on the domain specificities of uncertainty detection by comparing results obtained in in-domain and cross-domain settings. Our results show that the domain of the text has significant influence on uncertainty detection.
false
[]
[]
null
null
null
This research was supported by the European Union and the State of Hungary, co-financed by the European Social Fund in the framework of TÁMOP-4.2.4.A/2-11/1-2012-0001 "National Excellence Program".
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ferret-1998-thematically-segment
https://aclanthology.org/P98-2243
How to Thematically Segment Texts by using Lexical Cohesion?
This article outlines a quantitative method for segmenting texts into thematically coherent units. This method relies on a network of lexical collocations to compute the thematic coherence of the different parts of a text from the lexical cohesiveness of their words. We also present the results of an experiment about locating boundaries between a series of concatened texts.
false
[]
[]
null
null
null
null
1998
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
seon-etal-2008-information
https://aclanthology.org/W08-0803
Information extraction using finite state automata and syllable n-grams in a mobile environment
We propose an information extraction system that is designed for mobile devices with low hardware resources. The proposed system extracts temporal instances (dates and times) and named instances (locations and topics) from Korean short messages in an appointment management domain. To efficiently extract temporal instances with limited numbers of surface forms, the proposed system uses well-refined finite state automata. To effectively extract various surface forms of named instances with low hardware resources, the proposed system uses a modified HMM based on syllable n-grams. In the experiment on instance boundary labeling, the proposed system showed better performances than traditional classifiers.
false
[]
[]
null
null
null
This research (paper) was funded by Samsung Electronics.
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhang-kordoni-2010-discriminant
https://aclanthology.org/C10-2166
Discriminant Ranking for Efficient Treebanking
Treebank annotation is a labor-intensive and time-consuming task. In this paper, we show that a simple statistical ranking model can significantly improve treebanking efficiency by prompting human annotators, well-trained in disambiguation tasks for treebanking but not necessarily grammar experts, to the most relevant linguistic disambiguation decisions. Experiments were carried out to evaluate the impact of such techniques on annotation efficiency and quality. The detailed analysis of outputs from the ranking model shows strong correlation to the human annotator behavior. When integrated into the treebanking environment, the model brings a significant annotation speed-up with improved inter-annotator agreement.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tatman-etal-2017-non
https://aclanthology.org/W17-2909
Non-lexical Features Encode Political Affiliation on Twitter
Previous work on classifying Twitter users' political alignment has mainly focused on lexical and social network features. This study provides evidence that political affiliation is also reflected in features which have been previously overlooked: users' discourse patterns (proportion of Tweets that are retweets or replies) and their rate of use of capitalization and punctuation. We find robust differences between politically left-and right-leaning communities with respect to these discourse and sub-lexical features, although they are not enough to train a high-accuracy classifier.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
choi-etal-1996-logical
https://aclanthology.org/Y96-1014
A Logical Structure for the Construction of Machine Readable Dictionaries
During the last 10 years, there have been many efforts in some areas of Natural Language Processing to encode the normal text or documents into machine readable form. If we encode written data using a canonical form which can be recognized by a computer, we can extract needed information and process and utilize it for another purposes. From this point of view, we present an account of the encoding of a printed dictionary. The construction of a lexicon is very time-consuming and expensive work and the application of the lexicon is restricted. In this paper, we describe a logical structure for Korean printed dictionaries as a general lexical representation based on SDML, which can be transformed into another representation for different application requirements.
false
[]
[]
null
null
null
The research described here was undertaken as a part of the project 'Korea Information Base System' by the support of Ministry of Science & Technology and Ministry of Culture & Sports in Korea and the project 'Multimedia Hangeul Engineering' supported by Samsung Co..
1996
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
brody-2007-clustering
https://aclanthology.org/P07-1057
Clustering Clauses for High-Level Relation Detection: An Information-theoretic Approach
Recently, there has been a rise of interest in unsupervised detection of high-level semantic relations involving complex units, such as phrases and whole sentences. Typically such approaches are faced with two main obstacles: data sparseness and correctly generalizing from the examples. In this work, we describe the Clustered Clause representation, which utilizes information-based clustering and inter-sentence dependencies to create a simplified and generalized representation of the grammatical clause. We implement an algorithm which uses this representation to detect a predefined set of high-level relations, and demonstrate our model's effectiveness in overcoming both the problems mentioned.
false
[]
[]
null
null
null
The author acknowledges the support of EPSRC grant EP/C538447/1. The author would like to thank Naftali Tishby and Mirella Lapata for their supervision and assistance on large portions of the work presented here. I would also like to thank the anonymous reviewers and my friends and colleagues for their helpful comments.
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
husain-etal-2013-towards
https://aclanthology.org/W13-3713
Towards a Psycholinguistically Motivated Dependency Grammar for Hindi
The overall goal of our work is to build a dependency grammar-based human sentence processor for Hindi. As a first step towards this end, in this paper we present a dependency grammar that is motivated by psycholinguistic concerns. We describe the components of the grammar that have been automatically induced using a Hindi dependency treebank. We relate some aspects of the grammar to relevant ideas in the psycholinguistics literature. In the process, we also extract statistics and patterns for phenomena that are interesting from a processing perspective. We finally present an outline of a dependency grammar-based human sentence processor for Hindi.
false
[]
[]
null
null
null
null
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hassan-etal-2010-whats
https://aclanthology.org/D10-1121
What's with the Attitude? Identifying Sentences with Attitude in Online Discussions
Mining sentiment from user generated content is a very important task in Natural Language Processing. An example of such content is threaded discussions which act as a very important tool for communication and collaboration in the Web. Threaded discussions include e-mails, e-mail lists, bulletin boards, newsgroups, and Internet forums. Most of the work on sentiment analysis has been centered around finding the sentiment toward products or topics. In this work, we present a method to identify the attitude of participants in an online discussion toward one another. This would enable us to build a signed network representation of participant interaction where every edge has a sign that indicates whether the interaction is positive or negative. This is different from most of the research on social networks that has focused almost exclusively on positive links. The method is experimentally tested using a manually labeled set of discussion posts. The results show that the proposed method is capable of identifying attitudinal sentences, and their signs, with high accuracy and that it outperforms several other baselines.
false
[]
[]
null
null
null
This research was funded by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the U.S. Army Research Lab. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of IARPA, the ODNI or the U.S. Government.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gaskill-2014-reducing
https://aclanthology.org/2014.amta-users.2
Reducing time and tedium with translation technology: the six-pound challenge
null
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
woods-fernando-2018-improving
https://aclanthology.org/W18-4709
Improving String Processing for Temporal Relations
This paper presents a refinement of the superposition operation on strings which are used to represent temporal relation information such as is found in documents annotated with TimeML. Superposition is made demonstrably more efficient by interleaving generation with testing, rather than generating and then testing. The strings offer compact visual appeal while remaining an attractive option for computation and reasoning. Motivated by Freksa's semi-interval relations, a suggestion is also made for a potential method of representing partial information in these strings so as to allow for analysis at different granularities, and for more flexibility when dealing with cases of ambiguity.
false
[]
[]
null
null
null
This research is supported by Science Foundation Ireland (SFI) through the CNGL Programme (Grant 12/CE/I2267) in the ADAPT Centre (https://www.adaptcentre.ie) at Trinity College Dublin. The ADAPT Centre for Digital Content Technology is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kehler-etal-2004-non
https://aclanthology.org/N04-1037
The (Non)Utility of Predicate-Argument Frequencies for Pronoun Interpretation
State-of-the-art pronoun interpretation systems rely predominantly on morphosyntactic contextual features. While the use of deep knowledge and inference to improve these models would appear technically infeasible, previous work has suggested that predicate-argument statistics mined from naturally-occurring data could provide a useful approximation to such knowledge. We test this idea in several system configurations, and conclude from our results and subsequent error analysis that such statistics offer little or no predictive information above that provided by morphosyntax.
false
[]
[]
null
null
null
This work was supported by the ACE program (www.nist.gov/speech/tests/ACE/).
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sandrini-etal-2006-maximum
https://aclanthology.org/W06-2601
Maximum Entropy Tagging with Binary and Real-Valued Features
Recent literature on text-tagging reported successful results by applying Maximum Entropy (ME) models. In general, ME taggers rely on carefully selected binary features, which try to capture discriminant information from the training data. This paper introduces a standard setting of binary features, inspired by the literature on named-entity recognition and text chunking, and derives corresponding real-valued features based on smoothed log-probabilities. The resulting ME models have orders of magnitude fewer parameters. Effective use of training data to estimate features and parameters is achieved by integrating a leaving-one-out method into the standard ME training algorithm. Experimental results on two tagging tasks show statistically significant performance gains after augmenting standard binary-feature models with real-valued features.
false
[]
[]
null
null
null
This work was partially financed by the European Commission under the project FAME (IST-2000-29323), and by the Autonomous Province of Trento under the FU-PAT project WebFaq.
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
thompson-1981-chart
https://aclanthology.org/P81-1036
Chart Parsing and Rule Schemata in PSG
MCHART is a flexible, modular chart parsing framework I have been developing (in Lisp) at Edinburgh, whose initial design characteristics were largely determined by pedagogical needs. PSG is a grammatical theory developed by Gerald Gazdar at Sussex, in collaboration with others in both the US and Britain, most notably Ivan Sag, Geoff Pullum, and Ewan Klein.
false
[]
[]
null
null
null
null
1981
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mutal-etal-2020-copeco
https://aclanthology.org/2020.amta-pemdt.5
COPECO: a Collaborative Post-Editing Corpus in Pedagogical Context
null
true
[]
[]
Quality Education
null
null
null
2020
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
li-etal-2021-ease
https://aclanthology.org/2021.newsum-1.10
EASE: Extractive-Abstractive Summarization End-to-End using the Information Bottleneck Principle
Current abstractive summarization systems outperform their extractive counterparts, but their widespread adoption is inhibited by the inherent lack of interpretability. Extractive summarization systems, though interpretable, suffer from redundancy and possible lack of coherence. To achieve the best of both worlds, we propose EASE, an extractive-abstractive framework that generates concise abstractive summaries that can be traced back to an extractive summary. Our framework can be applied to any evidence-based text generation problem and can accommodate various pretrained models in its simple architecture. We use the Information Bottleneck principle to jointly train the extraction and abstraction in an end-to-end fashion. Inspired by previous research that humans use a two-stage framework to summarize long documents (Jing and McKeown, 2000), our framework first extracts a pre-defined amount of evidence spans and then generates a summary using only the evidence. Using automatic and human evaluations, we show that the generated summaries are better than strong extractive and extractive-abstractive baselines.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kane-etal-2020-nubia
https://aclanthology.org/2020.evalnlgeval-1.4
NUBIA: NeUral Based Interchangeability Assessor for Text Generation
We present NUBIA, a methodology to build automatic evaluation metrics for text generation using only machine learning models as core components. A typical NUBIA model is composed of three modules: a neural feature extractor, an aggregator and a calibrator. We demonstrate an implementation of NUBIA showing competitive performance with state-of-the-art metrics used to evaluate machine translation and state-of-the-art results for image caption quality evaluation. In addition to strong performance, NUBIA models have the advantage of being modular and improve in synergy with advances in text generation models.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
florian-yarowsky-1999-dynamic
https://aclanthology.org/P99-1022
Dynamic Nonlocal Language Modeling via Hierarchical Topic-Based Adaptation
This paper presents a novel method of generating and applying hierarchical, dynamic topic-based language models. It proposes and evaluates new cluster generation, hierarchical smoothing and adaptive topic-probability estimation techniques. These combined models help capture long-distance lexical dependencies. Experiments on the Broadcast News corpus show significant improvement in perplexity (10.5% overall and 33.5% on target vocabulary).
false
[]
[]
null
null
null
The research reported here was sponsored by National Science Foundation Grant IRI-9618874. The authors would like to thank Eric Brill, Eugene Charniak, Ciprian Chelba, Fred Jelinek, Sanjeev Khudanpur, Lidia Mangu and Jun Wu for suggestions and feedback during the progress of this work, and Andreas Stolcke for use of his hierarchical clustering tools as a basis for some of the clustering software developed here.
1999
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
coling-2008-coling-2008
https://aclanthology.org/C08-3000
Coling 2008: Companion volume: Demonstrations
null
false
[]
[]
null
null
null
null
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhang-etal-2018-neural-coreference
https://aclanthology.org/P18-2017
Neural Coreference Resolution with Deep Biaffine Attention by Joint Mention Detection and Mention Clustering
Coreference resolution aims to identify in a text all mentions that refer to the same real-world entity. The state-of-the-art end-to-end neural coreference model considers all text spans in a document as potential mentions and learns to link an antecedent for each possible mention. In this paper, we propose to improve the end-to-end coreference resolution system by (1) using a biaffine attention model to get antecedent scores for each possible mention, and (2) jointly optimizing the mention detection accuracy and the mention clustering log-likelihood given the mention cluster labels. Our model achieves the state-of-the-art performance on the CoNLL-2012 Shared Task English test set.
false
[]
[]
null
null
null
We thank Kenton Lee and three anonymous reviewers for their helpful discussion and feedback.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
litman-forbes-riley-2004-predicting
https://aclanthology.org/P04-1045
Predicting Student Emotions in Computer-Human Tutoring Dialogues
We examine the utility of speech and lexical features for predicting student emotions in computer-human spoken tutoring dialogues. We first annotate student turns for negative, neutral, positive and mixed emotions. We then extract acoustic-prosodic features from the speech signal, and lexical items from the transcribed or recognized speech. We compare the results of machine learning experiments using these features alone or in combination to predict various categorizations of the annotated student emotions. Our best results yield a 19-36% relative improvement in error reduction over a baseline. Finally, we compare our results with emotion prediction in human-human tutoring dialogues.
true
[]
[]
Quality Education
null
null
This research is supported by NSF Grants 9720359 & 0328431. Thanks to the Why2-Atlas team and S. Silliman for system design and data collection.
2004
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
zhao-etal-2019-embedding
https://aclanthology.org/D19-1669
Embedding Lexical Features via Tensor Decomposition for Small Sample Humor Recognition
We propose a novel tensor embedding method that can effectively extract lexical features for humor recognition. Specifically, we use word-word co-occurrence to encode the contextual content of documents, and then decompose the tensor to get corresponding vector representations. We show that this simple method can capture features of lexical humor effectively for continuous humor recognition. In particular, we achieve a distance of 0.887 on a global humor ranking task, comparable to the top performing systems from SemEval 2017 Task 6B (Potash et al., 2017) but without the need for any external training corpus. In addition, we further show that this approach is also beneficial for small sample humor recognition tasks through a semi-supervised label propagation procedure, which achieves about 0.7 accuracy on the 16000 One-Liners (Mihalcea and Strapparava, 2005) and Pun of the Day (Yang et al., 2015) humour classification datasets using only 10% of known labels. * Zhenjie Zhao and Andrew Cattle contributed equally to this work.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
saxon-etal-2021-modeling
https://aclanthology.org/2021.emnlp-main.153
Modeling Disclosive Transparency in NLP Application Descriptions
Broader disclosive transparency (truth and clarity in communication regarding the function of AI systems) is widely considered desirable. Unfortunately, it is a nebulous concept, difficult to both define and quantify. This is problematic, as previous work has demonstrated possible trade-offs and negative consequences to disclosive transparency, such as a confusion effect, where "too much information" clouds a reader's understanding of what a system description means. Disclosive transparency's subjective nature has rendered deep study into these problems and their remedies difficult. To improve this state of affairs, we introduce neural language model-based probabilistic metrics to directly model disclosive transparency, and demonstrate that they correlate with user and expert opinions of system transparency, making them a valid objective proxy. Finally, we demonstrate the use of these metrics in a pilot study quantifying the relationships between transparency, confusion, and user perceptions in a corpus of real NLP system descriptions.
false
[]
[]
null
null
null
This work was supported in part by the National Science Foundation Graduate Research Fellowship under Grant No. 1650114. We would also like to
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
udagawa-etal-2020-linguistic
https://aclanthology.org/2020.findings-emnlp.67
A Linguistic Analysis of Visually Grounded Dialogues Based on Spatial Expressions
Recent models achieve promising results in visually grounded dialogues. However, existing datasets often contain undesirable biases and lack sophisticated linguistic analyses, which make it difficult to understand how well current models recognize their precise linguistic structures. To address this problem, we make two design choices: first, we focus on OneCommon Corpus (Udagawa and Aizawa, 2019, 2020), a simple yet challenging common grounding dataset which contains minimal bias by design. Second, we analyze their linguistic structures based on spatial expressions and provide comprehensive and reliable annotation for 600 dialogues. We show that our annotation captures important linguistic structures including predicate-argument structure, modification and ellipsis. In our experiments, we assess the model's understanding of these structures through reference resolution. We demonstrate that our annotation can reveal both the strengths and weaknesses of baseline models in essential levels of detail. Overall, we propose a novel framework and resource for investigating fine-grained language understanding in visually grounded dialogues.
false
[]
[]
null
null
null
This work was supported by JSPS KAKENHI Grant Number 18H03297 and NEDO SIP-2 "Bigdata and AI-enabled Cyberspace Technologies." We also thank the anonymous reviewers for their valuable suggestions and comments.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sazzed-2021-hybrid
https://aclanthology.org/2021.ranlp-1.144
A Hybrid Approach of Opinion Mining and Comparative Linguistic Analysis of Restaurant Reviews
The existing research on sentiment analysis has mainly utilized data curated in limited geographical regions and demographies (e.g., USA, UK, China) due to commercial interest and availability of review data. Since the user's attitudes and preferences can be affected by numerous sociocultural factors and demographic characteristics, it is necessary to have annotated review datasets belonging to various demographies. In this work, we first construct a review dataset, BanglaRestaurant, that contains over 2300 customer reviews towards a number of Bangladeshi restaurants. Then, we present a hybrid methodology that yields improvement over the best performing lexicon-based and machine learning (ML) based classifier without using any labeled data. Finally, we investigate how the demography (i.e., geography and nativeness in English) of users affects the linguistic characteristics of the reviews by contrasting two datasets, BanglaRestaurant and Yelp. The comparative results demonstrate the efficacy of the proposed hybrid approach. The data analysis reveals that demography plays an influential role in the linguistic aspects of reviews.
false
[]
[]
null
null
null
The author likes to thank Md. Samiul Basir Tasin and MD Shafin Islam Rudro for collecting BanglaRestaurant review data. The conference registration fee was supported by the ISAB VISA Scholarship of Old Dominion University.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
liu-etal-2021-quadrupletbert
https://aclanthology.org/2021.naacl-main.292
QuadrupletBERT: An Efficient Model For Embedding-Based Large-Scale Retrieval
The embedding-based large-scale query-document retrieval problem is a hot topic in the information retrieval (IR) field. Considering that pre-trained language models like BERT have achieved great success in a wide variety of NLP tasks, we present a QuadrupletBERT model for effective and efficient retrieval in this paper. Unlike most existing BERT-style retrieval models, which only focus on the ranking phase in retrieval systems, our model makes considerable improvements to the retrieval phase and leverages the distances between simple negative and hard negative instances to obtain better embeddings. Experimental results demonstrate that our QuadrupletBERT achieves state-of-the-art results in embedding-based large-scale retrieval tasks.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ircing-etal-2017-combining
https://aclanthology.org/W17-5021
Combining Textual and Speech Features in the NLI Task Using State-of-the-Art Machine Learning Techniques
We summarize the involvement of our CEMI team in the "NLI Shared Task 2017", which deals with both textual and speech input data. We submitted the results achieved by using three different system architectures; each of them combines multiple supervised learning models trained on various feature sets. As expected, better results are achieved with the systems that use both the textual data and the spoken responses. Combining the input data of two different modalities led to a rather dramatic improvement in classification performance. Our best performing method is based on a set of feed-forward neural networks whose hidden-layer outputs are combined together using a softmax layer. We achieved a macro-averaged F1 score of 0.9257 on the evaluation (unseen) test set and our team placed first in the main task together with other three teams.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
richardson-kuhn-2012-light
https://aclanthology.org/C12-2098
Light Textual Inference for Semantic Parsing
There has been a lot of recent interest in Semantic Parsing, centering on using data-driven techniques for mapping natural language to full semantic representations (Mooney, 2007). One particular focus has been on learning with ambiguous supervision (Chen and Mooney, 2008; Kim and Mooney, 2012), where the goal is to model language learning within broader perceptual contexts (Mooney, 2008). We look at learning light inference patterns for Semantic Parsing within this paradigm, focusing on detecting speaker commitments about events under discussion (Nairn et al., 2006; Karttunen, 2012). We adapt PCFG induction techniques (Börschinger et al., 2011; Johnson et al., 2012) for learning inference using event polarity and context as supervision, and demonstrate the effectiveness of our approach on a modified portion of the Grounded World corpus (Bordes et al., 2010).
false
[]
[]
null
null
null
This work was funded by the Deutsche Forschungsgemeinschaft (DFG) on the project SFB 732, "Incremental Specification in Context". We thank Sina Zarriess for useful suggestions and discussions, and Annie Zaenen and Cleo Condoravdi for earlier discussions about the overall idea and method.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jin-aletras-2020-complaint
https://aclanthology.org/2020.coling-main.157
Complaint Identification in Social Media with Transformer Networks
Complaining is a speech act extensively used by humans to communicate a negative inconsistency between reality and expectations. Previous work on automatically identifying complaints in social media has focused on using feature-based and task-specific neural network models. Adapting state-of-the-art pre-trained neural language models and their combinations with other linguistic information from topics or sentiment for complaint prediction has yet to be explored. In this paper, we evaluate a battery of neural models underpinned by transformer networks which we subsequently combine with linguistic information. Experiments on a publicly available data set of complaints demonstrate that our models outperform previous state-of-the-art methods by a large margin achieving a macro F1 up to 87.
false
[]
[]
null
null
null
Nikolaos Aletras is supported by ESRC grant ES/T012714/1.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bojar-etal-2012-joy
http://www.lrec-conf.org/proceedings/lrec2012/pdf/645_Paper.pdf
The Joy of Parallelism with CzEng 1.0
CzEng 1.0 is an updated release of our Czech-English parallel corpus, freely available for non-commercial research or educational purposes. In this release, we approximately doubled the corpus size, reaching 15 million sentence pairs (about 200 million tokens per language). More importantly, we carefully filtered the data to reduce the amount of non-matching sentence pairs. CzEng 1.0 is automatically aligned at the level of sentences as well as words. We provide not only the plain text representation, but also automatic morphological tags, surface syntactic as well as deep syntactic dependency parse trees and automatic co-reference links in both English and Czech. This paper describes key properties of the released resource including the distribution of text domains, the corpus data formats, and a toolkit to handle the provided rich annotation. We also summarize the procedure of the rich annotation (incl. co-reference resolution) and of the automatic filtering. Finally, we provide some suggestions on exploiting such an automatically annotated sentence-parallel corpus.
false
[]
[]
null
null
null
The work on this project was supported by the project EuroMatrixPlus (FP7-ICT-2007-3-231720 of the EU and 7E09003+7E11051 of the Czech Republic), Czech Science Foundation grants P406/10/P259 and 201/09/H057, GAUK 4226/2011, 116310, and the FAUST project (FP7-ICT-2009-4-247762 of the EU and 7E11041 of the Czech Republic). This work has been using language resources developed and/or stored and/or distributed by the LINDAT-Clarin project of the Ministry of Education of the Czech Republic (project LM2010013).
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
vilain-2004-building
http://www.lrec-conf.org/proceedings/lrec2004/pdf/763.pdf
Building part-of-speech Corpora Through Histogram Hopping
This paper is concerned with lowering the cost of producing training resources for part-of-speech taggers. We focus primarily on the resource needs of unsupervised taggers, as these can be trained with simpler resources than their supervised counterparts. We introduce histogram hopping, a new approach for developing the central training resources of unsupervised taggers, and describe a simple annotation prototype that implements the approach. We then discuss the applicability of histogram hopping to the development of resources for supervised taggers. Finally, we report on a preliminary pilot study for French that validates this work.
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhou-etal-2020-improving-candidate
https://aclanthology.org/2020.tacl-1.8
Improving Candidate Generation for Low-resource Cross-lingual Entity Linking
Cross-lingual entity linking (XEL) is the task of finding referents in a target-language knowledge base (KB) for mentions extracted from source-language texts. The first step of (X)EL is candidate generation, which retrieves a list of plausible candidate entities from the target-language KB for each mention. Approaches based on resources from Wikipedia have proven successful in the realm of relatively high-resource languages, but these do not extend well to low-resource languages with few, if any, Wikipedia pages. Recently, transfer learning methods have been shown to reduce the demand for resources in the low-resource languages by utilizing resources in closely related languages, but the performance still lags far behind their high-resource counterparts. In this paper, we first assess the problems faced by current entity candidate generation methods for low-resource XEL, then propose three improvements that (1) reduce the disconnect between entity mentions and KB entries, and (2) improve the robustness of the model to low-resource scenarios. The methods are simple, but effective: we experiment with our approach on seven XEL datasets and find that they yield an average gain of 16.9% in TOP-30 gold candidate recall, compared with state-of-the-art baselines. Our improved model also yields an average gain of 7.9% in in-KB accuracy of end-to-end XEL.
false
[]
[]
null
null
null
We would like to thank Radu Florian and the anonymous reviewers for their useful feedback. This material is based on work supported in part by the Defense Advanced Research Projects Agency Information Innovation Office (I2O) Low Resource Languages for Emergent Incidents (LORELEI) program under contract no. HR0011-15-C0114. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on. Shruti Rijhwani is supported by a Bloomberg Data Science Ph.D. Fellowship.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
john-vechtomova-2017-uwat
https://aclanthology.org/W17-5235
UWat-Emote at EmoInt-2017: Emotion Intensity Detection using Affect Clues, Sentiment Polarity and Word Embeddings
This paper describes the UWaterloo affect prediction system developed for EmoInt-2017. We delve into our feature selection approach for affect intensity, affect presence, sentiment intensity and sentiment presence lexica alongside pretrained word embeddings, which are utilized to extract emotion intensity signals from tweets in an ensemble learning approach. The system employs emotion specific model training, and utilizes distinct models for each of the emotion corpora in isolation. Our system utilizes gradient boosted regression as the primary learning technique to predict the final emotion intensities.
false
[]
[]
null
null
null
We would like to acknowledge the organizers of this shared task, Saif M. Mohammad and Felipe Bravo-Marquez, for their support. We would also like to thank Saif M. Mohammad and Pierre Charron for permitting access to the NRC emotion and sentiment lexicons for this task.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
crego-marino-2006-integration
https://aclanthology.org/2006.amta-papers.4
Integration of POStag-based Source Reordering into SMT Decoding by an Extended Search Graph
This paper presents a reordering framework for statistical machine translation (SMT) where source-side reorderings are integrated into SMT decoding, allowing for a highly constrained reordered search graph. The monotone search is extended by means of a set of reordering patterns (linguistically motivated rewrite patterns). Patterns are automatically learnt in training from word-to-word alignments and source-side Part-Of-Speech (POS) tags. Traversing the extended search graph, the decoder evaluates every hypothesis making use of a group of widely used SMT models and helped by an additional N-gram language model of source-side POS tags. Experiments are reported on the Euparl task (Spanish-to-English and English-to-Spanish). Results are presented regarding translation accuracy (using human and automatic evaluations) and computational efficiency, showing significant improvements in translation quality for both translation directions at a very low computational cost.
false
[]
[]
null
null
null
This work has been partially funded by the European Union under the integrated project TC-STAR -Technology and Corpora for Speech to Speech Translation -(IST-2002-FP6-506738, http://www.tc-star.org), and the Universitat Politècnica de Catalunya under UPC-RECERCA grant.
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
badr-etal-2008-segmentation
https://aclanthology.org/P08-2039
Segmentation for English-to-Arabic Statistical Machine Translation
In this paper, we report on a set of initial results for English-to-Arabic Statistical Machine Translation (SMT). We show that morphological decomposition of the Arabic source is beneficial, especially for smaller-size corpora, and investigate different recombination techniques. We also report on the use of Factored Translation Models for Englishto-Arabic translation.
false
[]
[]
null
null
null
We would like to thank Ali Mohammad, Michael Collins and Stephanie Seneff for their valuable comments.
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gaiffe-romary-1997-constraints
https://aclanthology.org/W97-1413
Constraints on the Use of Language, Gesture and Speech for Multimodal Dialogues
In the domain of natural language understanding and more precisely man-machine dialogue design, there are usually two trends of research which seem to be rather differentiated. On the one hand, many studies have tackled the problem of interpreting spatial references expressed in verbal utterances, focusing in particular on the different geometric or functional constraints which are bound to the existence of a "source" (or site) element in relation to which a "target" is being situated. Such studies are usually based upon fine-grained linguistic descriptions for different languages (Vandeloise, 1986). On the other hand, the problem raised by the integration of a gestural mode within classical NL interfaces has yielded some specific research about the association of demonstrative or deictic NPs together with designations, as initiated by Bolt some two decades ago (cf. Thorisson et al., 1992; Bellalem and Romary, 1995). Our aim in this paper is to show that the different phenomena described in the context of spatial reference or multimodal interaction should not necessarily be considered as two independent issues, but should rather be analysed in a unified way to account for the fact that they are both based on linguistic and perceptual data. As a matter of fact, if we consider a situation of man-machine dialogue where the user is presented with a graphical representation of his task, it is clear that, given a certain informational content he wants to convey, he will essentially choose a referring mode which seems most relevant in the current communicative situation. For example, if we consider a graphical situation such as that described in figure 1.1, he may either use the black triangle, this triangle (+ pointing gesture), or the leftmost triangle to refer to the leftmost object, and it would be quite annoying to consider these different expressions as corresponding to incomparable referring modes.
false
[]
[]
null
null
null
null
1997
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
feng-etal-2021-alpha
https://aclanthology.org/2021.semeval-1.8
Alpha at SemEval-2021 Task 6: Transformer Based Propaganda Classification
This paper describes our system that participated in Task 6 of SemEval-2021: this task focuses on multimodal propaganda technique classification and it aims to classify a given image and text into 22 classes. In this paper, we propose to use a transformer-based (Vaswani et al., 2017) architecture to fuse the clues from both image and text. We explore two branches of techniques including fine-tuning the text pre-trained transformer with extended visual features and fine-tuning the multimodal pre-trained transformers. For the visual features, we experiment with both grid features extracted from a ResNet (He et al., 2016) network and salient region features from a pretrained object detector. Among the pre-trained multimodal transformers, we choose ERNIE-ViL (Yu et al., 2020), a two-stream cross-attended transformer model pre-trained on large-scale image-caption aligned data. Fine-tuning ERNIE-ViL for our task produces a better performance due to the general joint multimodal representation for text and image learned by ERNIE-ViL. Besides, as the distribution of the classification labels is extremely unbalanced, we also make a further attempt on the loss function and the experiment results show that focal loss would perform better than cross-entropy loss. Lastly, we ranked first place at sub-task C in the final competition.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
pettersson-megyesi-2019-matching
https://aclanthology.org/W19-6126
Matching Keys and Encrypted Manuscripts
Historical cryptology is the study of historical encrypted messages aiming at their decryption by analyzing the mathematical, linguistic and other coding patterns and their historical context. In libraries and archives we can find quite a lot of ciphers, as well as keys describing the method used to transform the plaintext message into a ciphertext. In this paper, we present work on automatically mapping keys to ciphers to reconstruct the original plaintext message, and use language models generated from historical texts to guess the underlying plaintext language.
false
[]
[]
null
null
null
This work has been supported by the Swedish Research Council, grant 2018-06074: DECRYPT -Decryption of historical manuscripts.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bjorkelund-nugues-2011-exploring
https://aclanthology.org/W11-1905
Exploring Lexicalized Features for Coreference Resolution
In this paper, we describe a coreference solver based on the extensive use of lexical features and features extracted from dependency graphs of the sentences. The solver uses Soon et al. (2001)'s classical resolution algorithm based on a pairwise classification of the mentions. We applied this solver to the closed track of the CoNLL 2011 shared task (Pradhan et al., 2011). We carried out a systematic optimization of the feature set using cross-validation that led us to retain 24 features. Using this set, we reached a MUC score of 58.61 on the test set of the shared task. We analyzed the impact of the features on the development set and we show the importance of lexicalization as well as of properties related to dependency links in coreference resolution.
false
[]
[]
null
null
null
This research was supported by Vetenskapsrådet, the Swedish research council, under grant 621-2010-4800.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
richardson-etal-2001-overcoming
https://aclanthology.org/W01-1402
Overcoming the customization bottleneck using example-based MT
We describe MSR-MT, a large-scale hybrid machine translation system under development for several language pairs. This system's ability to acquire its primary translation knowledge automatically by parsing a bilingual corpus of hundreds of thousands of sentence pairs and aligning resulting logical forms demonstrates true promise for overcoming the so-called MT customization bottleneck. Trained on English and Spanish technical prose, a blind evaluation shows that MSR-MT's integration of rule-based parsers, example based processing, and statistical techniques produces translations whose quality exceeds that of uncustomized commercial MT systems in this domain.
false
[]
[]
null
null
null
We would like to acknowledge the efforts of the MSR NLP group in carrying out this work.
2001
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
manino-etal-2022-systematicity
https://aclanthology.org/2022.findings-acl.185
Systematicity, Compositionality and Transitivity of Deep NLP Models: a Metamorphic Testing Perspective
Metamorphic testing has recently been used to check the safety of neural NLP models. Its main advantage is that it does not rely on a ground truth to generate test cases. However, existing studies are mostly concerned with robustness-like metamorphic relations, limiting the scope of linguistic properties they can test. We propose three new classes of metamorphic relations, which address the properties of systematicity, compositionality and transitivity. Unlike robustness, our relations are defined over multiple source inputs, thus increasing the number of test cases that we can produce by a polynomial factor. With them, we test the internal consistency of state-of-the-art NLP models, and show that they do not always behave according to their expected linguistic properties. Lastly, we introduce a novel graphical notation that efficiently summarises the inner structure of metamorphic relations.
false
[]
[]
null
null
null
The work is funded by the EPSRC grant EP/T026995/1 entitled "EnnCore: End-to-End Conceptual Guarding of Neural Architectures" under Security for all in an AI enabled society.
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zoltan-1969-problems
https://aclanthology.org/C69-5901
Some Problems of Word-Formation Within the Framework of a Generative Grammar
Word-formation has not yet received due attention in generative grammars, probably because it is an interim problem between that of the more-or-less clearly established morpho-phonological possibilities and the problem of the lexicon, which has not yet been worked out (regarding word-formation see the productive attempts of Chomsky, Worth, Motsch, Volotskaia, Zimmer). My intention is to examine word-formation from a generative approach, i.e. to trace the possibilities of generating derivatives. I shall base my attempts on examples drawn from word-formation in Hungarian, a language excep
false
[]
[]
null
null
null
null
1969
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
evang-2020-configurable
https://aclanthology.org/2020.udw-1.10
Configurable Dependency Tree Extraction from CCG Derivations
We revisit the problem of extracting dependency structures from the derivation structures of Combinatory Categorial Grammar (CCG). Previous approaches are often restricted to a narrow subset of CCG or support only one flavor of dependency tree. Our approach is more general and easily configurable, so that multiple styles of dependency tree can be obtained. In an initial case study, we show promising results for converting English, German, Italian, and Dutch CCG derivations from the Parallel Meaning Bank into (unlabeled) UD-style dependency trees.
false
[]
[]
null
null
null
The author would like to thank the anonymous reviewers for helpful feedback. This research was carried out within the TreeGraSP project, funded by a Consolidator Grant of the European Research Council (ERC).
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kis-etal-2004-new
http://www.lrec-conf.org/proceedings/lrec2004/pdf/441.pdf
A New Approach to the Corpus-based Statistical Investigation of Hungarian Multi-word Lexemes
We apply statistical methods to perform automatic extraction of Hungarian collocations from corpora. Due to the complexity of Hungarian morphology, a complex resource preparation tool chain has been developed. This tool chain implements a reusable and, in principle, language independent framework. In the first part, the paper describes the tool chain itself, then, in the second part, an experiment using this framework. The experiment deals with the extraction of <verb+noun+casemark> patterns from the corpus as collocation candidates, in order to compare results to an experiment on Dutch V + PP patterns (Villada, 2004). Statistical processing on this dataset provided interesting observations, briefly explained in the evaluation section. We conclude by providing a summary of further steps required to improve the extraction process. This is not restricted to improvements in the resource preparation for statistical processing, but a proposal to use nonstatistical means as well, thus acquiring an efficient blend of different methods.
false
[]
[]
null
null
null
This work has been carried out in parallel with similar work on Dutch corpora by a joint Dutch-Hungarian research group supported by NWO-OTKA under grant number 048.011.040.
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
babych-etal-2007-dynamic
https://aclanthology.org/2007.tc-1.3
A dynamic dictionary for discovering indirect translation equivalents
We present the design and evaluation of a novel software application intended to help translators with rendering problematic expressions from the general lexicon. It does this dynamically by first generalising the problem expression in the source language and then searching for possible translations in a large comparable corpus. These candidate solutions are ranked and presented to the user. The method relies on measures of distributional similarity and on bilingual dictionaries. It outperforms established techniques for extracting translation equivalents from parallel corpora.
false
[]
[]
null
null
null
We would like to thank the professional translators who kindly participated in our evaluation trials. This work was supported by EPSRC grant EP/C005902/1 and was conducted jointly with Paul Rayson. Olga Moudraya and Scott Piao of Lancaster University InfoLab.
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gorrell-etal-2013-finding
https://aclanthology.org/W13-5102
Finding Negative Symptoms of Schizophrenia in Patient Records
This paper reports the automatic extraction of eleven negative symptoms of schizophrenia from patient medical records. The task offers a range of difficulties depending on the consistency and complexity with which mental health professionals describe each. In order to reduce the cost of system development, rapid prototypes are built with minimal adaptation and configuration of existing software, and additional training data is obtained by annotating automatically extracted symptoms for which the system has low confidence. The system was further improved by the addition of a manually engineered rule based approach. Rule-based and machine learning approaches are combined in various ways to achieve the optimal result for each symptom. Precisions in the range of 0.8 to 0.99 have been obtained.
true
[]
[]
Good Health and Well-Being
null
null
null
2013
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
akiba-etal-2008-statistical
https://aclanthology.org/I08-2104
Statistical Machine Translation based Passage Retrieval for Cross-Lingual Question Answering
In this paper, we propose a novel approach for Cross-Lingual Question Answering (CLQA). In the proposed method, the statistical machine translation (SMT) is deeply incorporated into the question answering process, instead of using it as the pre-processing of the mono-lingual QA process as in the previous work. The proposed method can be considered as exploiting the SMT-based passage retrieval for CLQA task. We applied our method to the English-to-Japanese CLQA system and evaluated the performance by using NTCIR CLQA 1 and 2 test collections. The result showed that the proposed method outperformed the previous pre-translation approach.
false
[]
[]
null
null
null
null
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
pasca-harabagiu-2001-answer
https://aclanthology.org/W01-1206
Answer Mining from On-Line Documents
Mining the answer of a natural language open-domain question in a large collection of on-line documents is made possible by the recognition of the expected answer type in relevant text passages. If the technology of retrieving texts where the answer might be found is well developed, few studies have been devoted to the recognition of the answer type. This paper presents a unified model of answer types for open-domain Question/Answering that enables the discovery of exact answers. The evaluation of the model, performed on real-world questions, considers both the correctness and the coverage of the answer types as well as their contribution to answer precision.
false
[]
[]
null
null
null
This research was supported in part by the Advanced Research and Development Activity (ARDA) grant 2001*H238400*000 and by the National Science Foundation CAREER grant CCR-9983600.
2001
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chang-etal-1992-statistical
https://aclanthology.org/C92-3139
A Statistical Approach to Machine Aided Translation of Terminology Banks
This paper reports on a new statistical approach to machine aided translation of terminology banks. The text in the bank is hyphenated and then dissected into roots of 1 to 3 syllables. Both hyphenation and dissection are done with a set of initial probabilities of syllables and roots. The probabilities are repeatedly revised using an EM algorithm. After each iteration of hyphenation or dissection, the resulting syllables and roots are counted subsequently to yield more precise estimation of probability. The set of roots rapidly converges to a set of most likely roots. Preliminary experiments have shown promising results. From a terminology bank of more than 4,000 terms, the algorithm extracts 223 general and chemical roots, of which 91% are actually roots. The algorithm dissects a word into roots with around 86% hit rate. The set of roots and their hand-translation are then used in a compositional translation of the terminology bank. One can expect the translation of a terminology bank using this approach to be more cost-effective, consistent, and with a better closure.
false
[]
[]
null
null
null
This research was supported by the National Science Council, Taiwan, under Contracts NSC 81-0408-E007-13 and -529.
1992
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
handler-oconnor-2019-query
https://aclanthology.org/D19-1612
Query-focused Sentence Compression in Linear Time
Search applications often display shortened sentences which must contain certain query terms and must fit within the space constraints of a user interface. This work introduces a new transition-based sentence compression technique developed for such settings. Our query-focused method constructs length and lexically constrained compressions in linear time, by growing a subgraph in the dependency parse of a sentence. This theoretically efficient approach achieves an 11x empirical speedup over baseline ILP methods, while better reconstructing gold constrained shortenings. Such speedups help query-focused applications, because users are measurably hindered by interface lags. Additionally, our technique does not require an ILP solver or a GPU.
false
[]
[]
null
null
null
Thanks to Javier Burroni and Nick Eubank for suggesting ways to optimize and measure performance of Python code. Thanks to Jeffrey Flanigan, Katie Keith and the UMass NLP reading group for feedback. This work was partially supported by IIS-1814955.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
viegas-etal-1998-computational-lexical
https://aclanthology.org/P98-2216
The Computational Lexical Semantics of Syntagmatic Expressions
In this paper, we address the issue of syntagmatic expressions from a computational lexical semantic perspective. From a representational viewpoint, we argue for a hybrid approach combining linguistic and conceptual paradigms, in order to account for the continuum we find in natural languages from free combining words to frozen expressions. In particular, we focus on the place of lexical and semantic restricted co-occurrences. From a processing viewpoint, we show how to generate/analyze syntagmatic expressions by using an efficient constraint-based processor, well fitted for a knowledge-driven approach.
false
[]
[]
null
null
null
This work has been supported in part by DoD under contract number MDA-904-92-C-5189. We would like to thank Pierrette Bouillon, Léo Wanner and Rémi Zajac for helpful discussions and the anonymous reviewers for their useful comments.
1998
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tanveer-ture-2018-syntaviz
https://aclanthology.org/D18-2001
SyntaViz: Visualizing Voice Queries through a Syntax-Driven Hierarchical Ontology
This paper describes SYNTAVIZ, a visualization interface specifically designed for analyzing natural-language queries that were created by users of a voice-enabled product. SYNTAVIZ provides a platform for browsing the ontology of user queries from a syntax-driven perspective, providing quick access to high-impact failure points of the existing intent understanding system and evidence for data-driven decisions in the development cycle. A case study on Xfinity X1 (a voice-enabled entertainment platform from Comcast) reveals that SYNTAVIZ helps developers identify multiple action items in a short amount of time without any special training. SYNTAVIZ has been open-sourced for the benefit of the community.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
pajas-stepanek-2009-system
https://aclanthology.org/P09-4009
System for Querying Syntactically Annotated Corpora
This paper presents a system for querying treebanks. The system consists of a powerful query language with natural support for cross-layer queries, a client interface with a graphical query builder and visualizer of the results, a command-line client interface, and two substitutable query engines: a very efficient engine using a relational database (suitable for large static data), and a slower, but parallel-computing enabled, engine operating on treebank files (suitable for "live" data).
false
[]
[]
null
null
null
This paper as well as the development of the system is supported by the grant Information Society of GA AVČR under contract 1ET101120503 and by the grant GAUK No. 22908.
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
shwartz-etal-2017-hypernyms
https://aclanthology.org/E17-1007
Hypernyms under Siege: Linguistically-motivated Artillery for Hypernymy Detection
The fundamental role of hypernymy in NLP has motivated the development of many methods for the automatic identification of this relation, most of which rely on word distribution. We investigate an extensive number of such unsupervised measures, using several distributional semantic models that differ by context type and feature weighting. We analyze the performance of the different methods based on their linguistic motivation. Comparison to the state-of-the-art supervised methods shows that while supervised methods generally outperform the unsupervised ones, the former are sensitive to the distribution of training instances, hurting their reliability. Being based on general linguistic hypotheses and independent from training data, unsupervised measures are more robust, and therefore are still useful artillery for hypernymy detection.
false
[]
[]
null
null
null
The authors would like to thank Ido Dagan, Alessandro Lenci, and Yuji Matsumoto for their help and advice. Vered Shwartz is partially supported by an Intel ICRI-CI grant, the Israel Science Foundation grant 880/12, and the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1). Enrico Santus is partially supported by HK PhD Fellowship Scheme under PF12-13656.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mendes-etal-2010-named
http://www.lrec-conf.org/proceedings/lrec2010/pdf/97_Paper.pdf
Named Entity Recognition in Questions: Towards a Golden Collection
Named Entity Recognition (NER) plays a relevant role in several Natural Language Processing tasks. Question-Answering (QA) is an example of such, since answers are frequently named entities in agreement with the semantic category expected by a given question. In this context, the recognition of named entities is usually applied in free text data. NER in natural language questions can also aid QA and, thus, should not be disregarded. Nevertheless, it has not yet been given the necessary importance. In this paper, we approach the identification and classification of named entities in natural language questions. We hypothesize that NER results can benefit with the inclusion of previously labeled questions in the training corpus. We present a broad study addressing that hypothesis and focusing, among others, on the balance to be achieved between the amount of free text and questions in order to build a suitable training corpus. This work also contributes by providing a set of nearly 5,500 annotated questions with their named entities, freely available for research purposes.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
li-etal-2010-transferring
https://aclanthology.org/2010.amta-papers.26
Transferring Syntactic Relations of Subject-Verb-Object Pattern in Chinese-to-Korean SMT
Since most Korean postpositions signal grammatical functions such as syntactic relations, generation of incorrect Korean postpositions results in producing ungrammatical outputs in machine translations targeting Korean. Chinese and Korean belong to morphosyntactically divergent language pairs, and usually Korean postpositions do not have their counterparts in Chinese. In this paper, we propose a preprocessing method for a statistical MT system that generates more adequate Korean postpositions. We transfer syntactic relations of subject-verb-object patterns in Chinese sentences and enrich them with transferred syntactic relations in order to reduce the morpho-syntactic differences. The effectiveness of our proposed method is measured with lexical units of various granularities. Human evaluation also suggests improvements over previous methods, which are consistent with the result of the automatic evaluation.
false
[]
[]
null
null
null
This work is supported in part by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (MEST) (2009-0075211), in part by the BK 21 project in 2010, and in part by the POSTECH Information Research Laboratories (PIRL) project.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhang-etal-2021-textoir
https://aclanthology.org/2021.acl-demo.20
TEXTOIR: An Integrated and Visualized Platform for Text Open Intent Recognition
TEXTOIR is the first integrated and visualized platform for text open intent recognition. It is composed of two main modules: open intent detection and open intent discovery. Each module integrates most of the state-of-the-art algorithms and benchmark intent datasets. It also contains an overall framework connecting the two modules in a pipeline scheme. In addition, this platform has visualized tools for data and model management, training, evaluation and analysis of the performance from different aspects. TEXTOIR provides useful toolkits and convenient visualized interfaces for each sub-module, and designs a framework to implement a complete process to both identify known intents and discover open intents.
false
[]
[]
null
null
null
This work is founded by National Key R&D Program Projects of China (Grant No: 2018YFC1707605). This work is also supported by seed fund of Tsinghua University (Department of Computer Science and Technology)-Siemens Ltd., China Joint Research Center for Industrial Intelligence and Internet of Things. We would like to thank the help from Xin Wang and Huisheng Mao, and constructive feedback from Ting-En Lin on this work.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yang-etal-2020-streaming
https://aclanthology.org/2020.emnlp-main.366
A Streaming Approach For Efficient Batched Beam Search
We propose an efficient batching strategy for variable-length decoding on GPU architectures. During decoding, when candidates terminate or are pruned according to heuristics, our streaming approach periodically "refills" the batch before proceeding with a selected subset of candidates. We apply our method to variable-width beam search on a state-of-the-art machine translation model. Our method decreases runtime by up to 71% compared to a fixed-width beam search baseline and 17% compared to a variable-width baseline, while matching baselines' BLEU. Finally, experiments show that our method can speed up decoding in other domains, such as semantic and syntactic parsing.
false
[]
[]
null
null
null
We thank Steven Cao, Daniel Fried, Nikita Kitaev, Kevin Lin, Mitchell Stern, Kyle Swanson, Ruiqi Zhong, and the three anonymous reviewers for their helpful comments and feedback, which helped us to greatly improve the paper. This work was supported by Berkeley AI Research, DARPA through the Learning with Less Labeling (LwLL) grant, and the NSF through a fellowship to the first author.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yeung-kartsaklis-2021-ccg
https://aclanthology.org/2021.semspace-1.3
A CCG-Based Version of the DisCoCat Framework
While the DisCoCat model (Coecke et al., 2010) has been proved a valuable tool for studying compositional aspects of language at the level of semantics, its strong dependency on pregroup grammars poses important restrictions: first, it prevents large-scale experimentation due to the absence of a pregroup parser; and second, it limits the expressibility of the model to context-free grammars. In this paper we solve these problems by reformulating DisCoCat as a passage from Combinatory Categorial Grammar (CCG) to a category of semantics. We start by showing that standard categorial grammars can be expressed as a biclosed category, where all rules emerge as currying/uncurrying the identity; we then proceed to model permutation-inducing rules by exploiting the symmetry of the compact closed category encoding the word meaning. We provide a proof of concept for our method, converting "Alice in Wonderland" into DisCoCat form, a corpus that we make available to the community.
false
[]
[]
null
null
null
We would like to thank the anonymous reviewers for their useful comments. We are grateful to Steve Clark for his comments on CCG and the useful discussions on the generative power of the formalism. The paper has also greatly benefited from discussions with Alexis Toumi, Vincent Wang, Ian Fan, Harny Wang, Giovanni de Felice, Will Simmons, Konstantinos Meichanetzidis and Bob Coecke, who all have our sincere thanks.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
pericliev-1984-handling
https://aclanthology.org/P84-1111
Handling Syntactical Ambiguity in Machine Translation
The difficulties to be met with the resolution of syntactical ambiguity in MT can be at least partially overcome by means of preserving the syntactical ambiguity of the source language into the target language. An extensive study of the correspondences between the syntactically ambiguous structures in English and Bulgarian has provided a solid empirical basis in favor of such an approach. Similar results could be expected for other sufficiently related languages as well. The paper concentrates on the linguistic grounds for adopting the approach proposed.
false
[]
[]
null
null
null
null
1984
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wiren-1987-comparison
https://aclanthology.org/E87-1037
A Comparison of Rule-Invocation Strategies in Context-Free Chart Parsing
Currently several grammatical formalisms converge towards being declarative and towards utilizing context-free phrase-structure grammar as a backbone, e.g. LFG and PATR-II. Typically the processing of these formalisms is organized within a chart-parsing framework. The declarative character of the formalisms makes it important to decide upon an overall optimal control strategy on the part of the processor. In particular, this brings the rule-invocation strategy into critical focus: to gain maximal processing efficiency, one has to determine the best way of putting the rules to use. The aim of this paper is to provide a survey and a practical comparison of fundamental rule-invocation strategies within context-free chart parsing.
false
[]
[]
null
null
null
I would like to thank Lars Ahrenberg, Nils Dahlbäck, Arne Jönsson, Magnus Merkel, Ivan Rankin, and an anonymous referee for the very helpful comments they have made on various drafts of this paper. In addition I am indebted to Masaru Tomita for providing me with his test grammars and sentences, and to Martin Kay for comments in connection with my presentation.
1987
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
weller-di-marco-fraser-2020-modeling
https://aclanthology.org/2020.acl-main.389
Modeling Word Formation in English--German Neural Machine Translation
This paper studies strategies to model word formation in NMT using rich linguistic information, namely a word segmentation approach that goes beyond splitting into substrings by considering fusional morphology. Our linguistically sound segmentation is combined with a method for target-side inflection to accommodate modeling word formation. The best system variants employ source-side morphological analysis and model complex target-side words, improving over a standard system.
false
[]
[]
null
null
null
This research was partially funded by LMU Munich's Institutional Strategy LMUexcellent within the framework of the German Excellence Initiative. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement № 640550). This work was supported by the Dutch Organization for Scientific Research (NWO) VICI Grant nr. 277-89-002.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dai-etal-2021-ultra
https://aclanthology.org/2021.acl-long.141
Ultra-Fine Entity Typing with Weak Supervision from a Masked Language Model
Recently, there is an effort to extend fine-grained entity typing by using a richer and ultra-fine set of types, and labeling noun phrases including pronouns and nominal nouns instead of just named entity mentions. A key challenge for this ultra-fine entity typing task is that human annotated data are extremely scarce, and the annotation ability of existing distant or weak supervision approaches is very limited. To remedy this problem, in this paper, we propose to obtain training data for ultra-fine entity typing by using a BERT Masked Language Model (MLM). Given a mention in a sentence, our approach constructs an input for the BERT MLM so that it predicts context-dependent hypernyms of the mention, which can be used as type labels. Experimental results demonstrate that, with the help of these automatically generated labels, the performance of an ultra-fine entity typing model can be improved substantially. We also show that our approach can be applied to improve traditional fine-grained entity typing after performing simple type mapping.
false
[]
[]
null
null
null
This paper was supported by the NSFC Grant (No. U20B2053) from China, the Early Career Scheme (ECS, No. 26206717), the General Research Fund (GRF, No. 16211520), and the Research Impact Fund (RIF) from the Research Grants Council (RGC) of Hong Kong, with special thanks to the WeChat-HKUST WHAT Lab on Artificial Intelligence Technology.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
weller-heid-2012-analyzing
http://www.lrec-conf.org/proceedings/lrec2012/pdf/817_Paper.pdf
Analyzing and Aligning German compound nouns
In this paper, we present and evaluate an approach for the compositional alignment of compound nouns using comparable corpora from technical domains. The task of term alignment consists in relating a source language term to its translation in a list of target language terms with the help of a bilingual dictionary. Compound splitting allows us to transform a compound into a sequence of components which can be translated separately and then related to multi-word target language terms. We present and evaluate a method for compound splitting, and compare two strategies for term alignment (bag-of-word vs. pattern-based). The simple word-based approach leads to a considerable number of erroneous alignments, whereas the pattern-based approach reaches a decent precision. We also assess the reasons for alignment failures: in the comparable corpora used for our experiments, a substantial number of terms have no translation in the target language data; furthermore, the non-isomorphic structures of source and target language terms cause alignment failures in many cases.
false
[]
[]
null
null
null
The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n. 248005.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
poesio-vieira-1998-corpus
https://aclanthology.org/J98-2001
A Corpus-based Investigation of Definite Description Use
We present the results of a study of the use of definite descriptions in written texts aimed at assessing the feasibility of annotating corpora with information about definite description interpretation. We ran two experiments, in which subjects were asked to classify the uses of definite descriptions in a corpus of 33 newspaper articles, containing a total of 1,412 definite descriptions. We measured the agreement among annotators about the classes assigned to definite descriptions, as well as the agreement about the antecedent assigned to those definites that the annotators classified as being related to an antecedent in the text. The most interesting result of this study from a corpus annotation perspective was the rather low agreement (K = 0.63) that we obtained using versions of Hawkins's and Prince's classification schemes; better results (K = 0.76) were obtained using the simplified scheme proposed by Fraurud that includes only two classes, first-mention and subsequent-mention. The agreement about antecedents was also not complete. These findings raise questions concerning the strategy of evaluating systems for definite description interpretation by comparing their results with a standardized annotation. From a linguistic point of view, the most interesting observations were the great number of discourse-new definites in our corpus (in one of our experiments, about 50% of the definites in the collection were classified as discourse-new, 30% as anaphoric, and 18% as associative/bridging) and the presence of definites that did not seem to require a complete disambiguation.
false
[]
[]
null
null
null
We wish to thank Jean Carletta for much help both with designing the experiments and with the analysis of the results. We are also grateful to Ellen Bard, Robin Cooper, Kari Fraurud, Janet Hitzeman, Kjetil Strand, and our anonymous reviewers for many helpful comments. Massimo Poesio holds an Advanced Research Fellowship from EPSRC, UK; Renata Vieira is supported by a fellowship from CNPq, Brazil.
1998
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
seonwoo-etal-2021-weakly
https://aclanthology.org/2021.findings-acl.62
Weakly Supervised Pre-Training for Multi-Hop Retriever
In multi-hop QA, answering complex questions entails iterative document retrieval for finding the missing entity of the question. The main steps of this process are sub-question detection, document retrieval for the sub-question, and generation of a new query for the final document retrieval. However, building a dataset that contains complex questions with sub-questions and their corresponding documents requires costly human annotation. To address the issue, we propose a new method for weakly supervised multi-hop retriever pre-training without human effort. Our method includes 1) a pre-training task for generating vector representations of complex questions, 2) a scalable data generation method that produces the nested structure of question and sub-question as weak supervision for pre-training, and 3) a pre-training model structure based on dense encoders. We conduct experiments to compare the performance of our pre-trained retriever with several state-of-the-art models on end-to-end multi-hop QA as well as document retrieval. The experimental results show that our pre-trained retriever is effective and also robust on limited data and computational resources.
false
[]
[]
null
null
null
This work was partly supported by NAVER Corp. and the Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (No. 2017-0-01780, The technology development for event recognition/relational reasoning and learning knowledge based system for video understanding).
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
barnes-etal-2016-exploring
https://aclanthology.org/C16-1152
Exploring Distributional Representations and Machine Translation for Aspect-based Cross-lingual Sentiment Classification.
Cross-lingual sentiment classification (CLSC) seeks to use resources from a source language in order to detect sentiment and classify text in a target language. Almost all research into CLSC has been carried out at sentence and document level, although this level of granularity is often less useful. This paper explores methods for performing aspect-based cross-lingual sentiment classification (aspect-based CLSC) for under-resourced languages. Given the limited nature of parallel data for under-resourced languages, we would like to make the most of this resource for our task. We compare zero-shot learning, bilingual word embeddings, stacked denoising autoencoder representations and machine translation techniques for aspect-based CLSC. We show that models based on distributed semantics can achieve comparable results to machine translation on aspect-based CLSC. Finally, we give an analysis of the errors found for each method.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
van-kuppevelt-1993-intentionality
https://aclanthology.org/W93-0236
Intentionality in a Topical Approach of Discourse Structure
Position paper. The alternative to be outlined provides a proposal to solve a central problem in research on discourse structure and discourse coherence, namely, as pointed out by many authors, that of the relationship between linguistic and intentional structure, or, in other words, between subject matter and presentational relations (Mann and Thompson 1988) or informational and intentional relations (Moore and Pollack 1992). As is argued for in Van Kuppevelt (1993), this alternative not only implies uniformity on the structural levels involved, i.e. the linguistic and intentional level, but also on the level of attentional states (Grosz and Sidner 1986). The latter is ruled by the dynamics of topic constitution and topic termination, determining which discourse units are in focus of attention during the development of the discourse. We will see that both linguistic relations and intentions are defined in a uniform way by topic-forming questions in discourse, thereby automatically satisfying the need for a multi-level analysis as is argued for in Moore and Paris (1992), and as is signalled by Dale (this volume), avoiding differences in discourse segmentation between RST analyses and intentional approaches. The central hypothesis underlying this alternative is that the structural coherence in discourse is governed by the discourse-internal process of questioning, consisting of the contextual induction of explicit and/or implicit topic-forming questions.
This process gives rise to the phenomenon that the organization of discourse segments (as well as the associated isomorphic structure of intentions) agrees with the internal topic-comment structure, and that in the following specific way: (i) every discourse unit u(D)Tp has associated with it a topic Tp (or a discourse topic DTp) which is provided by the (set of) topic-forming question(s) Qp that uTp has answered, and (ii) the relation between discourse units u(D)Ti is determined by the relation between the topic-forming questions Qi answered by these discourse units u(D)Ti. Topics are thus context-dependently characterized in terms of questions arising from the preceding discourse. As is elaborated upon in Van Kuppevelt (1991/92), every contextually induced explicit or implicit (sub)question Qp that is answered in discourse constitutes a (sub)topic Tp. Tp is that which is questioned; an undetermined set of (possibly non-existent) discourse entities (or a set of ordered n-tuples of such entities in the case of an n-fold question) which needs further
false
[]
[]
null
null
null
null
1993
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
liu-etal-2021-improving-factual
https://aclanthology.org/2021.ecnlp-1.19
Improving Factual Consistency of Abstractive Summarization on Customer Feedback
E-commerce stores collect customer feedback to let sellers learn about customer concerns and enhance customer order experience. Because customer feedback often contains redundant information, a concise summary of the feedback can be generated to help sellers better understand the issues causing customer dissatisfaction. Previous state-of-the-art abstractive text summarization models make two major types of factual errors when producing summaries from customer feedback, which are wrong entity detection (WED) and incorrect product-defect description (IPD). In this work, we introduce a set of methods to enhance the factual consistency of abstractive summarization on customer feedback. We augment the training data with artificially corrupted summaries, and use them as counterparts of the target summaries. We add a contrastive loss term into the training objective so that the model learns to avoid certain factual errors. Evaluation results show that a large portion of WED and IPD errors are alleviated for BART and T5. Furthermore, our approaches do not depend on the structure of the summarization model and thus are generalizable to any abstractive summarization systems.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
abnar-etal-2018-experiential
https://aclanthology.org/W18-0107
Experiential, Distributional and Dependency-based Word Embeddings have Complementary Roles in Decoding Brain Activity
We evaluate 8 different word embedding models on their usefulness for predicting the neural activation patterns associated with concrete nouns. The models we consider include an experiential model, based on crowd-sourced association data, several popular neural and distributional models, and a model that reflects the syntactic context of words (based on dependency parses). Our goal is to assess the cognitive plausibility of these various embedding models, and understand how we can further improve our methods for interpreting brain imaging data. We show that neural word embedding models exhibit superior performance on the tasks we consider, beating the experiential word representation model. The syntactically informed model gives the overall best performance when predicting brain activation patterns from word embeddings; whereas the GloVe distributional method gives the overall best performance when predicting in the reverse direction (word vectors from brain images). Interestingly, however, the error patterns of these different models are markedly different. This may support the idea that the brain uses different systems for processing different kinds of words. Moreover, we suggest that taking the relative strengths of different embedding models into account will lead to better models of the brain activity associated with words.
true
[]
[]
Good Health and Well-Being
null
null
null
2018
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
buechel-etal-2019-time
https://aclanthology.org/D19-5103
A Time Series Analysis of Emotional Loading in Central Bank Statements
We examine the affective content of central bank press statements using emotion analysis. Our focus is on two major international players, the European Central Bank (ECB) and the US Federal Reserve Bank (Fed), covering a time span from 1998 through 2019. We reveal characteristic patterns in the emotional dimensions of valence, arousal, and dominance and find, despite the commonly established attitude that emotional wording in central bank communication should be avoided, a correlation between the state of the economy and particularly the dominance dimension in the press releases under scrutiny and, overall, an impact of the president in office.
false
[]
[]
null
null
null
We would like to thank the anonymous reviewers for their detailed and constructive comments.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
le-nagard-koehn-2010-aiding
https://aclanthology.org/W10-1737
Aiding Pronoun Translation with Co-Reference Resolution
We propose a method to improve the translation of pronouns by resolving their coreference to prior mentions. We report results using two different co-reference resolution methods and point to remaining challenges.
false
[]
[]
null
null
null
This work was supported by the EuroMatrixPlus project funded by the European Commission (7th Framework Programme).
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chen-etal-2019-expanding
https://aclanthology.org/W19-7415
Expanding English and Chinese Dictionaries by Wikipedia Titles
This paper introduces our preliminary work in dictionary expansion by adding English and Chinese Wikipedia titles along with their linguistic features. Parts-of-speech of Chinese titles are determined by the majority of heads of their Wikipedia categories. Proper noun detection in English Wikipedia is done by checking the capitalization of the titles in the content of the articles. Title alternatives will be detected beforehand. Chinese proper noun detection is done via interlanguage links and POS. The estimated accuracy of POS determination is 71.67% and the accuracy of proper noun detection is about 83.32%.
false
[]
[]
null
null
null
This research was funded by the Taiwan Ministry of Science and Technology (grant: MOST 106-2221-E-019-072.)
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bashier-etal-2021-disk
https://aclanthology.org/2021.eacl-main.263
DISK-CSV: Distilling Interpretable Semantic Knowledge with a Class Semantic Vector
Neural networks (NN) applied to natural language processing (NLP) are becoming deeper and more complex, making them increasingly difficult to understand and interpret. Even in applications of limited scope on fixed data, the creation of these complex "black-boxes" creates substantial challenges for debugging, understanding, and generalization. But rapid development in this field has now led to building more straightforward and interpretable models. We propose a new technique (DISK-CSV) to distill knowledge concurrently from any neural network architecture for text classification, captured as a lightweight interpretable/explainable classifier. Across multiple datasets, our approach achieves better performance than the target black-box. In addition, our approach provides better explanations than existing techniques.
false
[]
[]
null
null
null
We acknowledge support from the Alberta Machine Intelligence Institute (AMII), from the Computing Science Department of the University of Alberta, and the Natural Sciences and Engineering Research Council of Canada (NSERC).
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
belyaev-etal-2021-digitizing
https://aclanthology.org/2021.iwclul-1.7
Digitizing print dictionaries using TEI: The Abaev Dictionary Project
We present the results of a year-long effort to create an electronic version of V. I. Abaev's Historical-etymological dictionary of Ossetic. The aim of the project is twofold: first, to create an English translation of the dictionary; second, to provide it (in both its Russian and English version) with a semantic markup that would make it searchable across multiple types of data and accessible for machine-based processing. Volume 1, whose preliminary version was completed in 2020, used the TshwaneLex (TLex) platform, which is perfectly adequate for dictionaries with a low to medium level of complexity, and which allows for almost WYSIWYG formatting and simple export into a publishable format. However, due to a number of limitations of TLex, it was necessary to transition to a more flexible and more powerful format. We settled on the Text Encoding Initiative, an XML-based format for the computational representation of published texts, used in a number of digital humanities projects. Using TEI also allowed the project to transition from the proprietary, closed system of TLex to the full range of tools available for XML and related technologies. We discuss the challenges that are faced by such large-scale dictionary projects, and the practices that we have adopted in order to avoid common pitfalls.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dybkjaer-dybkjaer-2006-act
http://www.lrec-conf.org/proceedings/lrec2006/pdf/471_pdf.pdf
Act-Topic Patterns for Automatically Checking Dialogue Models
When dialogue models are evaluated today, this is normally done by using some evaluation method to collect data, often involving users interacting with the system model, and then subsequently analysing the collected data. We present a tool called DialogDesigner that enables automatic evaluation performed directly on the dialogue model and that does not require any data collection first. DialogDesigner is a tool in support of rapid design and evaluation of dialogue models. The first version was developed in 2005 and enabled developers to create an electronic dialogue model, get various graphical views of the model, run a Wizard-of-Oz (WOZ) simulation session, and extract different presentations in HTML. The second version includes extensions in terms of support for automatic dialogue model evaluation. Various aspects of dialogue model well-formedness can be automatically checked. Some of the automatic analyses simply perform checks based on the state and transition structure of the dialogue model while the core part are based on act-topic annotation of prompts and transitions in the dialogue model and specification of act-topic patterns. This paper focuses on the version 2 extensions.
false
[]
[]
null
null
null
null
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhou-etal-2016-evaluating
https://aclanthology.org/L16-1104
Evaluating a Deterministic Shift-Reduce Neural Parser for Constituent Parsing
Greedy transition-based parsers are appealing for their very fast speed, with reasonably high accuracies. In this paper, we build a fast shift-reduce neural constituent parser by using a neural network to make local decisions. One challenge to the parsing speed is the large hidden and output layer sizes caused by the number of constituent labels and branching options. We speed up the parser by using a hierarchical output layer, inspired by the hierarchical log-bilinear neural language model. In standard WSJ experiments, the neural parser achieves an almost 2.4-times speed-up (320 sen/sec) compared to a non-hierarchical baseline without significant accuracy loss (89.06 vs 89.13 F-score).
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
krahmer-van-der-sluis-2003-new
https://aclanthology.org/W03-2307
A New Model for Generating Multimodal Referring Expressions
null
false
[]
[]
null
null
null
null
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yang-etal-2014-towards
https://aclanthology.org/W14-4104
Towards Identifying the Resolvability of Threads in MOOCs
One important function of the discussion forums of Massive Open Online Courses (MOOCs) is for students to post problems they are unable to resolve and receive help from their peers and instructors. There are a large proportion of threads that are not resolved to the satisfaction of the students for various reasons. In this paper, we attack this problem by firstly constructing a conceptual model validated using a Structural Equation Modeling technique, which enables us to understand the factors that influence whether a problem thread is satisfactorily resolved. We then demonstrate the robustness of these findings using a predictive model that illustrates how accurately those factors can be used to predict whether a thread is resolved or unresolved. Experiments conducted on one MOOC show that thread resolvability connects closely to our proposed five dimensions and that the predictive ensemble model gives better performance over several baselines.
true
[]
[]
Quality Education
null
null
This research was funded in part by NSF grants IIS-1320064 and OMA-0836012 and funding from Google.
2014
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
li-nenkova-2014-reducing
https://aclanthology.org/W14-4327
Reducing Sparsity Improves the Recognition of Implicit Discourse Relations
The earliest work on automatic detection of implicit discourse relations relied on lexical features. More recently, researchers have demonstrated that syntactic features are superior to lexical features for the task. In this paper we reexamine the two classes of state of the art representations: syntactic production rules and word pair features. In particular, we focus on the need to reduce sparsity in instance representation, demonstrating that different representation choices even for the same class of features may exacerbate sparsity issues and reduce performance. We present results that clearly reveal that lexicalization of the syntactic features is necessary for good performance. We introduce a novel, less sparse, syntactic representation which leads to improvement in discourse relation recognition. Finally, we demonstrate that classifiers trained on different representations, especially lexical ones, behave rather differently and thus could likely be combined in future systems.
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false